A critical pre-authentication SQL injection vulnerability in BerriAI's LiteLLM Python package came under active exploitation within 36 hours of public disclosure, allowing unauthenticated attackers to extract sensitive credentials from the widely used open-source artificial intelligence gateway.
The vulnerability, tracked as CVE-2026-42208 with a CVSS score of 9.3, enables unauthorized database access that could compromise cloud provider credentials tied to five-figure monthly spending limits, according to security researchers at Sysdig, who detected the first exploitation attempt on April 26, 2026 at 16:17 UTC, following the GitHub advisory's indexing on April 24.
The SQL injection flaw resides in LiteLLM's proxy API key verification process, where a database query interpolates the caller-supplied key value directly into the query text instead of passing it as a bound parameter.
An unauthenticated attacker can send a specially crafted Authorization header to any large language model API route, such as POST /chat/completions, and reach the vulnerable query through the proxy's error-handling path, allowing them to read and potentially modify data in the proxy's database.
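The flaw class described above can be sketched in a few lines. This is an illustrative example only, using a hypothetical table schema and function names rather than LiteLLM's actual code: building SQL by string interpolation lets a crafted key value rewrite the query, while a bound parameter is never parsed as SQL.

```python
# Illustrative sketch of the vulnerability class (hypothetical schema and
# names, NOT LiteLLM's actual code).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (token TEXT, alias TEXT)")
conn.execute("INSERT INTO keys VALUES ('sk-secret-1', 'prod')")

def verify_key_vulnerable(token: str):
    # VULNERABLE: the attacker-controlled token becomes part of the SQL text.
    query = f"SELECT alias FROM keys WHERE token = '{token}'"
    return conn.execute(query).fetchall()

def verify_key_safe(token: str):
    # SAFE: the token is passed as a bound parameter, never parsed as SQL.
    return conn.execute(
        "SELECT alias FROM keys WHERE token = ?", (token,)
    ).fetchall()

# A classic tautology payload, e.g. supplied via an Authorization header:
payload = "' OR '1'='1"
print(verify_key_vulnerable(payload))  # returns every row in the table
print(verify_key_safe(payload))        # returns no rows
```

The fix BerriAI shipped follows the second pattern: the query text is fixed and the untrusted value travels separately as a parameter.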
LiteLLM serves as a centralized proxy for major language model providers including OpenAI, Anthropic and AWS Bedrock, managing API routing and billing while storing high-value secrets including master API keys and enterprise cloud credentials. The platform has over 22,000 GitHub stars and is widely deployed by organizations building LLM applications and platforms managing multiple models across different providers.
Security researchers observed targeted exploitation attempts originating from IP address 65.111.27[.]132 that demonstrated detailed knowledge of LiteLLM's internal structure. The unidentified threat actor specifically targeted database tables including litellm_credentials.credential_values and litellm_config, which hold upstream large language model provider keys and proxy runtime environment information.
BerriAI released version 1.83.7-stable on April 19, 2026, addressing the vulnerability by replacing string interpolation with parameterized queries. Organizations unable to patch immediately can apply a workaround by setting disable_error_logs: true under the general_settings configuration.
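The temporary workaround amounts to a one-line change in the proxy's configuration file (a minimal sketch of a LiteLLM config.yaml; only the general_settings key and flag come from the advisory, the rest of the file is assumed):

```yaml
general_settings:
  # Workaround from the advisory: prevents the error-logging path
  # that reaches the vulnerable query from writing to the database.
  disable_error_logs: true
```

Note this only blunts the exploit path; it does not fix the underlying injection, so upgrading remains the real remediation.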
However, LiteLLM maintainers strongly recommend upgrading to the latest version and treating any internet-facing instance running vulnerable versions during the exposure window as compromised, requiring rotation of all virtual API keys, master keys and provider credentials.
