Last updated: May 13, 2026 at 5:42 AM UTC
Tag: sysdig (2 articles)

Hackers raced to exploit a critical LiteLLM flaw 36 hours after disclosure - any attacker who could reach the proxy could read all stored AI API keys (CVE-2026-42208)

LiteLLM, the popular open-source gateway used to centralize API access for OpenAI, Anthropic, and other AI providers, has a critical pre-authentication SQL injection bug that attackers started exploiting just 36 hours after the security advisory went public. The flaw lets anyone who can reach the proxy port read all the API keys stored inside - including master keys, virtual keys, and provider credentials. The bug was in the bearer-token check: the token was concatenated into a SQL query instead of passed as a parameter. Sysdig saw the first attack at 04:24 UTC on April 26, hitting three tables that hold the most valuable secrets.
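The concatenation-versus-parameterization distinction is easiest to see in code. The sketch below is illustrative only - it uses sqlite3 and a hypothetical verification_tokens table, not LiteLLM's actual query layer - but it shows why splicing the raw bearer token into the SQL text lets an unauthenticated request rewrite the query, while a bound parameter makes injection payloads match literally and fail.

```python
import sqlite3

def lookup_key_vulnerable(db: sqlite3.Connection, bearer_token: str):
    # Anti-pattern: the attacker-controlled token becomes part of the SQL text,
    # so a payload like "' UNION SELECT ... --" changes the query itself and can
    # pull rows from other tables before any authentication decision is made.
    query = f"SELECT * FROM verification_tokens WHERE token = '{bearer_token}'"
    return db.execute(query).fetchone()

def lookup_key_safe(db: sqlite3.Connection, bearer_token: str):
    # Parameterized form: the token is passed as bound data, never as SQL text,
    # so an injection payload is compared literally and simply fails to match.
    query = "SELECT * FROM verification_tokens WHERE token = ?"
    return db.execute(query, (bearer_token,)).fetchone()
```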

Check
If you run any internet-facing LiteLLM proxy, patch to v1.83.7-stable today and treat every API key, virtual key, and stored provider credential as compromised.
Affected
LiteLLM versions 1.81.16 through 1.83.6, internet-reachable on the default proxy port. CVE-2026-42208, CVSS 9.3, pre-auth SQL injection. The blast radius is closer to a full cloud-account compromise than to a typical web-app bug, because LiteLLM holds OpenAI, Anthropic, and AWS Bedrock credentials.
Fix
Patch to LiteLLM v1.83.7-stable. If you can't upgrade, set 'disable_error_logs: true' under 'general_settings' as a workaround. Rotate every virtual key, master key, and upstream provider credential. Audit upstream provider billing for unexpected API calls since April 24. Block traffic from 65.111.27.132 and 65.111.25.67 (AS200373).
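For reference, the interim mitigation expressed as a config snippet, assuming the standard LiteLLM proxy YAML layout implied by the advisory's wording; treat it as a stopgap only, and upgrade and rotate keys regardless.

```yaml
# Interim mitigation only (layout assumed from the setting names above);
# the durable fix is upgrading to v1.83.7-stable and rotating every key.
general_settings:
  disable_error_logs: true
```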

LMDeploy LLM-serving SSRF (CVE-2026-33626) exploited within 13 hours of disclosure - attackers used the vision-language image loader as a generic port-scanner against AWS metadata, Redis, and MySQL

Sysdig observed the first in-the-wild exploitation of CVE-2026-33626 against its honeypot fleet 12 hours and 31 minutes after the GitHub advisory went live on April 21. LMDeploy is Shanghai AI Laboratory's open-source toolkit for serving vision-language and text LLMs. The flaw is in load_image() in lmdeploy/vl/utils.py: it fetches arbitrary URLs from the image_url field without validating link-local, loopback, or RFC1918 ranges. CVSS 7.5. Over an eight-minute session, the attacker used LMDeploy as a generic SSRF primitive, port-scanning the AWS IMDS, localhost Redis, MySQL, and an admin interface. The fix ships in v0.12.3.
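The missing check is the usual SSRF guard: resolve the hostname and refuse loopback, link-local (which covers 169.254.169.254), and private ranges before fetching. The sketch below is a generic version of that guard, not the actual LMDeploy patch; the function name and use of requests are assumptions for illustration.

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests  # assumed HTTP client; the real fetch path may differ

def fetch_image_url(url: str, timeout: float = 5.0) -> bytes:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError("only http(s) URLs with a hostname are allowed")

    # Resolve every address the hostname maps to and reject internal ranges -
    # exactly the ranges that turn an image loader into an SSRF/port-scan primitive.
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    for info in socket.getaddrinfo(parsed.hostname, port):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_loopback or addr.is_link_local or addr.is_private or addr.is_reserved:
            raise ValueError(f"refusing to fetch internal address {addr}")

    return requests.get(url, timeout=timeout).content
```

A production-grade check would also pin the resolved address for the actual request (or fetch via a proxy that re-validates) to close the DNS-rebinding gap between resolution and fetch.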

Check
If your team runs LLM-serving infrastructure (LMDeploy, vLLM, TGI, Ollama, Ray Serve), audit it this week for unvalidated URL fetching and put proper egress filtering in place.
Affected
LMDeploy versions before 0.12.3 with vision-language support enabled. Cloud GPU inference deployments are at acute risk because the SSRF directly targets the metadata service - on a misconfigured node this yields IAM credentials with broad access to S3 model artifacts, training data, and cross-account roles.
Fix
Upgrade LMDeploy to 0.12.3+. On every cloud-hosted inference node, enforce IMDSv2 with the token requirement (this alone blocks the GET-only credential-theft path, as sketched below). Restrict outbound egress from GPU nodes to required destinations only, and block access to 169.254.169.254 from inference containers that have no legitimate need for it. Apply the same logic to vision-LLM image loaders, agent tool-use endpoints, and RAG fetchers. Block 103.116.72[.]119 at the edge.
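Why the IMDSv2 requirement matters here: credential retrieval needs a session token obtained via an HTTP PUT with a specific header, and an SSRF primitive that can only issue GETs to attacker-supplied URLs cannot perform that step. A sketch of the documented flow, for illustration:

```python
import requests

IMDS = "http://169.254.169.254"

# Step 1: IMDSv2 hands out a session token only via a PUT with a TTL header.
# A GET-only SSRF primitive (like an image_url fetcher) cannot do this, so the
# credential path below stays out of reach when tokens are required.
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text

# Step 2: every subsequent metadata read must present that token; with IMDSv2
# enforced, requests without the header are rejected with 401.
creds = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
)
print(creds.text)  # attached role name(s) when run from an instance with a role
```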