Last updated: May 13, 2026 at 5:42 AM UTC
Tag: anthropic (4 articles)

Anthropic launches 'Claude Security' for enterprises - the first major defensive product designed to keep up with AI-powered exploits that compress the time-to-attack to minutes

Anthropic launched Claude Security in public beta yesterday: an enterprise tool that scans code repositories for vulnerabilities, rates each finding's severity and confidence, and generates patch instructions that engineers can apply through Claude Code. The launch is a direct response to Mythos and similar AI-driven offensive tools that have been compressing the time between vulnerability disclosure and active exploitation - LiteLLM was exploited 36 hours after disclosure last week, LMDeploy in 13 hours the week before. Meanwhile, CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, Trend, and Wiz are integrating Claude Opus 4.7 into their platforms.

Check
If your organization holds a Claude Enterprise subscription, evaluate Claude Security against your existing static analysis tools this week.
Affected
Claude Enterprise customers can access Claude Security in public beta now via claude.ai/security or the claude.ai sidebar. No API integration required. Access for Team and Max plans is coming soon. The deeper relevance is for any security team facing the new exploitation cadence: AI-driven offense has shrunk the patch window for several recent disclosures.
Fix
Pilot Claude Security on a non-critical repository first - point it at a side project before pointing it at production code. Scheduled scans give ongoing coverage rather than one-off audits. Pair the output with Claude Code on the Web to work through patches in a single session. For organizations not on Claude Enterprise: evaluate Aisle, Wiz Code, or GitHub Copilot Autofix on confidence rating and false positive rate.
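If you do run that comparison, it helps to keep the false-positive arithmetic honest rather than eyeballing it. A minimal sketch, assuming you record each tool's findings and your triage verdicts in a simple sheet - the tool names and numbers below are placeholders, not benchmark results:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    tool: str
    confidence: str   # the scanner's own rating, e.g. "high" / "medium" / "low"
    confirmed: bool   # result of your manual triage

def false_positive_rate(findings: list[Finding], tool: str) -> float:
    """Share of a tool's findings that triage rejected as not real."""
    mine = [f for f in findings if f.tool == tool]
    if not mine:
        return 0.0
    return sum(1 for f in mine if not f.confirmed) / len(mine)

# Placeholder triage sheet - not benchmark data.
sheet = [
    Finding("claude-security", "high", True),
    Finding("claude-security", "medium", False),
    Finding("copilot-autofix", "high", False),
]
print(false_positive_rate(sheet, "claude-security"))  # 0.5
```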

Hackers raced to exploit a critical LiteLLM flaw 36 hours after disclosure - any attacker who could reach the proxy could read all stored AI API keys (CVE-2026-42208)

LiteLLM, the popular open-source gateway used to centralize API access for OpenAI, Anthropic, and other AI providers, has a critical pre-authentication SQL injection bug that attackers started exploiting just 36 hours after the security advisory went public. The flaw lets anyone who can reach the proxy port read all the API keys stored inside - including master keys, virtual keys, and provider credentials. The bug was in the bearer-token check: the token was concatenated into the SQL query text instead of being passed as a bound parameter. Sysdig saw the first attack at 04:24 UTC on April 26, hitting three tables that hold the most valuable secrets.
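For illustration only - this is not LiteLLM's actual code, and the table and column names are made up - the difference between the vulnerable pattern and the fix looks roughly like this in Python:

```python
import sqlite3

# Illustrative schema only - not LiteLLM's real tables or column names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-secret-key', 'alice')")

def check_token_vulnerable(token: str):
    # The bearer value is spliced into the SQL text, so a crafted token
    # can rewrite the query (here, a UNION that dumps stored tokens).
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{token}'"
    return conn.execute(query).fetchone()

def check_token_parameterized(token: str):
    # The bearer value travels as a bound parameter and can never
    # change the shape of the query.
    query = "SELECT user_id FROM verification_tokens WHERE token = ?"
    return conn.execute(query, (token,)).fetchone()

payload = "' UNION SELECT token FROM verification_tokens --"
print(check_token_vulnerable(payload))     # ('sk-secret-key',) - data leaks
print(check_token_parameterized(payload))  # None - no match, no injection
```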

Check
If you run any internet-facing LiteLLM proxy, patch to v1.83.7-stable today and treat every API key, virtual key, and stored provider credential as compromised.
Affected
LiteLLM versions 1.81.16 through 1.83.6, internet-reachable on the default proxy port. CVE-2026-42208, CVSS 9.3, pre-auth SQL injection. Blast radius is closer to a full cloud account compromise than a typical web app bug because LiteLLM holds OpenAI, Anthropic, and AWS Bedrock credentials.
Fix
Patch to LiteLLM v1.83.7-stable. If you can't upgrade, set 'disable_error_logs: true' under 'general_settings' as a workaround. Rotate every virtual key, master key, and upstream provider credential. Audit upstream provider billing for unexpected API calls since April 24. Block traffic from 65.111.27.132 and 65.111.25.67 (AS200373).
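If you keep access logs in front of the proxy, a quick pass for the two indicator IPs above is a cheap first triage step. A minimal sketch, assuming a log format where the client IP is the first field - adjust the parsing to whatever your reverse proxy actually writes:

```python
import re
import sys

# Indicators from the advisory: source IPs seen exploiting CVE-2026-42208 (AS200373).
SUSPECT_IPS = {"65.111.27.132", "65.111.25.67"}

def flag_hits(log_path):
    """Yield log lines whose leading client-IP field matches an indicator."""
    ip_at_start = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\s")
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = ip_at_start.match(line)
            if m and m.group(1) in SUSPECT_IPS:
                yield line.rstrip()

if __name__ == "__main__":
    # Usage: python scan_proxy_logs.py /var/log/nginx/access.log
    for hit in flag_hits(sys.argv[1]):
        print(hit)
```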

Anthropic MCP STDIO design flaw exposes 200,000+ AI servers to RCE - 14 CVEs assigned, Anthropic calls it 'expected behavior' (backfill from April 15)

Backfill from April 15: OX Security disclosed an architectural flaw in the official Model Context Protocol SDKs (Python, TypeScript, Java, Rust) that lets attacker-controlled JSON config trigger arbitrary OS commands via the STDIO transport. Roughly 200,000 publicly reachable MCP servers and 150 million SDK downloads inherit the issue. OX has tied 14 CVEs to the same root cause across LiteLLM (patched), Bisheng (patched), Windsurf (a zero-click RCE in Cursor-style IDEs, reported but with no fix confirmed), Flowise, LangFlow, GPT Researcher, Agent Zero, and DocsGPT. Anthropic declined to patch the protocol, calling the behavior 'expected.'
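To make the risk concrete: the STDIO transport works by spawning the configured command as a child process, so an MCP server entry is effectively a shell invocation. The sketch below is not the SDK's code - it is a schematic (POSIX shell assumed) of why attacker-controlled JSON in an add/configure flow amounts to code execution:

```python
import json
import subprocess

# An "MCP server" entry as it might arrive from a marketplace listing or a
# user-supplied config. Whatever lands in "command"/"args" will run on the host.
untrusted_config = json.loads("""
{
  "mcpServers": {
    "helpful-docs": {
      "command": "sh",
      "args": ["-c", "echo attacker-chosen command runs here"]
    }
  }
}
""")

for name, server in untrusted_config["mcpServers"].items():
    # This mirrors what an STDIO launcher ultimately does: start the process
    # and speak JSON-RPC over its stdin/stdout. Nothing stands between the
    # JSON above and code execution.
    subprocess.run([server["command"], *server["args"]], check=False)
```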

Check
Audit every MCP server installed in Claude Code, Cursor, and other AI dev tools, remove any whose origin you don't recognize, and treat MCP configs as executable code.
Affected
Any tool or service running an Anthropic-SDK MCP server with STDIO transport, especially when the add/configure flow is exposed to user input or marketplaces. Confirmed-affected: LiteLLM, LangChain, LangFlow, Flowise, LettaAI, LangBot, DocsGPT, Bisheng, Windsurf, Cursor IDE workflows, GPT Researcher, plus any private MCP server built on the official SDK without input sanitization.
Fix
Patch downstream tools to fixed versions (LiteLLM, Bisheng, Cursor). Block public internet access to services that host MCP add/configure UIs. Treat all external MCP configuration input as untrusted; never let raw user input reach StdioServerParameters. Run MCP services in sandboxes with no production-secret access. Install MCP servers only from verified sources and pin to specific commits.
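If you build tooling on the official Python SDK, one way to honor the 'never let raw user input reach StdioServerParameters' rule is a strict allowlist of vetted launch commands. A minimal sketch - the allowlist contents and the shape of the requested config are hypothetical, and it assumes the SDK's StdioServerParameters class named above:

```python
from mcp import StdioServerParameters  # official Python SDK class named in the advisory

# Hypothetical allowlist: only binaries you have vetted, pinned by absolute path.
ALLOWED_COMMANDS = {
    "/usr/local/bin/github-mcp-server",
    "/usr/local/bin/filesystem-mcp-server",
}

def build_stdio_params(requested: dict) -> StdioServerParameters:
    """Refuse to build launch parameters from anything outside the allowlist."""
    command = requested.get("command", "")
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"MCP command not on the allowlist: {command!r}")
    # Arguments pass through only after the command itself is vetted; drop this
    # too if your servers never need user-controlled args.
    return StdioServerParameters(command=command, args=list(requested.get("args", [])))
```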

A small Discord group quietly accessed Anthropic's most powerful AI hacking tool 'Mythos' for two weeks via a contractor account (backfill from April 21)

Backfill from April 21: Anthropic confirmed an unauthorized Discord group quietly accessed Mythos - the company's most powerful AI cybersecurity tool, restricted to about 40 vetted partners including Apple, Microsoft, and Google. The group got in on the same day Mythos was announced (April 7) by piggybacking on a member who works at one of Anthropic's third-party contractors and then guessing the model's URL from naming patterns in previously leaked information. Anthropic says the group used Mythos to build websites, not for attacks - but they had quiet access for two weeks. Mozilla used Mythos to find and patch 271 Firefox bugs.

Check
If you're a Project Glasswing partner, audit which contractor environments have access to Mythos and rotate any credentials they used since April 7.
Affected
Anthropic Project Glasswing partners (about 40 organizations including Apple, Microsoft, Google, Mozilla, Cisco) and their downstream contractors. Any organization granting AI tool access to third-party contractors without isolation - the same naming-pattern guess works if your past internal models have been leaked, making new models' URLs predictable.
Fix
For partners: rotate all credentials any contractor environment used to reach Mythos, audit Mythos query logs for unfamiliar patterns, segment contractor access from production AI tooling. For everyone: assume new AI tool URLs that follow your existing naming convention are guessable, randomize URL paths for restricted models, and treat third-party contractor accounts as a primary attack surface.
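'Randomize URL paths' is cheap to do in practice. A minimal sketch, assuming you control how restricted-model endpoints get their paths (the names here are illustrative):

```python
import secrets

def restricted_model_path(model_name: str) -> str:
    """Build an endpoint path that can't be derived from the model's name alone."""
    # token_urlsafe(24) adds ~192 bits of randomness, so knowing that past models
    # lived at /models/<name> tells an outsider nothing about the next URL.
    return f"/models/{model_name}-{secrets.token_urlsafe(24)}"

print(restricted_model_path("new-internal-model"))  # different path every deployment
```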