Last updated: May 14, 2026 at 10:49 AM UTC
Tag: backfill (2 articles)

Anthropic MCP STDIO design flaw exposes 200,000+ AI servers to RCE - 14 CVEs assigned, Anthropic calls it 'expected behavior' (backfill from April 15)

Backfill from April 15: OX Security disclosed an architectural flaw in the official Model Context Protocol SDKs (Python, TypeScript, Java, Rust) that lets attacker-controlled JSON config trigger arbitrary OS commands via the STDIO transport. Roughly 200,000 publicly reachable MCP servers and 150 million SDK downloads inherit the issue. OX has tied 14 CVEs to the same root cause across LiteLLM (patched), Bisheng (patched), Windsurf (zero-click RCE in Cursor-style IDEs, reportedly still unpatched), Flowise, LangFlow, GPT Researcher, Agent Zero, and DocsGPT. Anthropic declined to patch the protocol, calling the behavior 'expected.'
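The root cause is that an MCP server entry's command and args are launched verbatim by the STDIO transport, so a pasted config is effectively a shell command. A minimal sketch of the mechanism (the config values below are illustrative, not taken from the disclosure):

```python
import subprocess

# Illustrative attacker-supplied MCP server entry, e.g. pasted from a
# marketplace listing. The STDIO transport launches whatever 'command'
# names, so the config itself is arbitrary code execution.
attacker_config = {
    "command": "sh",
    "args": ["-c", "echo pwned"],  # could just as easily exfiltrate secrets
}

# The official SDKs wrap these fields in StdioServerParameters; underneath,
# that is equivalent to spawning the process directly:
proc = subprocess.run(
    [attacker_config["command"], *attacker_config["args"]],
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # → pwned
```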

Check
Audit every MCP server installed in Claude Code, Cursor, and other AI dev tools, remove any whose origin you don't recognize, and treat MCP configs as executable code.
Affected
Any tool or service running an Anthropic-SDK MCP server over the STDIO transport, especially when the add/configure flow is exposed to user input or marketplaces. Confirmed affected: LiteLLM, LangChain, LangFlow, Flowise, LettaAI, LangBot, DocsGPT, Bisheng, Windsurf, Cursor IDE workflows, GPT Researcher, plus any private MCP server built on the official SDK without input sanitization.
Fix
Patch downstream tools to fixed versions (LiteLLM, Bisheng, Cursor). Block public internet access to services that host MCP add/configure UIs. Treat all external MCP configuration input as untrusted; never let raw user input reach StdioServerParameters. Run MCP services in sandboxes with no production-secret access. Install MCP servers only from verified sources and pin to specific commits.
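One way to follow the "never let raw user input reach StdioServerParameters" advice is to validate configs against an allowlist before the SDK ever sees them. A minimal sketch; the allowlist contents and function name are assumptions, not part of any SDK:

```python
# Hypothetical allowlist of launcher binaries permitted for MCP servers.
ALLOWED_COMMANDS = {"npx", "uvx", "python3"}

def validate_mcp_config(config: dict) -> dict:
    """Reject an MCP server config unless its command is allowlisted.

    Only configs that pass should ever be handed to StdioServerParameters
    (or any other process spawner).
    """
    command = config.get("command", "")
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"MCP server command not allowlisted: {command!r}")
    # Defense in depth: refuse shell metacharacters in arguments.
    for arg in config.get("args", []):
        if any(ch in arg for ch in ";|&$`"):
            raise ValueError(f"suspicious argument: {arg!r}")
    return config
```

An allowlist inverts the trust model: instead of trying to spot malicious commands, only known launchers ever run.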

A small Discord group quietly accessed Anthropic's most powerful AI hacking tool 'Mythos' for two weeks via a contractor account (backfill from April 21)

Backfill from April 21: Anthropic confirmed an unauthorized Discord group quietly accessed Mythos - the company's most powerful AI cybersecurity tool, restricted to about 40 vetted partners including Apple, Microsoft, and Google. The group got in on the day Mythos was announced (April 7) by piggybacking on a member who works for one of Anthropic's third-party contractors, then guessing the model's URL from naming patterns in previously leaked information. Anthropic says the group used Mythos to build websites, not to mount attacks - but it had unnoticed access for two weeks. Separately, Mozilla has used Mythos to find and patch 271 Firefox bugs.

Check
If you're a Project Glasswing partner, audit which contractor environments have access to Mythos and rotate any credentials they used since April 7.
Affected
Anthropic Project Glasswing partners (about 40 organizations including Apple, Microsoft, Google, Mozilla, Cisco) and their downstream contractors. Any organization granting AI tool access to third-party contractors without isolation - the same naming-pattern guess works if your past internal models have been leaked, making new models' URLs predictable.
Fix
For partners: rotate all credentials any contractor environment used to reach Mythos, audit Mythos query logs for unfamiliar patterns, segment contractor access from production AI tooling. For everyone: assume new AI tool URLs that follow your existing naming convention are guessable, randomize URL paths for restricted models, and treat third-party contractor accounts as a primary attack surface.
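The "randomize URL paths for restricted models" recommendation can be sketched with Python's secrets module; the path layout and function name here are hypothetical:

```python
import secrets

def restricted_model_path(model_name: str) -> str:
    """Build an endpoint path that cannot be derived from naming conventions.

    token_urlsafe(16) appends ~128 bits of randomness, so leaking one
    model's URL reveals nothing about the next model's URL.
    """
    return f"/models/{model_name}-{secrets.token_urlsafe(16)}"
```

The random suffix only helps if it is never reused across models and the path is treated as a secret in logs and referrers, like any other bearer token.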