Google's Threat Intelligence Group says it has caught the first known case of a real attacker using a large language model to find and weaponize a zero-day: a 2FA bypass in a popular but unnamed open-source, web-based system-administration tool. Google assesses with high confidence that the Python exploit was AI-generated, citing its textbook code structure, abundant educational docstrings, and a hallucinated CVSS score embedded in the script. The flaw was a high-level logic bug, the kind LLMs excel at spotting, rather than a memory-corruption issue. Google rules out its own Gemini as the model used, and warns that AI-assisted exploit development is being industrialized via account pooling and proxy relays for premium models.
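The tells Google cites (textbook structure, heavy docstrings, a CVSS score pasted into the script itself) are easy to picture as a naive heuristic. The sketch below is purely illustrative, not Google's actual methodology; the marker names and thresholds are assumptions:

```python
import ast
import re

def ai_generation_markers(source: str) -> dict:
    """Score a Python script for the naive tells named in the report:
    docstring density and an embedded CVSS score (something a human
    exploit author would rarely paste into the payload itself)."""
    tree = ast.parse(source)
    # Nodes that can legitimately carry a docstring.
    nodes = [n for n in ast.walk(tree)
             if isinstance(n, (ast.Module, ast.FunctionDef,
                               ast.AsyncFunctionDef, ast.ClassDef))]
    docstrings = sum(1 for n in nodes if ast.get_docstring(n))
    return {
        # Fraction of docstring-capable nodes that actually have one.
        "docstring_ratio": docstrings / max(len(nodes), 1),
        # A literal CVSS score lurking in the source text.
        "mentions_cvss": bool(re.search(r"CVSS[\s:v]*\d", source, re.I)),
    }
```

Real attribution of course rests on far richer signals, but the point stands: AI-generated exploits currently look over-documented, not obfuscated.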
Attackers didn't wait for a proof-of-concept. Within 20 hours of CVE-2026-33017 being disclosed in Langflow, an open-source AI workflow builder with 145K+ GitHub stars, they built working exploits straight from the advisory. A single crafted HTTP POST to the public flow endpoint is all it takes; no credentials are needed. Compromised instances leak API keys for OpenAI, AWS, and connected databases.
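Defenders can at least check whether their own instance answers a credential-less POST. A minimal probe sketch follows; the `/api/v1/flows` route is a hypothetical placeholder, since the excerpt above does not name the actual vulnerable endpoint:

```python
import urllib.error
import urllib.request

# Hypothetical route: substitute the endpoint named in the real advisory.
FLOW_PATH = "/api/v1/flows"

def classify_status(status: int) -> str:
    """Map the HTTP status of an unauthenticated POST to a verdict."""
    if status in (401, 403):
        return "auth-enforced"
    if 200 <= status < 300:
        return "exposed"  # accepted the request with no credentials
    return "inconclusive"

def probe(base_url: str) -> str:
    """POST an empty JSON body with no auth headers and classify the result."""
    req = urllib.request.Request(
        base_url.rstrip("/") + FLOW_PATH,
        data=b"{}",
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
    except OSError:
        return "unreachable"
```

Run `probe("https://langflow.internal.example")` against your own deployment only; an "exposed" verdict means the endpoint processed the request without any authentication at all, which is the whole attack surface here.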