Google's Threat Intelligence Group says it caught the first known case of a real attacker using a large language model to find and weaponize a zero-day - a 2FA bypass in a popular but unnamed open-source web-based system administration tool. Google has high confidence the Python exploit was AI-generated, citing textbook code structure, abundant educational docstrings, and a hallucinated CVSS score embedded in the script. The flaw was a high-level logic bug, the kind LLMs excel at spotting, rather than a memory corruption issue. Google says the model used was not its own Gemini, and warns that AI-assisted exploit development is being industrialized via account pooling and proxy relays that resell access to premium models.
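Google hasn't named the tool or published the flaw, but the bug class it describes is familiar. A minimal, entirely hypothetical sketch of that class (invented function and field names, not the affected product's code): authentication that only enforces the second factor when the client chooses to supply one.

```python
def verify_otp(user, otp):
    # Stand-in for a real TOTP comparison; details don't matter here.
    return otp == user["current_otp"]

def authenticate(request, users):
    """Hypothetical login handler illustrating a 2FA-bypass logic bug."""
    user = users.get(request.get("username"))
    if user is None or user["password"] != request.get("password"):
        return False
    # BUG: the second factor is checked only when the client includes it.
    # An attacker who simply omits "otp" from the request never reaches
    # verify_otp() and is treated as fully authenticated.
    if "otp" in request:
        return verify_otp(user, request["otp"])
    return True

users = {"alice": {"password": "hunter2", "current_otp": "492881"}}
# A wrong OTP is rejected...
assert not authenticate(
    {"username": "alice", "password": "hunter2", "otp": "000000"}, users)
# ...but leaving the OTP out entirely sails through.
assert authenticate({"username": "alice", "password": "hunter2"}, users)
```

No memory corruption, no fuzzing harness - just a conditional in the wrong place, which is exactly the kind of flaw an LLM reading source code can flag.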
Researchers at Cyera disclosed a critical bug in Ollama, the open-source tool that runs large language models locally on laptops and servers. The flaw, called Bleeding Llama (CVE-2026-7482), lets anyone with network access send a malformed model file and read raw process memory back - which typically contains API keys, environment variables, system prompts, and other users' chat history. Ollama ships without authentication by default, so an estimated 300,000 instances are exposed on the internet. Ollama 0.17.1 fixes it. Separately, Striga disclosed two unpatched Ollama Windows desktop flaws (CVE-2026-42248 and CVE-2026-42249) that chain into persistent code execution at login.