Last updated: May 13, 2026 at 5:42 AM UTC
Tag: llm (2 articles)

Google says hackers used AI to build first known zero-day for 2FA bypass in unnamed web admin tool

Google's Threat Intelligence Group says it has caught the first known case of a real attacker using a large language model to find and weaponize a zero-day - a 2FA bypass in a popular but unnamed open-source web-based system administration tool. Google has high confidence the Python exploit was AI-generated, citing textbook code structure, abundant educational docstrings, and a hallucinated CVSS score embedded in the script. The flaw was a high-level logic bug - the kind LLMs excel at spotting - rather than a memory-corruption issue. Google rules out its own Gemini as the model used, and warns that AI-assisted exploit development is being industrialized via account pooling and proxy relays for premium models.

Check
Audit the open-source web-based system administration tools your team self-hosts (Webmin, Cockpit, ISPConfig, etc.). Check whether 2FA is the only barrier protecting admin access, and review recent admin logins for anomalies.
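As a quick first pass, a port sweep against the default ports of tools like those named above can flag panels to review by hand. A minimal sketch - the tool/port pairs are the usual defaults, and the host list is a placeholder for your real inventory:

```shell
#!/bin/sh
# Sweep hosts for self-hosted web admin panels on their default ports.
# webmin:10000, cockpit:9090, ispconfig:8080 are the usual defaults;
# extend the list for whatever your team actually runs.

scan_host() {
  host="$1"
  for pair in webmin:10000 cockpit:9090 ispconfig:8080; do
    name=${pair%%:*}   # text before the first ':'
    port=${pair##*:}   # text after the last ':'
    if nc -z -w 2 "$host" "$port" 2>/dev/null; then
      echo "$host:$port ($name) reachable - confirm it is not protected by 2FA alone"
    fi
  done
}

for h in localhost; do   # replace with your real host inventory
  scan_host "$h"
done
```

Anything the sweep reports should then be checked against the Fix guidance below (VPN, allowlisting) rather than left reachable with 2FA as the only gate.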
Affected
The specific affected product remains undisclosed - Google notified the developer, and the attack was disrupted before mass exploitation. More generally, any popular open-source web-based system administration tool whose 2FA implementation relies on a semantic logic check rather than tightly bound session validation is exposed to this class of AI-discovered logic bug.
Fix
Watch for vendor disclosure; Google's reporting will eventually name the product. In the meantime, layer additional controls in front of any web admin panel: place it behind a VPN or zero-trust gateway, require source-IP allowlisting, and rotate admin credentials. Treat 2FA-only protection on internet-exposed admin tools as a single point of failure regardless of the vendor.
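For the source-IP allowlisting step, a reverse proxy in front of the panel is one common layering. A minimal nginx sketch, assuming the panel itself is bound to loopback - the server name, subnet, certificate paths, and upstream port are all placeholders:

```nginx
# Hypothetical allowlist in front of a self-hosted admin panel.
server {
    listen 443 ssl;
    server_name admin.internal.example;            # placeholder

    ssl_certificate     /etc/ssl/certs/admin.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/private/admin.key;

    location / {
        allow 10.20.0.0/24;                        # management subnet only
        deny  all;
        proxy_pass https://127.0.0.1:10000;        # e.g. Webmin on loopback
        proxy_set_header Host $host;
    }
}
```

The allowlist is a second independent gate, so a 2FA logic bypass alone no longer reaches the login page from arbitrary source addresses.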

Critical Ollama flaw lets unauthenticated attackers read server memory - 300,000 instances exposed (CVE-2026-7482)

Researchers at Cyera disclosed a critical bug in Ollama, the open-source tool that runs large language models locally on laptops and servers. The flaw, called Bleeding Llama (CVE-2026-7482), lets anyone with network access send a malformed model file and read raw process memory back - which typically contains API keys, environment variables, system prompts, and other users' chat history. Ollama ships without authentication by default, so an estimated 300,000 instances are exposed on the internet. Ollama 0.17.1 fixes it. Separately, Striga disclosed two unpatched Ollama Windows desktop flaws (CVE-2026-42248 and CVE-2026-42249) that chain into persistent code execution at login.

Check
Inventory all Ollama instances across servers and developer laptops. Check whether any are reachable from outside their host or trusted network, and verify the running version.
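A sketch of that inventory check, assuming the default port 11434 and Ollama's /api/version endpoint - the host list is a placeholder, and version_lt is a helper defined here, not part of Ollama:

```shell
#!/bin/sh
# Check hosts for an Ollama API and flag versions older than the patched
# release. Replace the host list with your real server/laptop inventory.

PATCHED="0.17.1"

# Succeeds (exit 0) iff the first version sorts strictly before the second.
version_lt() {
  [ "$1" != "$2" ] &&
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

check_host() {
  host="$1"
  # /api/version returns JSON like {"version":"0.17.1"}
  ver=$(curl -fsS --max-time 5 "http://${host}:11434/api/version" 2>/dev/null \
        | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
  if [ -z "$ver" ]; then
    echo "$host: no Ollama API reachable on 11434"
  elif version_lt "$ver" "$PATCHED"; then
    echo "$host: VULNERABLE to CVE-2026-7482 (running $ver, need >= $PATCHED)"
  else
    echo "$host: OK ($ver)"
  fi
}

for h in localhost; do   # replace with your real host inventory
  check_host "$h"
done
```

Running the same check from an external vantage point (not just the host itself) also answers the reachability question: any host that responds on 11434 from outside its trusted network needs the Fix steps below regardless of version.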
Affected
Ollama versions before 0.17.1 on every platform (CVE-2026-7482, CVSS 9.1, an unauthenticated heap out-of-bounds read in the GGUF model loader, exploitable via /api/create and /api/push). The Ollama Windows desktop client on all currently released builds (CVE-2026-42248 and CVE-2026-42249, CVSS 7.7 each, unpatched). Internet-exposed and developer-laptop instances are at highest risk.
Fix
Upgrade all Ollama servers to 0.17.1 or later immediately to fix Bleeding Llama. Restrict the Ollama API to localhost or an internal network only - never expose port 11434 to the internet. Place an authenticating reverse proxy in front of any shared Ollama deployment. For Windows desktop clients, monitor for an update that addresses CVE-2026-42248 and CVE-2026-42249; consider blocking auto-update traffic until a patched build ships.
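For the reverse-proxy step, one sketch is nginx with HTTP basic auth in front of a loopback-bound Ollama (bind Ollama itself to loopback, e.g. via the OLLAMA_HOST=127.0.0.1 environment variable). The server name, certificate paths, and htpasswd file below are placeholders:

```nginx
# Hypothetical authenticating proxy for a shared Ollama deployment.
server {
    listen 443 ssl;
    server_name ollama.internal.example;             # placeholder

    ssl_certificate     /etc/ssl/certs/ollama.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/ollama.key;

    auth_basic           "Ollama";
    auth_basic_user_file /etc/nginx/ollama.htpasswd; # create with htpasswd

    location / {
        proxy_pass http://127.0.0.1:11434;           # Ollama on loopback only
        proxy_set_header Host $host;
        proxy_read_timeout 300s;                     # allow long generations
    }
}
```

With Ollama listening only on 127.0.0.1, port 11434 is never directly reachable and every request must pass the proxy's authentication first.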