Last updated: May 13, 2026 at 5:42 AM UTC
Tag: ollama (1 article)

Critical Ollama flaw lets unauthenticated attackers read server memory - 300,000 instances exposed (CVE-2026-7482)

Researchers at Cyera disclosed a critical bug in Ollama, the open-source tool that runs large language models locally on laptops and servers. The flaw, called Bleeding Llama (CVE-2026-7482), lets anyone with network access send a malformed model file and read raw process memory back - which typically contains API keys, environment variables, system prompts, and other users' chat history. Ollama ships without authentication by default, so an estimated 300,000 instances are exposed on the internet. Ollama 0.17.1 fixes it. Separately, Striga disclosed two unpatched Ollama Windows desktop flaws (CVE-2026-42248 and CVE-2026-42249) that chain into persistent code execution at login.

Check
Inventory all Ollama instances across servers and developer laptops. Check whether any are reachable from outside their host or trusted network, and verify the running version.
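As a starting point for that inventory, the sketch below probes a list of hosts for a listening Ollama API and compares the reported version against the patched release. The host list is a placeholder you would replace with your own inventory; `/api/version` is Ollama's standard version endpoint, and 11434 is its default port.

```python
# Sketch: probe hosts for exposed Ollama instances and flag versions
# older than the patched 0.17.1 release. HOSTS is a placeholder -
# replace it with your own server and laptop inventory.
import json
import socket
import urllib.request

PATCHED = (0, 17, 1)
HOSTS = ["127.0.0.1"]  # replace with your inventory
PORT = 11434           # Ollama's default API port


def parse_version(v: str) -> tuple:
    # "0.16.3" -> (0, 16, 3); drop any pre-release suffix like "-rc1"
    return tuple(int(p) for p in v.split("-")[0].split(".")[:3])


def check(host: str) -> str:
    # First see whether the port is reachable at all
    try:
        with socket.create_connection((host, PORT), timeout=3):
            pass
    except OSError:
        return "not reachable (good if intentional)"
    # Port is open: ask the API for its version (GET /api/version)
    try:
        url = f"http://{host}:{PORT}/api/version"
        with urllib.request.urlopen(url, timeout=3) as resp:
            version = json.load(resp)["version"]
    except Exception as exc:
        return f"open, but version query failed: {exc}"
    status = "patched" if parse_version(version) >= PATCHED else "VULNERABLE"
    return f"running {version} ({status})"


if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}:{PORT} -> {check(host)}")
```

Run it from a vantage point outside the host's trusted network as well: an instance that answers from there is the internet-exposed case the advisory warns about.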
Affected
Ollama versions before 0.17.1 on every platform (CVE-2026-7482, CVSS 9.1, unauthenticated heap out-of-bounds read in the GGUF model loader exploitable via /api/create and /api/push). Ollama Windows desktop client on all currently-released builds (CVE-2026-42248 and CVE-2026-42249, CVSS 7.7 each, unpatched). Internet-exposed and developer-laptop instances are at highest risk.
Fix
Upgrade all Ollama servers to 0.17.1 or later immediately to fix Bleeding Llama. Restrict the Ollama API to localhost or an internal network only - never expose port 11434 to the internet. Place an authenticating reverse proxy in front of any shared Ollama deployment. For Windows desktop clients, monitor for an update that addresses CVE-2026-42248 and CVE-2026-42249; consider blocking auto-update traffic until a patched build ships.
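Once a reverse proxy is in place, it is worth confirming the lockdown actually took effect. The minimal sketch below sends an unauthenticated request to an endpoint URL of your choosing and treats a 401/403 (or no route at all) as the desired outcome; the URL is an assumption, not a real deployment.

```python
# Sketch: verify that an Ollama endpoint behind an authenticating
# reverse proxy rejects unauthenticated requests. The URL passed in
# is an assumption - point it at your own deployment.
import urllib.error
import urllib.request


def requires_auth(url: str) -> bool:
    """Return True if an unauthenticated GET is rejected or unreachable."""
    try:
        urllib.request.urlopen(url, timeout=5)
    except urllib.error.HTTPError as exc:
        # Proxy answered but demanded credentials: locked down
        return exc.code in (401, 403)
    except urllib.error.URLError:
        # Unreachable from this vantage point: not openly exposed
        return True
    # Request succeeded with no credentials: still exposed
    return False
```

A `False` result from an external network means port 11434 (or the proxy in front of it) is still answering anonymous requests and the fix is incomplete.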