Vercel updated its security bulletin on April 23 to disclose that ongoing forensics has uncovered additional customer accounts compromised in the Context.ai-linked breach that went public on April 19 - and, more worryingly, a separate cluster of customer accounts showing evidence of compromise that predates, and appears unconnected to, the Context.ai incident. CEO Guillermo Rauch confirmed on X that the threat actor has been active beyond the Context.ai compromise. Hudson Rock's forensic report traced patient zero to a Context.ai employee whose laptop was infected with Lumma Stealer in February 2026 after downloading Roblox auto-farm scripts - roughly four weeks of dwell time before the operator pivoted into Context.ai's AWS environment and then, via OAuth tokens, into Vercel's Google Workspace. The credential set stolen from that single laptop included Google Workspace logins, Supabase keys, Datadog tokens, AuthKit credentials, and the support@context.ai account. Vercel has now confirmed that environment variables not marked 'sensitive' in affected team scopes were readable to the attacker, and says customer notifications are going out individually rather than via a public list.
Follow-up: this is the origin-story update to the Vercel breach disclosed April 19 (which our publication did not cover at the time). Hudson Rock traced the initial compromise to a Context.ai employee whose laptop was infected with Lumma Stealer malware in February 2026 after the user downloaded Roblox 'auto-farm' scripts and game-exploit executors - a notorious delivery vector for infostealers. The malware harvested that employee's Google Workspace credentials plus access keys and logins for Supabase, Datadog, and AuthKit. The haul also included credentials for the support@context.ai account, letting the attacker escalate inside Context.ai, reach its AWS environment, and then pivot through compromised Google Workspace OAuth tokens into a Vercel employee's enterprise workspace that had granted the 'AI Office Suite' app 'Allow All' permissions. The attacker (ShinyHunters, now selling the data for $2M on BreachForums) read Vercel environment variables not flagged as 'sensitive.' Google pulled the Context.ai Chrome extension (ID omddlmnhcofjbnbflmjginpjjblphbgk) on March 27 - it requested an OAuth grant for read access to users' entire Google Drive. The lesson is brutal: one employee's risky personal behavior on a work device cascaded through four SaaS platforms into a supply-chain breach that a threat actor is now auctioning.
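The 'Allow All' grant and the Drive-wide OAuth scope are exactly the kind of thing a periodic token audit is meant to surface. A minimal sketch of that audit step, operating on grant records shaped loosely like what Google's Admin SDK Directory API `tokens.list` returns - the field names, scope allowlist, and client IDs below are illustrative assumptions, not a drop-in integration:

```python
# Sketch: flag third-party OAuth grants that include an overly broad scope.
# Record shape loosely mirrors the Admin SDK Directory API tokens.list
# response; field names here are illustrative, not a verified schema.

BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",           # full Drive read/write
    "https://www.googleapis.com/auth/drive.readonly",  # entire Drive, read
    "https://mail.google.com/",                        # full Gmail access
}

def flag_risky_grants(tokens):
    """Return (client_id, matched_scopes) for every grant touching a broad scope."""
    risky = []
    for tok in tokens:
        matched = sorted(set(tok.get("scopes", [])) & BROAD_SCOPES)
        if matched:
            risky.append((tok["clientId"], matched))
    return risky

# Hypothetical grant records for two third-party apps.
grants = [
    {"clientId": "ai-office-suite.example.com",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly",
                "openid", "email"]},
    {"clientId": "calendar-widget.example.com",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

for client, scopes in flag_risky_grants(grants):
    print(f"REVIEW: {client} -> {', '.join(scopes)}")
```

In a real deployment the allowlist would be inverted into a deny-by-default policy and the flagged grants fed into an admin review queue; the point of the sketch is only that a Drive-wide scope on a chat-and-docs extension should never survive an automated pass.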
Cloud development platform Vercel disclosed a security incident on April 19 after a threat actor claiming to be ShinyHunters posted stolen data for sale on a hacking forum. Vercel CEO Guillermo Rauch confirmed the initial access came through a breach at Context.ai, an enterprise AI platform one Vercel employee had signed up for using their Vercel enterprise account with 'Allow All' OAuth permissions. Attackers compromised Context.ai, stole the OAuth token, took over the employee's Google Workspace account, and pivoted into Vercel environments. Once inside, they accessed environment variables not marked as 'sensitive' - values that remain readable through the dashboard and API, unlike sensitive env vars, which Vercel makes unreadable after creation. The attacker posted 580 employee records (names, emails, account status, activity timestamps) as a teaser, plus screenshots of an internal Vercel Enterprise dashboard. They claim to also have access keys, source code, database data, and API keys, though Vercel characterizes the impact as a 'limited subset' of customers. Mandiant is engaged. This is the cleanest real-world example to date of the AI supply-chain risk pattern everyone has been warning about: a third-party AI tool with broad OAuth scopes becomes the initial access vector into your primary infrastructure.
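The readable-env-var detail points at a concrete mitigation: audit which variable names look secret-bearing and re-create those as 'sensitive' so they cannot be read back. A hedged sketch of that triage step - the name-pattern heuristic is our assumption for illustration, not Vercel's classification rule, and the snippet does not call any Vercel API:

```python
import re

# Sketch: partition env var names into secret-shaped ones (candidates to
# re-create as 'sensitive', i.e. write-only after creation) versus plain
# configuration. The suffix regex is a heuristic, not Vercel's own rule.
SECRET_PATTERN = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL|DSN)$", re.I)

def triage(names):
    sensitive = [n for n in names if SECRET_PATTERN.search(n)]
    plain = [n for n in names if not SECRET_PATTERN.search(n)]
    return sensitive, plain

# Hypothetical team-scope variable names.
env = ["DATABASE_PASSWORD", "DATADOG_API_KEY", "NEXT_PUBLIC_SITE_URL", "AUTH_TOKEN"]
sensitive, plain = triage(env)
print("mark sensitive:", sensitive)  # secret-shaped names to re-create
print("leave as-is:", plain)         # plain configuration
```

A heuristic like this produces false negatives (a secret named `UPSTREAM_URL` sails through), so it is a starting list for human review, not a substitute for marking variables sensitive at creation time.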