Vercel Breach: The Risks of Shadow AI OAuth Exposed
The Vercel breach highlights the danger of Shadow AI integrations: a forgotten OAuth token that opened corporate doors. Here is what you need to know.

A single employee, a deprecated AI app, and persistent OAuth tokens: these were the ingredients of the security breach that hit Vercel in the spring of 2026. The incident, made public on April 19 via an official bulletin, demonstrates how unauthorized AI integrations represent an increasingly critical attack surface for modern organizations.
The attack chain originated from Context.ai, a third-party AI tool used by a Vercel employee. What makes this case a textbook example is that Vercel was not even a registered customer of Context.ai: the link was a deprecated consumer product called "AI Office Suite," whose OAuth connection remained active long after the tool itself had been forgotten.
The attack timeline: from a Roblox cheat to the Vercel exfiltration
According to available reconstructions, it all began around February 2026, when a Context.ai employee contracted a Lumma Stealer infection, a type of infostealer malware. The source of the infection was traced back to the worker searching for cheats for the video game Roblox. The malware exfiltrated corporate credentials, session tokens, and OAuth tokens from the compromised environment.
In March 2026, the attacker exploited these credentials to access Context.ai's AWS environment. From there, they exfiltrated consumer users' OAuth tokens, including the Google Workspace token of a Vercel employee who had previously authorized the application. This OAuth access allowed the attacker to enter the Vercel employee's Google Workspace account and subsequently pivot toward the company's internal systems.
Between March and April 2026, the intruder began enumerating the environment variables of Vercel customers. The compromised employee had significant access: internal dashboards, employee records, API keys, NPM tokens, and GitHub credentials. On April 10, 2026, OpenAI notified a Vercel customer about a leaked API key, a report that helped bring the breach to light.
Vercel published its security bulletin on April 19, 2026. CEO Guillermo Rauch confirmed the attack chain, identifying Context.ai as the compromised third party. The attacker reportedly demanded a ransom of $2 million. According to some sources, an actor affiliated with ShinyHunters began selling Vercel data on BreachForums, though this circumstance has not been independently verified.
Shadow AI integration: when forgotten apps become open doors
Analysts at Push Security described the incident as a prime example of "Shadow AI integration": the problem is not just the use of unapproved AI tools, but the creation of persistent OAuth bridges that remain active even when the application is forgotten. As analysts explained: "In the Vercel case, we're talking specifically about shadow integrations. But all of these present a key risk to your organization."
The phenomenon is widespread. According to collected data, an average of 17 unique AI app integrations per organization are observed across Microsoft and Google. Every new OAuth connection increases the attack surface. BleepingComputer emphasized: "OAuth integrations are becoming one of the most reliably abused attack surfaces in enterprise environments, and every new AI tool your employees connect makes the web a little wider."
Vercel's security team described the attacker as "highly sophisticated based on their operational velocity and in-depth understanding of Vercel's product API surface." That combination of speed and API knowledge let the intruder move rapidly through internal systems.
The broader context: OAuth attacks on the rise
The Vercel incident fits into a worrying trend. Device code phishing attacks have increased 37-fold this year. In 2025, the "Scattered Lapsus$ Hunters" attack impacted over 1,000 organizations, with approximately 1.5 billion records stolen. These numbers highlight how OAuth credentials and third-party integrations have become preferred attack vectors.
The dwell time—the time elapsed between the initial infection and disclosure—was approximately 2 months in the Vercel case. Trend Micro had initially reported a compromise in June 2024 with a dwell time of 22 months, but subsequently corrected its data: the infection occurred in February 2026.
Push Security highlighted the reputational consequences for Context.ai: "You definitely don't want to be Context.ai in this scenario. The reputational harm could be pretty significant, and is a wake-up call for other SaaS vendors to check that their house is in order."
Implications for corporate security
The incident raises critical questions for organizations adopting AI tools. The problem is not just the use of unauthorized applications, but the persistence of OAuth authorizations. When an employee authorizes an AI app to access their workspace, that authorization creates a channel that remains open until it is manually revoked. Even if the app is abandoned or deprecated, the token remains valid.
In the Vercel case, the connected Context.ai application was a deprecated consumer product that had no business relationship with the company. Yet, the OAuth token authorized by a single employee was enough to provide the attacker with a critical entry point. It is the modern version of the orphaned API key problem, amplified by the speed at which workers adopt new AI tools.
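To see the mechanism in miniature, consider the toy model below (all names and structures hypothetical, not how any real authorization server is implemented): uninstalling or forgetting the client app does nothing to the grant, which stays valid until it is explicitly revoked.

```python
# Toy model of an OAuth authorization server's token store (hypothetical).
# It illustrates one point: token validity is decoupled from whether the
# client app is still in use, or even still maintained by its vendor.
class TokenStore:
    def __init__(self):
        self._grants = {}  # token -> (client_id, user)

    def grant(self, token, client_id, user):
        self._grants[token] = (client_id, user)

    def is_valid(self, token):
        return token in self._grants

    def revoke(self, token):
        self._grants.pop(token, None)


store = TokenStore()
store.grant("tok-123", "ai-office-suite", "employee@example.com")

# The employee forgets the app; the vendor even deprecates it.
# Neither event touches the authorization server's records, so the
# token still works for anyone who holds it (e.g. an attacker).
assert store.is_valid("tok-123")

# Only explicit revocation closes the bridge.
store.revoke("tok-123")
assert not store.is_valid("tok-123")
```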
Organizations must consider that every OAuth integration represents a potential point of failure. Centralized authorization management, periodic token rotation, and regular auditing of third-party connections are becoming essential practices.
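As a minimal sketch of such an audit (the inventory format, field names, and 90-day idle threshold are all illustrative assumptions, not a real provider export), a script can flag grants tied to deprecated apps or unused past a cutoff:

```python
from datetime import datetime, timedelta

# Hypothetical inventory of third-party OAuth grants, e.g. exported
# from an identity provider's admin console. Field names are illustrative.
GRANTS = [
    {"app": "AI Office Suite", "user": "dev@example.com",
     "last_used": "2024-06-01", "deprecated": True},
    {"app": "Approved CRM", "user": "sales@example.com",
     "last_used": "2026-04-01", "deprecated": False},
]


def audit_grants(grants, now, max_idle_days=90):
    """Return grants worth reviewing for revocation: tokens for
    deprecated apps, or tokens idle beyond the cutoff."""
    cutoff = now - timedelta(days=max_idle_days)
    flagged = []
    for g in grants:
        last_used = datetime.strptime(g["last_used"], "%Y-%m-%d")
        if g["deprecated"] or last_used < cutoff:
            flagged.append(g)
    return flagged


if __name__ == "__main__":
    for g in audit_grants(GRANTS, datetime(2026, 4, 19)):
        print(f"revoke candidate: {g['app']} ({g['user']})")
```

Run periodically, a report like this turns forgotten grants from silent liabilities into explicit revocation decisions.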
Frequently Asked Questions
- What is a Shadow AI integration?
- It is an OAuth integration with an unauthorized AI application that remains active even after the employee has stopped using it. These persistent connections can be exploited by attackers to access corporate environments.
- How long did the Vercel breach last?
- The dwell time was approximately 2 months, from the initial infection in February 2026 to the publication of the security bulletin on April 19, 2026.
- How did the attack on Context.ai originate?
- A Context.ai employee was infected by Lumma Stealer malware while searching for cheats for the video game Roblox. The malware exfiltrated credentials and OAuth tokens from the corporate environment.
- What is the risk of AI OAuth integrations?
- Every OAuth integration creates a persistent bridge between the app and corporate data. If the app is compromised or abandoned, the token remains valid until manually revoked, exposing the organization to risks of unauthorized access.
This article is a summary based exclusively on the listed sources.
Sources
- https://vercel.com/kb/bulletin/vercel-april-2026-security-incident
- https://www.tomshw.it/hardware/adt-violata-sicurezza-milioni-dati
- https://prothect.it/tecnologia/vercel-conferma-violazione-sicurezza-hacker-vendono-dati-rubati/
- https://www.weex.com/it/wiki/article/vercel-security-incident-what-happened-who-was-affected-and-what-to-do-next-99037
- https://www.netcrook.com/vercel-contextai-violazione-catena-ombra