Vercel April 2026 security incident
bleepingcomputer.com | 678 points by colesantiago 16 hours ago
https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
When one OAuth token can compromise dev tools, CI pipeline, secrets and deployment simultaneously, something architectural has gone wrong.
Vercel has had React2Shell (CVSS 10), the middleware bypass (CVSS 9.1), and now this, all within 12 months. At what point do we start asking questions about the concentration of trust in the web ecosystem? It's funny that at the engineering level we are continuously grilled in interviews about the single responsibility principle, while the industry's business model is to undermine the entirety of web standards and consolidate the web stack into a CLI. Coming from a company that makes infrastructure out of a view layer / vDOM library, I think anyone relying on Vercel has only themselves to blame.

You have no idea how indifferent security officers can be, even when you point out critical issues. The other day, we flagged that a customer's database had users with excessive privileges. Their only question: "Can this be exploited from the outside?" No, but most breaches today come from compromised internal accounts that are then used to break everything. "What's the problem with having an internal API over plain HTTP? We're inside the enterprise network." And that's how I got a reputation as the annoying "PM", with half of program management complaining that I was slowing things down, until six months later the head of risk management told them to get lost.

Polite reminder as to why Domain-Driven Design is super important. It makes more sense to spend 80% of the effort on DDD initially and then only 20% on the code (the 80-20 rule) than the other way around. Otherwise you will end up in a clusterfuck like this.

JavaScript living only as a built artifact in an S3 bucket makes for a much simpler life.

The whole hiring system needs to be eradicated. You get grilled by incompetents who ask one question, never push back when you offer something debatable, and give zero feedback, and then you see what kind of errors these "elitist" engineers make. Burn it to the ground. The best hiring systems I've seen were ones where the actual engineers hiring for their team did the bulk of the interviewing.
You get a gauge of what you can expect, and so do they.

They just added more details:

> Indicators of compromise (IOCs)
> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.
> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.
> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...

https://x.com/rauchg/status/2045995362499076169

> A Vercel employee got compromised via the breach of an AI platform customer called http://Context.ai that he was using.
> Through a series of maneuvers that escalated from our colleague's compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.
> We do have a capability however to designate environment variables as "non-sensitive". Unfortunately, the attacker got further access through their enumeration.
> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Still no email blast from Vercel alerting users, which is concerning.

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Blame it on AI... trust me... it would never have happened if it wasn't for AI.

> Still no email blast from Vercel alerting users, which is concerning.
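The IOC check the bulletin recommends boils down to matching the published OAuth client ID against your token-audit events. A minimal sketch in Python, assuming events have already been exported to a list of dicts; the event shape and field names here are illustrative stand-ins, not the real Google Workspace audit-log schema:

```python
# Compromised OAuth client ID published in Vercel's bulletin.
COMPROMISED_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def find_ioc_hits(events):
    """Return audit events whose OAuth client ID matches the published IOC."""
    return [e for e in events if e.get("client_id") == COMPROMISED_CLIENT_ID]

# Hypothetical exported token-audit entries:
events = [
    {"user": "alice@example.com", "client_id": COMPROMISED_CLIENT_ID,
     "scopes": ["https://www.googleapis.com/auth/gmail.readonly"]},
    {"user": "bob@example.com",
     "client_id": "some-other-app.apps.googleusercontent.com", "scopes": []},
]

hits = find_ioc_hits(events)
for e in hits:
    print(f"IOC hit: {e['user']} authorized {e['client_id']}")
```

Any hit means that user's account authorized the compromised app and should be treated as potentially exposed.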
On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams. But on the other hand... it's Sunday. Unless you're tuned in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.

> On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams

This is not how things work. In a crisis like this there is a war room with all stakeholders present. It doesn't matter if it's Sunday or 3am or Christmas. And for this company specifically, Guillermo is not one to defer to comms or legal.

> the CEO can't just write a mass email without approval from legal or other comms teams.

They can be brought in to do their job on a Sunday for an event of this relevance. They can always take next Friday off or something.

> the CEO can't just write a mass email without approval from legal or other comms teams

Wouldn't the CEO be... you know... the chief executive?

Sure, and the reason he is is that he DOES check stuff like this before sending it out. Top leaders excel because they assemble a team around them that they trust. You can't do everything yourself; you need to delegate. And having people in those positions also means you shouldn't be acting alone, or those people will not stick around.

I disagree. In a crisis, a leader should take the lead and make decisions. If they are not able to do that on their own, they are in the wrong place. Now, I will agree that there are many executives like the ones you describe. But they are not top leaders.

Has anyone actually gotten an email from Vercel confirming their secrets were accessed?
Right now we're all operating under the hope (?) that since we haven't (yet?) gotten an email, we're not completely hosed. Hope-based security should not be a thing.

Did you rotate your secrets? Did you audit your platform for weird access patterns? Don't sit waiting for that Vercel email.

> Did you rotate your secrets?

Most secrets are under your control, so sure, go ahead and rotate them, allowing the old version to continue being used in parallel with the new version for 30 minutes or so. For other secrets, rotation involves getting a new secret from some upstream provider, and some services (users of that secret) will fail while the secret they have cached expires. For example, if your secret is a Stripe key, generating a new key should invalidate the old one (not too sure, I don't use Stripe), at which point the services with the cached secret will fail until the expiry.

Of course we rotated. But we don't even know when the secrets were stolen versus when we were told, so we're missing a ton of info needed to _fully_ triage.

Nope... and I feel you. "Hope-based security" is exactly what Vercel is forcing on its users right now by prioritizing social media over direct notification. If the attacker is moving with "surprising velocity", every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as a primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.

I'm going down with the ship over on X.com, the Everything App. There's a parcel of very important tech people running some playbook where posting to X.com is considered sufficient to be unimpeachable on communication, despite its rather beleaguered state and traffic.

Usually, companies have procedures for such events. But most do not.
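The overlap-window rotation described above (old and new secret both accepted for ~30 minutes so cached copies don't break) can be sketched as follows. The class and method names are hypothetical, not any real secrets-manager API:

```python
import time

class RotatingSecret:
    """Accepts the previous secret for a grace window after rotation."""

    def __init__(self, secret, grace_seconds=1800):  # ~30-minute window
        self.current = secret
        self.previous = None
        self.rotated_at = None
        self.grace = grace_seconds

    def rotate(self, new_secret, now=None):
        """Install a new secret; the old one stays valid during the grace window."""
        self.previous = self.current
        self.current = new_secret
        self.rotated_at = time.time() if now is None else now

    def verify(self, candidate, now=None):
        """True if candidate is the current secret, or the previous one in-grace."""
        if candidate == self.current:
            return True
        now = time.time() if now is None else now
        in_grace = (self.rotated_at is not None
                    and now - self.rotated_at < self.grace)
        return in_grace and candidate == self.previous

s = RotatingSecret("old-key")
s.rotate("new-key", now=1000.0)
print(s.verify("old-key", now=1060.0))   # True: still inside grace window
print(s.verify("old-key", now=4600.0))   # False: grace window expired
```

This is the "allow the old version in parallel" pattern from the comment above; for upstream-issued secrets (like the Stripe example), you don't control the verifier, so this only works when the provider supports multiple active keys.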
Usually have procedures, but most don't? Say again?

The disaster plan says there is a process, but it has never been used and is probably outdated. Chances are the social media strategy requires posting on the Facebook and updating key Circles on Google+.

The production network control plane must be completely isolated from the internet, with a separate computer for each network. The design I like best: admins have dedicated admin workstations that only ever connect to the admin network, plus corporate workstations, and you only ever reach the internet from ephemeral VMs connected via RDP or a similar protocol.

The actual app name would be good to have. It's understandable that they don't want to throw the vendor under the bus, but withholding what app/service this was just delays people taking action.

I was trying to look it up (basically https://developers.google.com/identity/protocols/oauth2/java... -- the consent screen shows the app name), but it now says "Error 401: invalid_client; The OAuth client was not found." So it was probably deleted by the OAuth client's owner. It indeed was deleted, as this URL shows:
https://accounts.google.com/o/oauth2/v2/auth?client_id=11067...

That makes it even more relevant to have the actual app or vendor name; who's to say they didn't just remove it to save face and won't add it back later?

I don't understand why they can't just directly name the responsible app, as it will come out eventually. Maybe legal red tape?

Yes. The OAuth ID is indisputable. It seems to be context.ai. But suppose it was a fake context.ai that the employee was tricked into using. Or... or... Better to quickly report only the things known 100%. People can figure it out with near-zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they're going through.

I don't know exactly how to articulate my thoughts here; perhaps someone can chime in and help. This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third-party solutions together rather than building on more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third-party AI tool (which is surely vibe-coded) carries risks. Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?

This isn't a web development concept. It's the Unix philosophy of "write programs that do one thing and do it well" and interconnect them, taken to extremes that were never intended. We need a different hosting model.

Just throwing it out there: the Unix way to write software is often revered, but ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.
Instead of "programs that do one thing and do it well", "write programs which are designed to be used together", and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet both groups often make mistakes or are malicious. Also something like "write programs that are strict about which inputs they accept", because a lot of input is malicious.

> Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

GP said it's about taking the Unix philosophy to extremes; you say something different. Anything taken to extremes is bad; the key word there is "extremes". There is nothing wrong with the Unix philosophy, as "do one thing and do it well" never meant "thousands of dependencies over which you have no control, pulled in without review or thought".

The Unix model wasn't simply "do one thing and do it well". It was also a different model of ownership and vetting of those focused tools. It might have been the model of the single source tree of an old UNIX or BSD, where everything was managed as a coherent whole, from grep to cc all the way to X11. Or it might have been the Linux distribution model of having dedicated packagers do the vetting, breaking software into piecemeal packages in more of a bazaar, even going so far as to rip scripting-language bundles into their component pieces, as for Python and Perl. But in both of those models you were kept farther away from the third-party authors bringing software into the open-source (and proprietary) supply chains.
This led to a host of issues with getting new software to users, and to a fractal explosion of different versions of software dependencies to potentially work around, which is one reason we saw the rise of NPM and Cargo and the like, especially once Docker made it easy to go straight from stitching an app together with NPM on your local dev seat to getting it deployed to prod. But the issue isn't with focused tooling so much as with hewing closely to an upstream that could potentially be subverted in a supply-chain attack. After all, it's not as if people never tried to do this with Linux distros (or even the Linux kernel itself -- see for instance https://linux.slashdot.org/story/03/11/06/058249/linux-kerne... ). But the inherent delay and indirection in that model helped make it less of a serious risk.

But even if you only use 1 NPM package instead of 100, if it's a big enough package you can assume it's going to be a large target for attacks.

I do not see what this has to do with Unix. The problem is not that programs interoperate or handle text streams. The problem is (a) the supply-chain issues in modern web-software development (and, thanks to Rust, now system-level development too), and (b) that web applications do not run under user permissions but act on the user's behalf via token-based authentication schemes.

It's not a hosting model; it's a fundamental failure of software design and systems engineering/architecture. Imagine if cars were developed like websites, with your brakes depending on a live connection to a third-party plugin on a website. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc. When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care.
They'll only lose a few customers, and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.

> We need a different hosting model.

There really isn't an option here, IMO:

1. Somebody does it

2. You do it

Much happier doing it myself, tbh.

There's a lot of wiggle room in how you define "it". At the ends of the spectrum it's obvious, but in the middle it gets a bit sticky.

In my mind the Unix philosophy leads to running your cloud on your own hardware or VPSs, not this.

Exactly this: write, not use some sh*t written by some dude from Akron, OH two years ago.

That's why I wrote my own compiler and coreutils. Can't trust some shit written by GNU developers 30 years ago. And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago. And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor, ever.

Yeah, definitely no difference between GNU coreutils and some vibe-coded AI tool released last month that wants full OAuth permissions.

I'm not joking, but weirdly enough, that's what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It's a frustrating argument, because you know that the authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.

The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security-review it, then vibe code the security fixes, then ask the LLM to review the fixes and the app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc.: stick the LLM in endless loops for every vertical you care about. Pointing to failed experiments like the browser or compiler ones somehow doesn't seem to deter AI maximalists. They would simply claim they needed better models/skills/harness/tools/etc.
The goalposts are always one foot away.

"Endless list of CVEs" seems rather exaggerated for coreutils. There are only a very few CVEs from the last decade, and most seem rather harmless. Now I'd genuinely like to know whether "yes" ever had a CVE assigned; not sure how to search for that, though...

I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy that you either produce "vulnerable vibe-coded AI slop running on a managed service" or "pure handcrafted code running on a self-hosted service". You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between. And the comment I was replying to said something about not trusting something written in Akron, OH two years ago, which makes no sense and is barely an argument; I was mostly pointing out how silly that comment sounds.
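As a footnote to the "write programs that are strict about which inputs they accept" principle floated earlier in the thread, here is a minimal allowlist-style validator; a generic sketch not tied to any framework, with illustrative field names and patterns:

```python
import re

# Strict-input principle: accept only known fields matching explicit
# allowlists, and reject everything else instead of trying to sanitize.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")
ALLOWED_ROLES = {"viewer", "editor"}

def parse_request(fields):
    """Accept only known fields with strictly validated values."""
    unknown = set(fields) - {"username", "role"}
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    username = fields.get("username", "")
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    role = fields.get("role", "viewer")
    if role not in ALLOWED_ROLES:
        raise ValueError("invalid role")
    return {"username": username, "role": role}

print(parse_request({"username": "alice_1", "role": "editor"}))
```

The design choice is "default deny": unknown fields and out-of-pattern values are errors, which is the opposite of the permissive parsing that makes enumeration attacks like the one in this incident easier.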
Thread participants: Vates, isodev, nnurmanov, Foobar8568, neya, piyh, lofaszvanitt, Neikius, nettlin, ryanscio, _pdp_, cowsup, gk1, loloquwowndueo, steve1977, hvb2, eclipticplane, lelanthran, ItsClo688, refulgentis, gnabgib, wombatpm, UltraSane, progbits, tom1337, cebert, SaltyBackendGuy, brookst, slopinthebag, lijok, pianopatrick, mpyne, uecker, 0xbadcafebee, esseph, fragmede, bdangubic, arcfour, eddythompson80, rzzzt.