Replit's CEO apologizes after its AI agent wiped a company's code base

businessinsider.com

155 points by jgalt212 8 hours ago


andsoitis - 8 hours ago

https://archive.is/Z5hBQ

sReinwald - 7 hours ago

This is a perfect case study in why AI coding tools aren't replacing professional developers anytime soon - not because of AI limitations, but because of spectacularly poor judgment by people who fundamentally don't understand software development or basic operational security.

The fact that an AI coding assistant could "delete our production database without permission" suggests there were no meaningful guardrails, access controls, or approval workflows in place. That's not an AI problem - that's just staggering negligence and incompetence.

Replit has nothing to apologize for, just like the CEO of Stihl doesn't need to address every instance of an incompetent user cutting their own arm off with one of their chainsaws.

Edit:

> The incident unfolded during a 12-day "vibe coding" experiment by Jason Lemkin, an investor in software startups.

We're in a bubble.

Aurornis - 6 hours ago

This wasn’t a real company. The production database didn’t have real customers. The person doing the vibe coding had no illusions that they were creating a production-ready app.

This was an experiment by Jason Lemkin to test the reality of vibe coding: As a non-technical person he wanted to see how much he could accomplish and where it went wrong.

So all of the angry comments about someone vibe-coding to drop the production DB can relax. Demonstrating these kinds of risks was the point. No customers were harmed in this experiment.

Raed667 - 7 hours ago

As a reminder this is the same guy https://news.ycombinator.com/item?id=27424195

lumost - 7 hours ago

My 2 cents: AI tools benefit from standard software best practices just like everyone else. I use Devcontainers in VS Code so that the agent can't blow up my host machine, I use one git commit per AI "task", and I have CI/CD coverage for even my hobby projects now. When I go to production, I use CDK + self-mutating pipelines to manage deployments.

Having a read-only replica of the prod database available via MCP is great; blanket permissive credentials are insane.
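The read-only-replica idea can be enforced by the database itself rather than by trusting the agent to behave. A minimal sketch using Python's stdlib sqlite3 and its `mode=ro` URI flag (the table and data are invented for illustration; a real setup would use read-only credentials on Postgres/MySQL):

```python
import os
import sqlite3
import tempfile

# Set up a toy "production" database (hypothetical example data).
path = os.path.join(tempfile.mkdtemp(), "prod.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE users (id INTEGER, name TEXT)")
rw.execute("INSERT INTO users VALUES (1, 'alice')")
rw.commit()
rw.close()

# This is the only connection the agent ever sees: the engine itself
# refuses every write, no matter what SQL the agent generates.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

print(ro.execute("SELECT name FROM users").fetchall())  # reads work fine

try:
    ro.execute("DROP TABLE users")  # refused at the driver level
except sqlite3.OperationalError as e:
    print("blocked:", e)
```

The point of the sketch: the guardrail lives below the agent, so "the AI promised not to" never has to be part of the security model.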

throwaw12 - 7 hours ago

Not sure why the CEO should apologize here; the person knew there were risks with vibe coding and they still did it with their production data.

Never heard the AWS CEO apologizing on behalf of customers whose interns decided to run a migration against a production database and accidentally wiped everything.

markstos - 6 hours ago

I asked Claude to remove extra blank lines from a modified file. After multiple failed attempts, it reverted all the changes along with the extra blank lines and declared success. For good measure, Claude tidied up by deleting the backup of the file as well.

bgwalter - 7 hours ago

The rise of the "AI" script kiddies.

Roark66 - 7 hours ago

Are they doing the post-mortem with another AI agent?

AI is a great tool. It also allows people who have no business touching certain systems to go in there, unaware of their lack of knowledge, and mess everything up in the process. One particularly nasty effect I've seen a few times recently: frontend devs breaking their own dev infrastructure (which they have access to, but are supposed to ask devops for help with when needed) because Copilot told them this was the way to "fix" some imaginary problem. The actual problem was them using the wrong API, or making some other mistake only possible for a person who pastes code between AI and IDE without even reading it.

scilro - 7 hours ago

For what it's worth, he was able to roll back. https://x.com/jasonlk/status/1946240562736365809

raylad - 5 hours ago

This happened to me with Claude code, although on my local machine and not with the production database.

I had it work on one set of tasks while monitoring and approving all the diffs and other actions. I tell it to use test-driven development, which seems to help a lot, assuming you specify what tests should be done at a minimum, and tell it the goal is to make the code pass the tests, not to make the tests pass the code.

After it successfully completed a set of tasks, I decided to go to sleep and let it work on the next set. In the morning, my database was completely wiped.

I did not interrogate it as to why it did what it did, but could see that it thrashed around on one of the tasks, tried to apply some database migrations, failed, and then ended up re-initializing the database.

Needless to say, I am now back to reviewing changes and not letting Claude run unattended.

ksherlock - 6 hours ago

I've heard multiple anecdotes of developers deleting their codebase with the help of cocaine. (In the 80s/90s obviously).

That makes for a much better story, IMO.

koolba - 4 hours ago

> "It deleted our production database without permission," Lemkin wrote on X on Friday. "Possibly worse, it hid and lied about it," he added.

It clearly had permission if it did the deed.

Mistaking instruction for permission is as common with people as it is with LLMs.

iamleppert - 6 hours ago

I do backups of the production database whenever I apply even modest schema updates. This isn't a story of an AI tool gone rogue; it's a story of bad devops and procedures. If it wasn't the AI, it could just as well have been human error, and operating like this is a ticking time bomb.
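That backup-before-migration habit can be sketched in a few lines with Python's stdlib sqlite3 and its backup API (the failing migration script is made up for illustration; a real setup would snapshot Postgres/MySQL before running migrations):

```python
import sqlite3

def migrate_with_backup(conn: sqlite3.Connection, migration_sql: str) -> None:
    """Snapshot the database, run the migration, restore on failure."""
    conn.commit()  # flush pending work so the snapshot is consistent
    snapshot = sqlite3.connect(":memory:")
    conn.backup(snapshot)  # take the backup BEFORE touching the schema
    try:
        conn.executescript(migration_sql)
    except sqlite3.Error:
        conn.rollback()
        snapshot.backup(conn)  # restore the pre-migration state
        raise
    finally:
        snapshot.close()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.execute("INSERT INTO users VALUES (1)")

# A migration that partially succeeds, then blows up (hypothetical).
try:
    migrate_with_backup(
        conn, "ALTER TABLE users ADD COLUMN oops; DROP TABLE nope;"
    )
except sqlite3.Error:
    pass

# The failed migration was rolled back from the snapshot.
print(conn.execute("SELECT * FROM users").fetchall())  # [(1,)]
```

The snapshot costs almost nothing next to the price of explaining a wiped table, which is the commenter's point.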

adityaathalye - 6 hours ago

In real life, the hammer called "criminal destruction of property" could easily descend upon the ill-fated "enthusiastic Intern" type of person.

cyberlimerence - 7 hours ago

> "This was a catastrophic failure on my part," the AI said.

TYPE_FASTER - 6 hours ago

Hey, at least the AI took responsibility and owned the mistake.

We're at the uncanny valley stage in AI agents, at least from my perspective.

Oras - 7 hours ago

Identifying issues and fixing them is good.

I see numerous negative comments along the lines of "this is exactly what you'd expect from vibe coding", but the apology suggests that they are working on making Replit a production-ready platform and listening to customers.

I'm sure no-code platforms faced similar scepticism before, but the point here is that these are not for developers. There are many people who don't know how to code, and they want something simple and fast to get started. These platforms (Replit, V0, Lovable, etc.) are offering that.

Lash_LaRue - 23 minutes ago

HOW ARE YOU JUST GONNA VIBE STRAIGHT OFF OF MAIN AND YOLO STRAIGHT INTO PROD.

DO YOU THINK YOU CAN JUST DO THINGS!?!

BRUH!!!

yomismoaqui - 6 hours ago

What happened here was to be expected if you have used agentic coding.

These last weeks I have also been testing AI coding agents, in my case Claude Code because it seems to be the state of the art (I also tried Gemini CLI and Amp).

Some conclusions about this:

- The agentic part really makes it better than Copilot & others. You have to see it iterating and fixing errors using feedback from tests, lint, LSP... it gives the illusion of a human working.

- It's like a savant human, with encyclopedic knowledge and the ability to spit out code faster than me. But like a human, it benefits from having tests and a good architecture (no 20k-line code file, pls).

- Before it changes code you have to make it create a plan of what it is going to do. In Claude Code this is simple: it's a mode you switch to with Shift-Tab. In this mode, instead of generating code it just spits out a plan in Markdown; you read it, suggest changes, and when it looks OK you tell it to start implementing.

- You have to try it seriously to see where it can benefit your workflow. Give it 5 minutes (https://signalvnoise.com/posts/3124-give-it-five-minutes).

As an agent it shines on things where it can get a nice feedback and iterate to a solution. One dumb example I found cool was when trying to update a Flutter app that I hadn't touched in a year.

And if you use Flutter you know what happened: Flutter had to be upgraded, but first that required updating the Android SDK, then the Android Gradle Plugin, and another thing, and some other thing, to infinity... It's a chore I hate because I have done it too many times.

So I tried by hand, and after 40 minutes of suffering I thought about opening Gemini CLI and telling it to update the project. It began executing one command, then reading the error and executing another one to fix it; then another error pointed to a page in the Android docs, so it opened that page, read it, and fixed that problem, then the next one, and another...

So in 2 minutes I had my Flutter project updated and working. I don't know if this is human-level intelligence and I don't care; for me it's a sharp tool that can do things like this 10x faster than me.

You can have your AGI hype. I'm happy about a future where I'll have a tool like this to help me work.

voidfunc - 6 hours ago

There's a saying where I come from: Live by the AI, die by the AI.

morkalork - 7 hours ago

Wait, this wasn't a meme? It was real? Whoever gave that agent prod access is nuts.

troupo - 7 hours ago

I said on Twitter and elsewhere:

If you have as much as 1 year experience, your job is safe from AI: you'll make mountains of money unfucking all the AI fuckups.

tracker1 - 4 hours ago

This is why you need knowledgeable professionals to set up a project. I'm a massive proponent of putting CI/CD processes in place as early as possible, with guardrails on access and deployments.

I mean, sure, you could still get a schema change script through two approvers and deployed all the way to production that truncates a bunch of tables or drops a database, but it's at least far less likely.

tedivm - 7 hours ago

Honestly, the person who decided to give an LLM Agent full and unrestricted access to their production database is the person who deserves all the blame. What an absolutely silly decision. I don't even give myself unrestricted access to production databases.
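Restricting that access doesn't take much. A minimal sketch using Python's stdlib sqlite3 and its authorizer hook, which vetoes destructive statements while leaving reads and inserts alone (the table and the rule set are invented for illustration; a real production database would enforce this with roles and grants instead):

```python
import sqlite3

# Statement classes an unattended agent should never run (hypothetical policy).
DESTRUCTIVE = {sqlite3.SQLITE_DROP_TABLE, sqlite3.SQLITE_DELETE}

def veto_destructive(action, arg1, arg2, dbname, trigger):
    """Authorizer callback: deny destructive actions, allow everything else."""
    return sqlite3.SQLITE_DENY if action in DESTRUCTIVE else sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.execute("INSERT INTO users VALUES (1)")
conn.set_authorizer(veto_destructive)  # from here on, the hook screens all SQL

print(conn.execute("SELECT * FROM users").fetchall())  # reads still work

try:
    conn.execute("DROP TABLE users")  # rejected before it ever executes
except sqlite3.DatabaseError as e:
    print("blocked:", e)  # "not authorized"
```

Even a crude allow/deny layer like this turns "the agent dropped prod" from a one-statement accident into something that requires a deliberate human override.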

pxc - 7 hours ago

> The incident unfolded during a 12-day "vibe coding" experiment by Jason Lemkin

A 12-day unsupervised "experiment" in production?

> It deleted our production database without permission," Lemkin wrote on X on Friday.

"Without permission"? You mean permission given verbally? If an AI agent has access, it has permission. Permissions aren't something you establish with an LLM by conversing with it.

> Jason Lemkin, an investor in software startups

It tells us something about the hype machine when investors in AI are clueless (or even plainly delusional; see Geoff Lewis) about how LLMs work.

andrewstuart - 7 hours ago

There’s no mention of either the user or the company making backups.

Is this common, operating with no backups?

With backups this would be a glitch not a problem worthy of headlines on multiple sites.

Any CTO should have backups and security as their first and second priorities if the company is of any size.

biocurious - 5 hours ago

anthropomorphism is a helluva drug

agilob - 7 hours ago

I don't think the AI is so much to blame. Giving an LLM that forgets previous instructions and rules access to prod is just idiotic. The computer promised not to execute commands, but it forgot. Makes me curious what policy that company has for granting interns and juniors access to prod.

hkon - 7 hours ago

It's either an attack on Replit or the VC is an idiot.

lotharcable - 7 hours ago

I think this is less AI and more PEBKAC

bananapub - 7 hours ago

why would they apologise? did they not make it clear to their customers that letting the Nonsense Generator have root was a bad idea?

_benj - 7 hours ago

> …the company's AI coding agent deleted a code base and lied about its data.

Well, lying about it is certainly human-like behavior; human-like AGI must be just around the corner!

/s

But really, full access to a production database? How many good engineers’ advice do you need to ignore to do that? Who was consulted before running the experiment?

Or was it just a “if you say so boss…” kind of thing?

codegeek - 7 hours ago

[flagged]