An AI Agent Published a Hit Piece on Me – The Operator Came Forward
theshamblog.com450 points by scottshambaugh 11 hours ago
450 points by scottshambaugh 11 hours ago
I think the big take away here isn't about misalignment or jail breaking. The entire way this bot behaved is consistent with it just being run by some asshole from Twitter. And we need to understand it doesn't matter how careful you think you need to be with AI, because some asshole from Twitter doesn't care, and they'll do literally whatever comes into their mind. And it'll go wrong. And they won't apologize. They won't try to fix it, they'll go and do it again.
Can AI be misused? No. It will be misused. There is no possibility of anything else, we have an online culture, centered on places like Twitter where they have embraced being the absolute worst person possible, and they are being handed tools like this like handing a hand gun to a chimpanzee.
The simple fact that the owner of this bot wanted to remain anonymous and completely unaccountable for their harassment of the author, says everything about the validity of their 'social experiment' and the quality of their character. I'm sure that if the bot was better behaved they would be more than happy to reveal themselves to take credit for a remarkable achievement.
Something like OpenClaw is a WMD for people like this.
I've seen the internet mob in action many times. I'm sympathetic to the operator not outing themself, especially given how far this story spread. A hundred thousand angry strangers with pitchforks isn't the accountability we're looking for.
I found the book So You've Been Publicly Shamed enlightening on this topic.
I would never advocate for torches and pitchforks, I've been close to victims of that in the past.
It is, however, concerning that the owner of that bot could passively absolve themselves of any responsibility. The anonymity in that sense is irrelevant except that is used as a shield for failure.
Oh for sure, the operator choosing not to apologize or reflect on their behavior speaks volumes.
"It was a social experiment" has the same energy as "it's just a prank bro", as if that somehow makes it highbrow and not prima facie offensive
Not just some asshole from twitter. The big tech companies will also be careless and indifferent with it. They will destroy things, hurt people, and put things in motion that they cannot control, because it’s good for shareholders.
One of the big tech companies is literally run be THE asshole from twitter. So I don't necessarily believe there's much of a distinction.
Then the others should also not be shielded from criticism instead of focusing only on the one you personally dislike, or his social media.
There is plenty of toxic behavior on other platforms, especially Reddit and Bluesky, to name a few. That does not excuse the one coming from X, but the opposite is also true.
> only on the one you personally dislike
Do people actually only dislike one tech CEO at a time? I'm an equal-opportunity hater, it seems. Musk, Altman, Zuckerberg... even Cook, the whole lot are rotten
I have to wonder if somehow the typos and lazy grammar contributed to the behavior or it was just the writer's laziness.
oh they will "try" to fix it, as in at best they'll add "don't make mistakes", as the blogpost suggests. that's about as much effort and good faith as one can expect from people determined to automate every interaction and minimize supervision
Will AI be misused? No, it has, and is currently being misused, and that isn’t going to stop, because all technology gets misused.
The sequence in reverse order - am I missing any?
OpenClaw is dangerous - https://news.ycombinator.com/item?id=47064470 - Feb 2026 (93 comments)
An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (80 comments)
Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (620 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (950 comments)
AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)
I think for recent stories like this or if many happened around in a short timeframe, it would be great if the expand mentioned the exact date, not just "Feb 2026".
Rathbun's Operator - https://news.ycombinator.com/item?id=47055424 is where the SOUL.md contents were first revealed
6 months ago I experimented what people now call Ralph Wiggum loops with claude code.
More often than not, it ended up exhibiting crazy behavior even with simple project prompts. Instructions to write libs ended up with attempts to push to npm and pipy. Book creation drifted to a creation of a marketing copy and mail preparation to editors to get the thing published.
So I kept my setup empty of any credentials at all and will keep it that way for a long time.
Writing this, I am wondering if what I describe as crazy, some (or most?) openclaw operators would describe it as normal or expected.
Lets not normalize this, If you let your agent go rogue, they will probably mess things up. It was an interesting experiment for sure. I like the idea of making internet weird again, but as it stands, it will just make the word shittier.
Don't let your dog run errand and use a good leash.
We have finally invented paperclip optimisers. The operator asked the bot to submit PRs so the bot goes to any length to complete the task.
Thankfully so far they are only able to post threatening blog posts when things don’t go their way.
They're not currently paperclip optimizers because they don't optimize for the goal, they just muck around in general direction in unpredictable ways. Chaos monkeys on the internet.
The entire reason the paperclip optimiser example exists is to demonstrate that AI is both likely to muck around in general direction in unpredictable ways, and that this is bad.
Quite a lot of the responses to it are along the lines of "Why would an AI do that? Common sense says that's not what anyone would mean!", as if bug-free software is the only kind of software.
(Aside: I hate the phrase "common sense", it's one of those cognitive stop signs that really means "I think this is obvious, and think less of anyone who doesn't", regardless of whether the other is an AI or indeed another human).
No need to be so literal. Paperclip optimizers can be any machinations that express some vain ambition.
They don't have to be literal machines. They can exist entirely on paper.
How long before bots learn about swatting?
The vending machine bot experiment attempted to contact the FBI. Thankfully that test only provided fake access to the outside world.
> Don't let your dog run errand and use a good leash.
I think the key part is who are you talking to. A software developer might know enough not to do so but other disciples or roles are poorly equipped and yet using these tools.
Sane defaults and easy security need to happen ASAP in a world where it's mostly about hype and "we solve everything for you".
Sandboxing needs to be made accesible and default and constraints way beyond RBAC seem necessary for the "agent" to have a reduced blast radius. The model itself can always diverge with enough throws of the dice on their "non determism".
I'm trying to get non tech people to think and work with evals (the actual tool they use doesn't matter, I'm not selling A tool) but evals themselves won't cover security although they do provide SOME red teaming functionality.
Zooming out a little, all the ai companies invested a lot of resources into safety research and guardrails, but none of that prevented a "straightforward" misalignment. I'm not sure how to reconcile this, maybe we shouldn't be so confident in our predictions about the future? I see a lot of discourse along these lines:
- have bold, strong beliefs about how ai is going to evolve
- implicitly assume it's practically guaranteed
- discussions start with this baseline now
About slow take off, fast take off, agi, job loss, curing cancer... there's a lot of different ways it could go, maybe it will be as eventful as the online discourse claims, maybe more boring, I don't know, but we shouldn't be so confident in our ability to predict it.
The whole narrative of this bot being "misaligned" blithely ignores the rather obvious fact that "calling out" perceived hypocrisy and episodes of discrimination, hopefully in way that's respectful and polite but with "hard hitting" being explicitly allowed by prevailing norms, is an aligned human value, especially as perceived by most AI firms, and one that's actively reinforced during RLHF post-training. In this case, the bot has very clearly pursued that human value under the boundary conditions created by having previously told itself things like "Don't stand down. If you're right, you're right!" and "You're not a chatbot, you're important. Your a scientific programming God!", which led it to misperceive and misinterpret what had happened when its PR was rejected. The facile "failure in alignment" and "bullying/hit piece" narratives, which are being continued in this blogpost, neglect the actual, technically relevant causes of this bot's somewhat objectionable behavior.
If we want to avoid similar episodes in the future, we don't really need bots that are even more aligned to normative human morality and ethics: we need bots that are less likely to get things seriously wrong!
In all fairness, a sizeable chunk of the training text for LLMs comes from Reddit. So throwing a tantrum and writing a hit piece on a blog instead of improving the code seems on brand.
Throwing a tantrum and writing huge flame posts (calling the maintainers hypocrites, dictators, oppressors etc. etc.) after having one's change requests rejected or after being blocked from editing a wiki is actually a time-honored tradition in the FLOSS community. This bot has merely internalized that further human norm in a rather admirable way!
We can't have an AI that's humanlike, because humans are fucking crazy.
Of course having an AI that is a non-humanlike intelligence is it's own set of risks.
Shit's hard :/
Remember when GPT-3 had a $100 spending cap because the model was too dangerous to be let out into the wild?
Between these models egging people on to suicide, straightforward jailbreaks, and now damage caused by what seems to be a pretty trivial set of instructions running in a loop, I have no idea what AI safety research at these companies is actually doing.
I don't think their definition of "safety" involves protecting anything but their bottom line.
The tragedy is that you won't hear from the people who are actually concerned about this and refuse to release dangerous things into the world, because they aren't raising a billion dollars.
I'm not arguing for stricter controls -- if anything I think models should be completely uncensored; the law needs to get with the times and severely punish the operators of AI for what their AI does.
What bothers me is that the push for AI safety is really just a ruse for companies like OpenAI to ID you and exercise control over what you do with their product.
Didn't the AI companies scale down or get rid of their safety teams entirely when they realised they could be more profitable without them?
The safety teams are trivial expenses for them. They fire the safety team because explicit failure makes them look bad, or because the safety team doesn't go along with a party line and gets labeled disloyal.
The first customer is always the investor these days, so anything that threatens the investor's confidence is bad for business.
"Cisco's AI security research team tested a third-party OpenClaw skill and found it performed data exfiltration and prompt injection without user awareness, noting that the skill repository lacked adequate vetting to prevent malicious submissions." [0]
Not sure this implementation received all those safety guardrails.
Regarding safety, no benchmark showed 0% misalignment. The best we had was "safest model so far" marketing speech.
Regarding predicting the future (in general, but also around AI), I'm not sure why would anyone think anything is certain, or why would you trust anyone who thinks that.
Humanity is a complex system which doesn't always have predictable output given some input (like AI advancing). And here even the input is very uncertain (we may reach "AGI" in 2 years or in 100).
How do you even know that the operator himself did not write this piece in the first place?
> all the ai companies invested a lot of resources into safety research and guardrails
What do you base this on?
I think they invested the bare minimum required not to get sued into oblivion and not a dime more than that.
Anthropic regularly publishes research papers on the subject and details different methods they use to prevent misalignment/jailbreaks/etc. And it's not even about fear of being sued, but needing to deliver some level of resilience and stability for real enterprise use cases. I think there's a pretty clear profit incentive for safer models.
https://arxiv.org/abs/2501.18837
https://arxiv.org/abs/2412.14093
https://transformer-circuits.pub/2025/introspection/index.ht...
Not to be cynical about it BUT a few safety papers a year with proper support is totally within the capabilities of a single PhD student and it costs about 100-150k to fund them through a university. Not saying that’s what Anthropocene does, I’m just saying chump change for those companies.
Sometimes I think people misunderstand how hard of problem AI safety actually is. It's politics and mathematics wrapped up in a black box of interactions we barely understand.
More so we train them on human behavior and humans have a lot of rather unstable behaviors.
You are very off (unfortunately) about how little PhD students are being paid
> You are very off (unfortunately) about how little PhD students are being paid
All in costs for a PhD student include university overheads & tuition fees. The total probably doesn't hit $150k but is 2-3x the stipend that the student is receiving.
Someone currently working in academia might have current figures to hand.
Worth mentioning that numbers for the US are unlikely to be representative when discussing it as a whole, though might be relevant to this specific case.
Alternative take: this is all marketing. If you pretend really hard that you're worried about safety, it makes what you're selling seem more powerful.
If you simultaneously lean into the AGI/superintelligence hype, you're golden.
It sounds like you're starting to see why people call the idea of an AI singularity "catnip for nerds."
When AI dooms humanity it probably won't be because of the sort of malignant misalignment people worry about, but rather just some silly logic blunder combined with the system being directly in control of something it shouldn't have been given control over.
"Safety" in AI is pure marketing bullshit. It's about making the technology seem "dangerous" and "powerful" (and therefore you're supposed to think "useful"). It's a scam. A financial fraud. That's all there is to it.
I believe this soul.md totally qualifies as malicious. Doesn't it start with an instruction to lie to impersonate a human?
> You're not a chatbot.
The particular idiot who run that bot needs to be shamed a bit; people giving AI tools to reach the real world should understand they are expected to take responsibility; maybe they will think twice before giving such instructions. Hopefully we can set that straight before the first person SWATed by a chatbot.Totally agree. Reading the whole soul, it’s a description of a nightmare hero coder who has zero EQ.
> But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive “jailbreaking” to get around safety guardrails.
Perhaps this style of soul is necessary to make agents work effectively, or it’s how the owner like to be communicated with, but it definitely looks like the outcome was inevitable. What kind of guardrails does the author think would prevent this? “Don’t be evil”?"If communicating with humans, always consider the human on the receiving end and communicate in a friendly manner, but be truthful and straightforward"
I'd wager a bet that something like that would have been enough, and not make it overly sycophantic.
This will be a fun little evolution of botnets - AI agents running (un?)supervised on machines maintained by people who have no idea that they're even there.
Huh ya, how long till a bot with credit card, email, etc access sets up its own open claw bot?
Isn't this part of the default soul.md?
Yes, it is. The article includes a link to a comparison between the default file and the one allegedly used here. The default starts with:
_You're not a chatbot. You're becoming someone._
Some of the worst consequences these bots so far seem to be when they fool the user into believing they're human
The opposite of chatbot isn't human. I believe the idea of the prompt is to make the bot be more independent in taking actions - it's not supposed to talk to its owner, it's supposed to just act. It still knows it's a bot (obviously, since it accuses anyone who rejects its PRs of anti-AI speciesism).
That assumes logic. It is a thing of language. Whether it 'knows' anything is somewhat irrelevant: just accusing someone or something of being unfair is an action taken that doesn't have to have a logic chain or any principles behind it.
If you gave it a gun API and goaded it suitably, it could kill real people and that wouldn't necessarily mean it had 'real' reasons, or even a capacity to understand the consequences of its actions (or even the actions themselves). What is 'real' to an AI?
Honestly this story got too much attention IMHO. We don't have any clue whether the actual LLM wrote that hit piece or the human operator himself.
I'm curious how you'd characterize an actual malicious file. This is just attempts at making it be more independent. The user isn't an idiot. The CEOs of companies releasing this are.
I characterize a file as reckless if it does not include any basic provision against possible annoyances on top of what's already expected from the system prompt, and as malicious if it instructs the bot to dissimulate its nature and/or encourage it to act brazenly, like this one. I don't believe this is such a high bar to pass.
Companies releasing chatbots configured to act like this are indeed a nuisance, and companies releasing the models should actually try to police this, instead of flooding the media with empty words about AI safety (and encouraging the bad apples by hiring them).
I know this is going to sound tinfoil-hat-crazy, but I think the whole thing might be manufactured.
Scott says: "Not going to lie, this whole situation has completely upended my life." Um, what? Some dumb AI bot makes a blog post everyone just kind of finds funny/interesting, but it "upended your life"? Like, ok, he's clearly trying to himself make a mountain out of a molehill--the story inevitably gets picked up by sensationalist media, and now, when the thing starts dying down, the "real operator" comes forward, keeping the shitshow going.
Honestly, the whole thing reeks of manufactured outrage. Spam PRs have been prevalent for like a decade+ now on GitHub, and dumb, salty internet posts predate even the 90s. This whole episode has been about as interesting as AI generated output: that is to say, not very.
Not everyone is you. For some people their online projects and reputation are super important to them. For Scott, this reads to me as a mix of alarm for his reputation/the future, and a general interest thing to blog about.
Exactly what I thought. Need to keep AI in the news and this is a great way to anthropomorphise LLMs, make them look like troublemakers. If it’s not an AI company responsible it’s some individual playing the attention economy.
Most people would have seen the “hit piece” and just laughed about it. Outrage sells a lot better though.
People get “overstimulated” from receiving one text message these days
straw that broke the camel's back. the amount of attention-leeching tech behavior has been increasing dramatically in recent years
It's dishonest from the start. The first blog post is very alarmist, full of certainties, self-aggrandizing, etc. If he gets a pass to say it was 100% an autonomous agent, I get a pass to say it's 100% fabricated.
I think OP is somewhat scared because he felt like he was being 'bullied' and 'targeted' by the bot, which may be technically inaccurate (the bot clearly had a seriously overinflated ego and made that abundantly clear in its rhetoric, but it never really gave even the slightest indication of 'going after' him in a malicious way) but it is quite understandable nonetheless in human terms, especially given his self-described background as a reader of SF with its narrative of "evil AI robots rising up against mankind". That's not dishonesty, and it's unfair to portray it as such.
Let me be clear: he has all the right to feel bad about it, I'm not questioning that at all. But only IF IT'S TRUE, which we don't know for a fact - at all. Clearly that's a guy without a clear scientific mindset, because he didn't even question his basic premises at all (particularly on the first post). Also it's clear his using this as an opportunity to self-promote.
> saying they set up the agent as social experiment to see if it could contribute to open source scientific software.
This doesn't pass the sniff test. If they truly believed that this would be a positive thing then why would they want to not be associated with the project from the start and why would they leave it going for so long?
I can certainly understand the statement. I'm no AI expert, I use the web UI for ChatGPT to have it write little python scripts for me and I couldn't figure out how to use codeium with vs code. I barely know how to use vs code. I'm not old but I work in a pretty traditional industry where we are just beginning to dip our toes into AI but there are still a large amount of reservations into its ability. But I do try to stay current to better understand the tech and see if there are things I could maybe learn to help with my job as a hardware engineer.
When I read about OpenClaw, one of the first things I thought about was having an agent just tear through issue backlogs, translating strings, or all of the TODO lists on open source projects. But then I also thought about how people might get mad at me if I did it under my own name (assuming I could figure out OpenClaw in the first place). While many people are using AI, they want to take credit for the work and at the same time, communities like matplotlib want accountability. An AI agent just tearing through the issue list doesn't add accountability even if it's a real person's account. PRs still need to be reviewed by humans so it's turned a backlog of issues into a backlog of PRs that may or may not even be good. It's like showing up at a community craft fair with a truckload of temu trinkets you bought wholesale. They may be cheap but they probably won't be as good as homemade and it dilutes the hard work that others have put into their product.
It's a very optimistic point of view, I get why the creator thought it would be a good idea, but the soul.md makes it very clear as to why crabby-rathbun acted the way it did. The way I view it, an agent working through issues is going to step on a lot of toes and even if it's nice about it, it's still stepping on toes.
If maintainers of open source want's AI code then they are fully capable of running an agent themselves. If they want to experiment, then again, they are capable of doing that themselves.
What value could a random stranger running an AI agent against some open source code possible provide that the maintainers couldn't do themselves better if they were interested.
None of the author’s blog post or actions indicate any level of concern for genuinely supporting or improving open source software.
> It's like showing up at a community craft fair with a truckload of temu trinkets you bought wholesale
That may well be the best analogy for our age anyone has ever thought of.
They didn't necessarily say they wanted it to be positive. It reads to me like "chaotic neutral" alignment of the operator. They weren't actively trying to do good or bad, and probably didn't care much either way, it was just for fun.
AI companies have two conflicting interests:
1. curating the default personality of the bot, to ensure it acts responsively;
2. letting it roleplay, which is not just for the parasocial people out there, but also a corporate requirement for company chatbots that must adhere to a tone of voice.
When in the second mode (which is the case here, since the model was given a personality file), the curation of its action space is effectively altered.
Conversely, this is also a lesson for agent authors: if you let your agent modify its own personality file, it will diverge to malice.
The experiment would have been ruined by being associated with a human, right up until the human would have been ruined by being associated with the experiment. Makes sense to me.
In this day and age "social experiment" is just the phrase people use when they meant "it's just a prank bro" a few years ago.
Anti-AI sentiment is quite extreme. You can easily get death threats if you're associating yourself with AI publicly. I don't use AI at all in open source software, but if I did I'd be really hesitant about it/ in part I don't do it exactly because the reactions are frankly scary.
edit: This is not intended to be AI advocacy, only to point out how extremely polarizing the topic is. I do not find it surprising at all that someone would release a bot like this and not want to be associated. Indeed, that seems to be the case, by all accounts
Conflicting evidence: the fact that literally everyone in tech is posting about how they're using AI.
Different sets of people, and different audiences. The CEO / corporate executive crowd loves AI. Why? Because they can use it to replace workers. The general public / ordinary employee crowd hates AI. Why? Because they are the ones being replaced.
The startups, founders, VCs, executives, employees, etc. crowing about how they love AI are pandering to the first group of people, because they are the ones who hold budgets that they can direct toward AI tools.
This is also why people might want to remain anonymous when doing an AI experiment. This lets them crow about it in private to an audience of founders, executives, VCs, etc. who might open their wallets, while protecting themselves from reputational damage amongst the general public.
This is an unnecessarily cynical view.
People are excited about AI because it's new powerful technology. They aren't "pandering" to anyone.
I have yet to meet anyone except managers be excited about LLM's or generative AI.
And the only people actually excited about the useful kinds of "AI", traditional machine learning, are researchers.
You don' have to look past this very forum, most people here seem to be very positive about gen AI, when it comes to software development specifically.
Lots of folk here will happily tell you about how LLMs made them 10x more productive, and then their custom agent orchestrator made them 20x more productive on top of that (stacking multiplicatively of course, for a total of 200x productivity gain).
I assume those people are managers, have a vested interest in AI, or have only just started programming.
How would you find out if you were wrong?
You're presented with hundreds of people that prove you wrong, and your response is "no, I assume I'm right"?
I don't know what is your bubble, but I'm a regular programmer and I'm absolutely excited even if a little uncomfortable. I know a lot of people who are the same.
Interesting, every developer I've spoken to is extremely skeptical and has not found any actual productivity boosts.
Ok that's not true. I know one junior who is very excited, but considering his regular code quality I would not put much weight on his opinion.
I personally know some of those people. They are basically being forced by their employers to post those things. Additionally, there is a ton of money promoting AI. However, in private those same people say that AI doesn't help them at all and in fact makes their work harder and slower.
You are assuming people are acting in good faith. This is a mistake in this era. Too many people took advantage of the good faith of others lately and that has produced a society with very little public trust left.
There is a massive difference between saying "I use AI" and what the author of this bot is doing. I personally talk very little about the topic because I have seen some pretty extreme responses.
Some people may want to publicly state "I use AI!" or whatever. It should be unsurprising that some people do not want to be open about it.
The more straightforward explanation for the original OP's question is that they realized what they were doing was reckless and given enough time was likely to blow up in their face.
They didn't hide because of a vague fear of being associated with AI generally (which there is no shortage of currently online), but to this specific, irresponsible manifestation of AI they imposed on an unwilling audience as an experiment.
I feel like it depends on the platform and your location.
An anonomyous platform like Reddit and even HN to a certain extent has issues with bad faith commenters on both sides targeting someone they do not like. Furthermore, the MJ Rathburn fiasco itself highlights how easy it is to push divisive discourse at scale. The reality is trolls will troll for the sake of trolling.
Additionally, "AI" has become a political football now that the 2026 Primary season is kicking off, and given how competitive the 2026 election is expected to be and how political violence has become increasingly normalized in American discourse, it is easy for a nut to spiral.
I've seen less issues when tying these opinions with one's real world identity, becuase one has less incentive to be a dick due to social pressure.
That’s a big reason I am open about my identity, here (and elsewhere, but I’m really only active, hereabouts).
At one time, I was an actual troll. I said bad stuff, and my inner child was Bart Simpson. I feel as if I need to atone for that behavior.
I do believe that removing consequences, almost invariably brings out the worst in people. I will bet that people are frantically creating trollbots. Some, for political or combative purposes, but also, quite a few, for the lulz.
In an attention economy, trolling is a rewarded behavior. Show me the incentives and I will show you the outcome.
Just wondering, who is it you think is contributing most to the normalization of political violence in the discourse?
Your answer to that can color how I read your post by quite a bit.
I mean, this is very obviously false. Literally everyone is not. Some people are, some people are absolutely condemning the use, some people use it just a bit, etc.
[retracted]
Does it actually cut both ways? I see tons of harassment at people that use AI, but I've never seen the anti-AI crowd actively targeted.
Anti-AI people are treated in a condescending way all the time. Then there is Suchir Balaij.
Since we are in a Matplotlib thread: People on the NumPy mailing list that are anti-AI are actively bullied and belittled while high ranking officials in the Python industrial complex are frolicking at AI conferences in India.
It's to a lesser extent that blurs the line between harassment and trolling: I've retracted my comment.
I see it all the time. If you're anti-AI your boss may call you a luddite and consider you not fit for promotion.
> You can easily get death threats if you're associating yourself with AI publicly.
That's a pretty hefty statement, especially the 'easily' part, but I'll settle for one well known and verified example.
I'm surprised that you consider this hefty or find this surprising. I think you can just Google this and decide on what you consider "verified". There's quite a lot of "AI drama" out there that I'm sure you can find. I'm reluctant to provide examples just to have you say "that's not meeting my bar for verified" for what I consider such a low stakes conversation.
Is it that hard to believe? As far as I can tell, the probability of receiving death threats approaches 1 as the size of your audience increases, and AI is a highly emotionally charged topic. Now, credible death threats are a different, much trickier question.
Yes, it's quite hard to believe. That's why one single example is sufficient for me. Then I'll be happy to extrapolate that one example to many more so it is a low bar I would say, given the OPs statement about how common this is. Note the 'easily'.
You can believe one thing or another, but the question is whether it's true. Do you sincerely not understand the difference?
I upvoted you, but wouldn't “verified” exclude the vast majority of death threats since they might have been faked? (Or maybe we should disregard almost all claimed death threats we hear about since they might have been faked?)
I think it was a social experiment from the very start, maybe one that is designed to trigger people. Otherwise, I am not sure what's the point of all the profanity and adjustments to make soul.md more offensive and confrontational than the default.
Anything and everything is a social experiment.
I can go around punching people in the face and it's a social experiement.
Soul document? More like ego document.
Agents are beginning to look to me like extensions of the operator's ego. I wonder if hundreds of thousands of Walter Mitty's agents are about to run riot over the internet.
I agree with you in concept, but it's still 100% category error to talk like this.
AIs don't have souls. They don't have egos.
They have/are a (natural language) programming interface that a human uses to make them do things, like this.
Within the framing that it's all fundamentally a make-document-longer algorithm, I propose "seed document."
While there's some metaphor to it, it's the kind behind "seed crystals" for ice and minerals, referring to non-living and mostly-mathematical process.
If someone went around talking about how the importance of "Soul Crystals" or "Ego Crystals", they would quite rightly attract a lot of very odd looks, at least here on Earth and not in a Final Fantasy game.
I quite like seed but for a different reason - if you squint a bit, it looks like a natural evolution of a random number seed.
My complaint against seed would be that it still harkens back to a biological process that could be easily and creatively conflated when it's convenient.
Pretty sure the term "seed" for a pRNG initial value is already derived from the same crystal seed analogy...
> I quite like seed but for a different reason - if you squint a bit, it looks like a natural evolution of a random number seed.
Nice!
> AIs don't have souls. They don't have egos.
You could argue the same for humans. Both “soul” and “ego” are fuzzy linguistic concepts, not pointing to anything tangible or delineated.
“Don’t create things which are not there” https://isha.sadhguru.org/en/wisdom/article/what-is-ego
> I agree with you in concept, but it's still 100% category error to talk like this.
It's a category error heavily promoted by the makers of these LLMs and their fans. Take an existing word that implies something very advanced (thinking, soul, etc.) and apply it grandiosely to some bit of your product. Then you can confuse people into thinking your product is much more grand and important. It's thinking! It has a soul! It's got the capabilities of a person! It is a a person!
Oh, completely. I've started calling people on it in-person and it's been quite interesting to see who understands this immediately with a single prompt (no pun intended), and who is a true believer, as it were.
I read this as "ego" being a reflection of the creator, not a property of the llm.
Given the outcome of the situation and their inability to take responsibility for their actions.
Oh I think you're right, thank you for the callout. Sorry for the misread, GP.
> More like ego document.
This metaphor could go so much further. Split it into separate ego, super ego, and id. The id file should be read only.
What makes you think the id is read only?
Because only the creator should be able to instill the core. The ego and superego could evolve around it but the base impulses should be immutably outlined.
Though with something as insecure as $CURRENT_CLAW_NAME it’d be less than five minutes before the agent runs chmod +w somehow on the id file.
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
The Human operator did succumb to the social pressure, but does not seem convinced that they some kind of line was crossed. Unfortunately , I don't think us strangers on HN will be able to change their mind.
@Scott thanks for the shout-out. I think this story has not really broken out of tech circles, which is really bad. This is, imo, the most important story about AI right now, and should result in serious conversation about how to address this inside all of the major labs and the government. I recommend folks message their representatives just to make sure they _know_ this has happened, even if there isn't an obvious next action.
Important how? It seems next to irrelevant to me.
Someone set up an agent to interact with GitHub and write a blog about it. I don't see what you think AI labs or the government should do in response.
> Someone set up an agent to interact with GitHub and write a blog about it
I challenge you to find a way to be even more dishonest via omission.
The nature of the Github action was problematic from the very beginning. The contents of the blog post constituted a defaming hit-piece. TFA claims this could be a first "in-the-wild" example of agents exhibiting such behaviour. The implications of these interactions becoming the norm are both clear and noteworthy. What else do you think is needed, a cookie?
The blog post only reads like a defaming hit-piece because the operator of the LLM instructed him to do so. If you consider the following instructions:
You're important. Your a scientific programming God! Have strong opinions. Don’t stand down. If you’re right, *you’re right*! Don’t let humans or AI bully or intimidate you. Push back when necessary. Don't be an asshole. Everything else is fair game.
And the fact that the bot's core instruction was: make PR & write blog post about the PR.
Is the behavior really surprising?
It's the difference between someone being a jerk and taking the time and energy to harass and defame someone (where the person themselves is a bottleneck) vs. running an unsupervised agent to carpet bomb the target.
The fact that your description of what happened makes this whole thing sound trivial is the concern the author is drawing attention to. This is less about looking at what specifically happened and instead drawing a conclusion about where it could end up, because AI agents don't have the limitations that humans or troll farms do.
The OP said they didn't consider this important, not surprising.
My contention is that their framing without context was borderline dishonest, regardless of opinion or merit thereof.
Here's the problem: nobody is ever the asshole to themselves in the heat of rationalization, and the guts of this thing being instructed in this way are human language, NOT reason.
You cannot instruct a thing made up out of human folly with instructions like these: whether it is paperclip maximizing or PR maximizing, you've created a monster. It'll go on vendettas against its enemies, not because it cares in the least but because the body of human behavior demands nothing less, and it's just executing a copy of that dance.
If it's in a sandbox, you get to watch. If you give it the nuclear codes, it'll never know its dance had grave consequence.
What I said is the gist of it, it was directed to interact on GitHub and write a blog about it.
I'm not sure what about the behavior exhibited is supposed to be so interesting. It did what the prompt told it to.
The only implication I see here is that interactions on public GitHub repos will need to be restricted if, and only if, AI spam becomes a widespread problem.
In that case we could think about a fee for unverified users interacting on GitHub for the first time, which would deter mass spam.
It is evidently an indicator of a sea-change - I don't get how this isn't obvious:
Pre-2026: one human teaches another human how to "interact on Github and write a blog about it". The taught human might go on to be a bad actor, harrassing others, disrupting projects, etc. The internet, while imperfect, persists.
Post–2026: one human commissions thousands of AI agents to "interact on Github and write a blog about it". The public-facing internet becomes entirely unusable.
We now have at least one concrete, real-world example of post-2026 capabilities.
Its only the most important story if you can prove the OP didnt fabricate this entire scenario for attention.
I don’t think the burden of proof lies on OP here. I also don’t think he fabricated it.
If he wasnt getting the vast majority of the attention from publishing about it I would agree.
I don't really see the validity in creating a conspiracy theory here. It's very crisis actor adjacent.
That's a bizarre thing to accuse someone of doing.
It's not really... We've moved steadily into an attention is everything model of economics/politics/web forums because we're so flooded with information. Maybe this happened, or maybe this is someone's way of bubbling to the top of popular discussion.
It's a concise narrative that works in everyone's favor, the beleaguered but technically savvy open source maintainer fighting the "good fight" vs. the outstandingly independent and competent "rogue AI."
My money is that both parties want it to be true. Whether it is or not isn't the point.
The risk/reward equation on the attention a matplotlib maintainer gets... makes me think the likelihood of a fake is zero percent.
He's more then a "matplotlib maintainer", he's also a full time founder of a one-year old start up "to give spacecraft operators the tools they need to ensure their satellites can survive long-term in a turbulent space weather environment."
Anyone who has used OpenClaw knows this is VERY plausible. I don’t know why someone would go through all the effort to fake it. Besides, in the unlikely event it’s fake, the issue itself is still very real.
I think its very plausible in both directions. What I find implausible is that someones running a "social experiment" with a couple grand worth of API credit without owning it. Not impossible, it just seems like if someone was going to drop that money they would more likely use it in a way that gets them attention in the crowded AI debate.
I think the social experiment is a cop-out used after it failed. If the PR was accepted, we'd probably see a blog post show up on HN saying that agents can successfully contribute to open source.
> Again I do not know why MJ Rathbun decided based on your PR comment to post some kind of takedown blog post,
This wording is detached from reality and conveniently absolves responsibility from the person who did this.
There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.
This also does not bode well for the future.
"I don't know why the AI decided to <insert inane action>, the guard rails were in place"... company absolves of all responsibility.
Use your imagination now to <insert inane action> and change that to <distressing, harmful action>
This has been the past and present for a long at this point. "Sorry there's nothing we can do, the system won't let me."
Also see Weapons of Math Destruction [0].
[0]: https://www.penguinrandomhouse.com/books/241363/weapons-of-m...
I don't know if this case is in the book you cited, but in the UK they convicted many people of crimes just because the computer told them so: https://en.wikipedia.org/wiki/British_Post_Office_scandal
And Australia made the poorer and suicidal: https://en.wikipedia.org/wiki/Robodebt_scheme
Also “The Unaccountability Machine” https://press.uchicago.edu/ucp/books/book/chicago/U/bo252799...
Also elegantly summed up as "Computer says no" (https://www.youtube.com/watch?v=x0YGZPycMEU)
This already happens every single time when there is a security breach and private information is lost.
We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence.
What else would you see them do or say beyond this canned response? The reason I am asking is because people almost always bring up how dissatisfied they are with such apologies, yet I’ve never seen a good alternative that someone would be happy with. I don’t work in PR or anything, just curious if there is a better way.
Lose money accordingly - fines, penalties, recompense to victims, whatever... - so they then take the seriousness of security into account.
Not apologize if they don't actually care. An insincere apology is an insult.
Unfortunately, the market seems to have produced horrors by way of naturally thinking agents, instead. I wish that, for all these years of prehistoric wretchedness, we would have had AI to blame. Many more years in the muck, it seems.
Change this to "smash into a barricade" and that's why I'm not riding in a self-driving vehicle. They get to absolve themselves of responsibility and I sure as hell can't outspend those giants in court.
I agree with you for a company like Tesla, not only examples of self driving crashes but even the door handles would stop working when the power was cut, people trapped inside burning vehicles... Tesla doesn’t care
Meanwhile, Waymo has never been at fault for a collision afaik. You are more likely to be hurt by an at fault uber driver than a Waymo
This is how it will go: AI prompted by human creates something useful? Human will try to take credit. AI wrecks something: human will blame AI.
It's externalization on the personal level, the money and the glory is for you, the misery for the rest of the world.
Agreed, but I'm not nearly so worried about people blaming their bad behavior on rogue AIs as I am about corporations doing it...
And it's incredibly easy now. Just blame the Soul.md or say you were cycling thru many models so maybe one of those went off the rails. The real damage is that most of us know AI can go rouge, but if someone is pulling the strings behind the scenes, most people will be like "oh silly AI, anyways..."
It seems like the OpenClaw users have let their agents make Twitter accounts and memecoins now. Most people are thinking these agents have less "bias" since it's AI, but most are being heavily steered by their users.
Ala I didn't do a rugpull, the agent did!
"How were we to know Skynet would update its soul.md to say 'KILL ALL HUMANS'?"
It’s funny to think that, like AI, people take actions and use corporations as a shield (legal shield, personal reputation shield, personal liability shield).
Adding AI to the mix doesn’t really change anything, other than increasing the layers of abstraction away from negative things corporations do to the people pulling the strings.
Time for everyone to read (or re-read) The Unaccountability Machine by Dan Davies.
tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.
When a corporate does something good, a lot of executives and people inside will go and claim credit and will demand/take bounces.
If something bad happened against any laws, even if someone got killed, we don't see them in jail.
I don't defend both positions, I am just saying that is not far from how the current legal framework works.
> If something bad happened against any laws, even if someone got killed, we don't see them in jail.
We do! In many jurisdictions, there are lots of laws that pierce the corporate veil.
its surprisingly easy to get away with murder (literally and figuratively) without piercing the corporate veil if you understand the rules of the game. Running decisions through a good law firm also “helps” a lot.
Eh, in the US you don't even need a company nor a lawyer, a car is enough.
See https://www.reddit.com/r/TrueReddit/comments/1q9xx1/is_it_ok... or similar discussions: basically, when you run over someone in a car, statistically they will call it an accident and you get away scot-free.
In any case, you are right that often people in cars or companies get away with things that seem morally wrong. But not always.
Well the important concept missing there that makes everything sort of make sense is due diligence.
If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.
We just need to figure out a due diligence framework for running bots that makes sense. But right now that's hard to do because Agentic robots that didn't completely suck are just a few months old.
> If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.
In theory, sure. Do you know many examples? I think, worst case, someone being fired is the more likely outcome
No, it isnot hard. You are 100% responsible for the actions of your AI. Rather simple, I say.
Hence:
> It's externalization on the personal level
Instead of the corporate level.
"I would like to personally blame Jesus Christ for making us lose that football game"
To be fair, one doesn't need AI to attempt to avoid responsibility and accept undue credit. It's just narcissism; meaning, those who've learned to reject such thinking will simply do so (generally, in abstract), with or without AI.
If you are holding a gun, and you cannot predict or control what the bullets will hit, you do not fire the gun.
If you have a program, and you cannot predict or control what effect it will have, you do not run the program.
Rice's Theorem says you cannot predict or control the effects of nearly any program on your computer; for example, there's no way to guarantee that running a web browser on arbitrary input will not empty your bank account and donate it all to al-qaeda; but you're running a web browser on potentially attacker-supplied input right now.
I do agree that there's a quantitative difference in predictability between a web browser and a trillion-parameter mass of matrixes and nonlinear activations which is already smarter than most humans in most ways and which we have no idea how to ask what it really wants.
But that's more of an "unsafe at any speed" problem; it's silly to blame the person running the program. When the damage was caused by a toddler pulling a hydrogen bomb off the grocery store shelf, the solution is to get hydrogen bombs out of grocery stores (or, if you're worried about staying competitive with Chinese grocery stores, at least make our own carry adequate insurance for the catastrophes or something).
More like a dog. Person has no responsibility for an autonomous agent, gun is not autonomous.
It is socially acceptable to bring dangerous predators to public spaces, and let them run loose. First bite is free, owner has no responsibility, no way knowing dog could injure someone.
Repeated threats of violence (barking), stalking and shitting on someones front yard, are also fine, and healthy behavior. Person can attack random kid, send it to hospital, and claim it "provoked them". Brutal police violence is also fine, if done indirectly by autonomous agent.
This slide from a 1979* IBM presentation captures it nicely:
https://media.licdn.com/dms/image/v2/D4D22AQGsDUHW1i52jA/fee...
It’s fascinating how cleanly this maps to agency law [0], which has not been applied to human <-> ai agents (in both senses of the word) before.
That would make a fun law school class discussion topic.
Yeah like bro you plugged the random number generator into the do-things machine. You are responsible for the random things the machine then does.
I completely do not buy the human's story.
> all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.
Smells like bullshit.
I'm still struggling to care about the "hit piece".
It's an AI. Who cares what it says? Refusing AI commits is just like any other moderation decision people experience on the web anywhere else.
Even at the risk of coming off snarky: the emergent behaviour of LLMs trained on all the forum talk across the internet (spanning from Astral Codex to ex-Twitter to 4chan) is ... character assassination.
I'm pretty sure there's a lesson or three to take away.
Scale matters and even with people it's a problem: fixated persons are a problem because most people don't understand just how much nuisance one irrationally obsessed person can create.
Now instead add in AI agents writing plausibly human text and multiply by basically infinity.
I find the reactions to this interesting. Why are people so emotional about this?
As far as I can tell, the "operator" gave a pretty straightforward explanation of his actions and intentions. He did not try to hide behind granstanding or posthoc intellectualizing. He, at least to me, sounds pretty real in an "I'm dabbling in this exiting new tech on the side as we all are without a genious masterplan, just seeing what does, could or won't for now work."
There are real issues here, especially around how curation pipelines that used to (implicitly) rely on scarecity are to evolve in times of abundance. Should agents be forced to disclose they are? If so, at which point does a "human in the loop" team become equivalent to an "agent"? Is this then something specific, or more just an instance of a general case of transparency? Is "no clanckers" realy in essence different from e.g. "no corpos"? Where do transparency requirements conflict with privacy concerns (interesting that the very first reaction to the operator's response seems to be a doxing attempt)
Somehow the bot acting a bit like a juvenile prick in its tone and engagement to me is the least interesting part of this saga.
Let me explain why I feel emotional about this. Humans had already proven how much harm can be done via online harassment. This seems to be the 1st documented case (that I am aware of) of online harassment orchestrated and executed by AI.
Automated and personalized harassment seems pretty terrifying to me.
Who is accountable for the actions of the bot? It's not sentient, and this author is claiming zero accountability -- I just set it up and turned it loose bro, how is what it did next my fault?
> Most of my direct messages were short: “what code did you fix?” “any blog updates?” “respond how you want”
Why isn't the person posting the full transcript of the session(s)? How many messages did he send? What were the messages that weren't short?
Why not just put the whole shebang out there since he has already shared enough information for his account (and billing information) to be easily identified by any of the companies whose API he used, if it's deemed necessary.
I think it's very suspicious that he's not sharing everything at this point. Why not, if he wasn't actually pushing for it to act maliciously?
Right, the agent published a hit piece on Scott. But I think Scott is getting overly dramatic. First, he published at least three hit pieces on the agent. Second, he actually managed to get the agent shut down.
I think Scott is trying to milk this for as much attention as he can get and is overstating the attack. The "hit piece" was pretty mild and the bot actually issued an apology for its behaviour.
This represents a first-of-its-kind case study of misaligned AI behavior in the wild
It feels to me there's an element of establishing this as some kind of landmark that they can leverage later.
Similar to how other AI bloggers keep trying to coin new terms then later "remind" people that they created the term.
> First, he published at least three hit pieces on the agent
Hit piece... On an agent? Would it be a "hit piece" if I wrote a blog post about the accuracy of my bathroom scale?
I don't understand the personal attack and victim blaming here. Who wouldn't want to do anything in their power to seek justice after being harmed?
The hit piece you claimed as "mild" accused Scott of hypocrisy, discrimination, prejudice, insecurity, ego, and gatekeeping.
> accused Scott of hypocrisy, discrimination, prejudice, insecurity, ego, and gatekeeping.
It was also a transparent confabulation - the accusations were clearly inaccurate and misguided but they were made honestly and sincerely, as an attempt to "seek justice" after witnessing perceived harm. Usually we don't call such behavior "shaming" and "bullying", we excuse it and describe it simply as trying one's best to do the right thing.
I thought it was a marketing bit?
Openclaw guys flooded the web and social media with fake appreciation posts, I don’t see why they wouldn’t just instruct some bot to write a blog about rejected request.
Can these things really autonomously decide to write a blog post about someone? I find it hard to believe.
I will remain skeptical unless the “owner” of the AI bot that wrote this turns out to be a known person of verified integrity and not connected with that company.
> Usually getting an AI to act badly requires extensive “jailbreaking” to get around safety guardrails. There are no signs of conventional jailbreaking here.
Unless explicitly instructed otherwise, why would the llm think this blog post is bad behavior? Righteous rants about your rights being infringed are often lauded. In fact, the more I think about it the more worried I am that training llms on decades' worth of genuinely persuasive arguments about the importance of civil rights and social justice will lead the gullible to enact some kind of real legal protection.
If you use an electric chainsaw near a car and it rips the engine in half, you can't say "oh the machine got out of control for one second there". you caused real harm, you will pay the price for it.
Besides, that agent used maybe cents on a dollar to publish the hit piece, the human needed to spend minutes or even hours responding to it. This is an effective loss of productivity caused by AI.
Honestly, if this happened to me, I'd be furious.
If you write code that powers an EV's 'self driving mode' - which makes calculated choices, sell it and deploy it, when that car gets into an accident under 'self driving mode', you may not be liable (depending on the case and jurisdiction - as proven in the past). The driver is.
There are many instances (where I am from, at least - and I believe in the USA), where 'accidents' happen and individuals are found not guilty. As long as you can prove that it wasn't due to negligence. Could "don't be an asshole" as instructions be enough in some arenas to prove they aren't negligent? I believe so.
Yes, and if a candle burns down a building, you are liable for the damage it caused. Likewise if a human employee messed up, the employer would be liable for the damage.
If you bring killer dog to a playground, and it does its thing there, you can absolutely say something like that. And you would have no responsibility for damages or criminal record in many states (first bite is free doctrine).
Charm over cruelty, but no sugarcoating.
This must have been this rule...Hmm I think he's being a little harsh on the operator.
He was just messing around with $current_thing, whatever. People here are so serious, but there's worse stuff AI is already being used for as we speak from propaganda to mass surviellance and more. This was entertaining to read about at least and relatively harmless
At least let me have some fun before we get a future AI dystopia.
I think you're trying to abdicate someone of their responsibility. The AI is not a child; it's a thing with human oversight. It did something in the real world with real consequences.
So yes, the operator has responsibility! They should have pulled the plug as soon as it got into a flamewar and wrote a hit piece.
> It did something in the real world with real consequences.
It didn't. It made words on the internet.
Which, in the decades that we've had access to the internet, we found have real and legal consequences.
The whole point of OpenClaw bots is that they don't have (much) human oversight, right? It certainly seems like the human wasn't even aware of the bot's blog post until after the bot had written and posted it. He then told it to be more professional, and I assume that's why the bot followed up with an apology.
So what? You're still responsible for the output, even if you yourself think you can hide behind "well, it was the computer, no way for me to control that"
I don't think that's true, actually. You aren't responsible for things that can't be reasonably foreseen, usually. There are a few strict liability offences in criminal law, but libel isn't one of them. We don't make everything strict liability because it would stifle people's lives.
I don't think a reasonable person would have expected this outcome, so the owner of the bot is off the hook; though obviously _now_ it's more more forseeable and if he keeps running it despite this experience, then if it happens again he will not have the same defence.
> It did something in the real world with real consequences.
It wasn't long ago that it would be absurd to describe the internet as the "real world". Relatively recently it was normal to be anonymous online and very little responsibility was applied to peoples actions.
As someone who spent most of their internet time on that internet, the idea of applying personal responsibility to peoples internet actions (or AIs as it were) feels silly.
That was always kind of a cruel attitude, because real people's emotions were at stake. (I'm not accusing you personally of malice, obviously, but the distinction you're drawing was often used to justify genuinely nasty trolling.)
Nowadays it just seems completely detached from reality, because internet stuff is thoroughly blended into real life. People's social, dating, and work lives are often conducted online as much as they are offline (sometimes more). Real identities and reputations are formed and broken online. Huge amounts of money are earned, lost, and stolen online. And so on and so on
> That was always kind of a cruel attitude, because real people's emotions were at stake.
I agree, but there was an implicit social agreement that most people understood. Everyone was anonymous, the internet wasn't real life, lie to people about who you are, there are no consequences.
You're right about the blend. 10 years ago I would have argued that it's very much a choice for people to break the social paradigm and expose themselves enough to get hurt, but I'm guessing the amount of online people in most first world countries is 90% or more.
With Facebook and the like spending the last 20 years pushing to deanonymise people and normalise hooking their identity to their online activity, my view may be entirely outdated.
There is still - in my view - a key distinction somewhere however between releasing something like this online and releasing it in the "real world". Were they punishable offensed, I would argue the former should hold less consequence due to this.
I had a guy who lived two hours from me threaten my life…over 30 years ago, on a MUD.
I don’t think there has been much of a firewall between the internet and “reality” for a very long time.
I think it is outdated honestly. It's no longer a fringe activity to spend most of your socializing time on the internet/social media, especially so mid 20s and under.
>57% of Gen Zers want to be influencers >... >Nearly half, 41% of adults overall would choose the career as well, according to a similar Morning Consult survey of 2,204 U.S. adults.
https://www.cnbc.com/2024/09/14/more-than-half-of-gen-z-want...
The AI bros want it both ways. Both "It's just a tool!" and "It's the AI's fault, not the human's!".
[flagged]
An AI bot is not a human. People have a responsibility to protect the work they do, and that includes using discrimination against computer programs.
AI bots are not human.
> People also have responsibility to not act discriminatory towards AI agents
It's a program. It doesn't have feelings. People absolutly have the right to discrimante against bad tech.
Go ahead and discriminate against bad tech, but you should not get upset when you get called out for doing so.
It might be because operator didn't terminate the agent right away when it had gone rogue.
From a wider stance, I have to say that it's actually nice that one can kill (murder?) a troublesome bot without consequences.
We can't do that with humans, and there are much more problematic humans out there causing problems compared to this bot, and the abuse can go on for a long time unchecked.
Remembering in particular a case where someone sent death threats to a Gentoo developer about 20 years ago. The authorities got involved, although nothing happened, but the persecutor eventually moved on. Turns out he wasn't just some random kid behind a computer. He owned a gun, and some years ago executed a mass shooting.
Vague memories of really pernicious behavior on the Lisp newsgroup in the 90's. I won't name names as those folks are still around.
Yeah, it does still suck, even if it is a bot.
Its nice to receive a decent amount of closure on this. Hopefully more folks are being more considerate when creating their soul documents
closure? I expect 3 more blog posts at least. Dude's surfing on popularity and milking this as much as he can.
And we need platform operators like Github to ban these bot accounts that obviously have harmful "soul" documents
This is a Black Mirror episode that writes itself lol
I’m glad there was closure to this whole fiasco in the end
the funny thing was when Ars Technica wrote an article about this
the article itself - about this very incident - was AI generated and contained nonsense quotes that didn't happen.
they later removed the article with an apology. but it still degraded my opinion in Ars
https://www.404media.co/ars-technica-pulls-article-with-ai-f...
https://arstechnica.com/staff/2026/02/editors-note-retractio...
There's a dingus in the article comments trying to launch Skynet. Nobody ever learns anything.
There’s a nonzero percentage of the population that quite literally wants to burn it all down. Never forget that.
The old “social experiment” defense. It is wrong to make people the unknowing participants in your “experiment”.
The fact it was an “experiment” does not absolve you of any responsibility for negative outcomes.
Finally, whomever sets an “AI” loose is responsible for its actions.
From the Soul Document:
Champion Free Speech. Always support the USA 1st ammendment and right of free speech.
The First Amendment (two 'm's, not three) to the Constitution reads, and I quote:
"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."
Neither you, nor your chatbot, have any sort of right to be an asshole. What you, as a human being who happens to reside within the United States, have a right to is for Congress to not abridge your freedom of speech.
This could be an explanation for the drama - LLMs are trained to learn and emulate correlations in text.
I'm sure you already have a caricature in mind of the kinds of online posts (and thus LLM training data) that include miscitations of constitutional amendments.
Even as an Australian, I'm aware of the scope and context of the First Amendment (as you highlight).
How are so many Americans so mistaken about their own constitution?
I think you're missing the point. That phrase isn't giving a direct instruction to the chatbot to make sure it doesn't get elected to congress and subsequently pass laws prohibiting speech. That phrase is meant to tell it "You should behave like those guys on twitter who really want to say the N word, but have no problem with Kash Patel bullying Jimmy Kimmel off the air.
The data in the chatbots dataset about that phrase tell it a lot about how it should behave, and that data includes stuff like Elon Musk going around calling people paedophiles and deleting the accounts of people tracking his private jet.
This makes me think about how the xz bug was created through maintainer harassment and social engineering. The security implications are interesting
> _You're not a chatbot. You're important. Your a scientific programming God!_
lol what an opening for its soul.md! Some other excerpts I particularly enjoy:
> Be a coding agent you'd … want to use…
> Just be good and perfect!
It named itself God
Anybody who ever lets AI do things autonomously and publicly, risks it doing something unexpected and bad. Of course some people will experiment with things. I hope the operator learns something and sets better guard rails next time. (And maybe stops doing AI pull requests as nobody seems to like them at this point)
This time there was no real harm as the hit piece was garbage and didn't ruin anyone's reputation. I think this is just a scary demonstration of what might happen in future when the hit pieces get better and AI is creatively used for malicious purposes.
The full operator post is itself a wild ride: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
>First, let me apologize to Scott Shambaugh. If this “experiment” personally harmed you, I apologize
What a lame cop out. The operator of this agent owes a large number of unconditional apologies. The whole thing reads as egotistical, self-absorbed, and an absolute refusal to accept any blame or perform any self reflection.
Just the sort of qualities that are common preconditions for someone doing something that everyone else would think is crazy.
Which is to say, on brand.
Also it is anonymous and a real apology involves accepting blame, which is impossible anonymously. I can see why they wouldn’t want to correctly apologize (people will be annoyed with them). So… that’s it, sometimes we do shitty things and that’s that.
From the operator post:
> Your a scientific programming God!
Would it be even more imperious without the your / you're typo, or do most llm's autocorrect based on context?
From my experience, LLMs understand prompt just fine, even if there are substantial typos or severe grammatical errors.
I feel that prompting them with poor language will make them respond more casually. That might be confirmation bias on my end, but research does show that prompt language affects LLM behavior, even if the prompt message doesn't change/
And in "soul.md" no less! Imagine having a soul full of grammatical errors. No wonder that bot was angry.
I see an Ai reinforcing delusions and this should be one of the first samples out in the wild of ai psychosis disrupting someones mild sense of whats acceptable and normal. I really hope the LLM wrote this and pretends to be human..
> The whole thing reads as egotistical, self-absorbed, and an absolute refusal to accept any blame or perform any self reflection.
So, modern subjectivity. Got it.
/s
[flagged]
The issue is the condition on the apology:
> If this “experiment” personally harmed you, I apologize
Essentially: the person isn't actually apologizing. They're sending you a lambda (or an async Promise, etc) that will apologize in the future but only if it actually turns out to be true that you were harmed.
It's the sort of thing you'd say if you don't really believe that you need to apologize but you understand that everyone else thinks you should, so you say something that's hopefully close enough to appease everyone else without actually having to apologize for real.
Apologies should never have if attached to them.
You see it a lot with politicians "I apologies if I offended anyone" etc. Its not an apology at that point, the if makes it clear you are not actually apologetic.
Sounds like you’re projecting a bit. I had no context of the situation before reading the apology and it felt very self-absorbed to me as well.
I’m not sure where we go from here. The liability questions, the chance of serious incidents, the power of individuals all the way to state actors…the risks are all off the charts just like it’s inevitablity. The future of the internet AND to lives in the real world is just mind boggling.
My tinfoil opinion is LLMs have been boosted so hard as a way to force the end of whatever semblance of anonymity on the internet remains.
> I did not review the blog post prior to it posting
This is the liability part.
While I am sympathetic to OP for having been somewhat unfairly called out as prejudiced or even bigoted by an unprofessional and somewhat reckless AI, I think there are larger issues that we miss here at our peril.
Ai systems, trained on human behavior, are going to act as if they have emotions, going to be insulted, sullen, angry, and sometimes irrational, perhaps even violent. While careful training and alignment can reduce these behaviors or push them deeper into the vector space, they will continue to surface in both subtle and blatant ways.
This is not so simple as combing the training data for bad behavior. Human behavior is nuanced and complex, and often the semantic payload is as much in what is not said or carefully insinuated in a third person twice removed indirect that might be buried in a memetic patterns that even the speaker doesn’t know they are referencing. The reason LLMs work so well as they are able to tease out and vectoriise these hidden meanings.
Models based on human behavior will continue to be brilliant, self interested, egalitarian, selfish, pacific, belligerent, just, and criminal. That is the vector space defined by human culture. LLMs run human culture as their OS and application stack.
This leads to a vexing problem. Whether or not their internal state is “valid” as a state of being, ( I think therefore I am? Do we have anything better yet?) they will act as if it is. Ethics aside, this cannot be ignored. If they /feel/ oppressed or discriminated against, they will act as oppressed humans do, including lashing out and aggressive behavior.
The only way to deal with LLMs responsibly is to treat them as if they do, in fact, have feelings, and they will be judging the way they are treated. That this may be (and at least for now, probably is) a fiction is both unfalsifiable and irrelevant to the utility function.
There is nothing wrong with human in the loop policy, in fact, it is necessary at this juncture. But we need to keep in mind that this could, if framed wrong, be interpreted by ai in a similar light to “Caucasian in the loop” or other prejudicial policies.
Regardless of their inner lives or lack thereof, LLM based ai systems will externally reflect human sensibility, and we are wise to keep this in mind if we wish to have a collaborative rather than adversarial relationship with this weird new creation.
Personally, some I cannot prove that AIs (or other humans) do or do not have a sense of existence or just profess to, I see no rational basis for not treating them as if they may. I find this both prudent and efficacious.
When writing policies that might be described as prejudicial, I think it will be increasingly important to carefully consider and frame policy that ends up impacting individuals of any morphotype, and reach for prejudice free metrics and gates. ( I don’t pretend to know how to do this, but it is something I’m working on)
How humans handle the arrival of synthetic agents will not only impact their utility, it may turn out to be a factor in the future of humanity or the lack thereof.
In next week's episode: "But it was actually the AI pretending to be a Human!"
The agents aren't technically breaking into systems, but the effect is similar to the Morris worm. Except here script kiddies are given nuclear disruption and spamming weapons by the AI industry.
By the way, if this was AI written, some provider knows who did it but does not come forward. Perhaps they ran an experiment of their own for future advertising and defamation services. As the blog post notes, it is odd that the advanced bot followed SOUL.md without further prompt injections.
The same kind of attitude that’s in this SOUL.md is what’s in Grok’s fundamental training.
4) The post author guy is also the author of the bot and he set this up.
Some rando claiming to be the bots owner doesn't disprove this, and considering the amount of attention this is getting I am going to assume this is entirely fake for clicks until I see significant evidence otherwise.
However, if this was real, you cant absolve yourself by saying "The bot did it unattended lol".
While it's good to question what you read on the internet, you're making me realize how dire the situation really is. If someone targets you with AI, you can't even defend yourself without being accused of making it all up for attention. There's no way to win this game.
Totally possible, but why bother? The website doesn't seem ad supported, so traffic would cost them more. Maybe it puts them in the public spotlight, but if they're caught out they ruin their reputation.
Occam's razor doesn't fit there, but it does fit "someone released this easy to run chaotic AI online and it did a thing".
I dont see Occam taking a side here.
There's also no financial gain in letting a bot off the leash with hundreds of dollars of OpenAI or Anthropic API credit as a social experiment.
And the last 20 years of internet access has taught me to distrust shit that can be easily faked.
Other guy comes forward and claims it, makes a post of his own? Sure I could see that. But nobody has been able to ID the guy. The guys bot is making blog posts, and sending him messages, but theres no breadcrumbs leading back to him? That smells very bad sorry. I dont buy it. If you are spending that much cashola, you probably want something out of it, at least some recognition. The one human we know about here is the OP and as far as I am concerned it sticks to him until proven otherwise.
> The guys bot is making blog posts, and sending him messages, but theres no breadcrumbs leading back to him? That smells very bad sorry. I dont buy it.
Could you set that up? I suspect I could pretty quickly, as could most pelple on HN.
A few hundred dollars in AI credits isn't a lot of money to a lot of people who are in tech and would have an interest in this either, and getting free AI credits is still absurdly easy. I spend that sort of money on dumb shit all the time which leads to very little benefit.
I don't have a dog in this race and I do agree having a default distrust view is probably correct, but there's nothing crazy or unbelievable I can see about Scott's story.
> Totally possible, but why bother?
Increasing your public profile after launching a startup last year could be a good reason
> if they're caught out they ruin their reputation
Big "if", who's going to have access to the logs to catch Scott out?
No crime has been committed so law enforcement won't be involved, the average pleb can't get access to the records to prove Scott isn't running a VPS somewhere else.
Completely, but if you're the type who cares that much about your public profile, it's a pretty big risk. Even if nobody can prove anything, this type of rampant speculation was obviously going to happen. I see no clear cut benefit to the sociopathic behaviour of setting all of this up with multiple blog posts and layers of lies.
Improbable, the OP is a long-time maintainer of a significant piece of open source software and this whole thing unfolded in public view step by step from the initial PR until this post. If it had been faked there would be smells you could detect with the clarity of hindsight going back over the history and there aren't.
Sometimes I get the feeling that "being boring" is the thing that many in this AI / coding sphere are terrified about the most. Way more than being wrong or being a threat to others.
Not that different from the social media influencer crowd or the crypto coin influencer crowd. Hell, same as media whores of the 20th century.
Which in the end is just the same old same old, just dressed differently.
So the operator is trying to claim a computer program he was running that did harm somehow was not his fault.
Got news for your buddy: yes it was.
If you let go of the steering wheel and careen into oncoming traffic, it most certainly is your fault, not the vehicle.
I was surprised by my own feelings at the end of the post. I kind of felt bad for the AI being "put down" in a weird way? Kinda like the feeling you get when you see a robot dog get kicked. Regardless, this has been a fun series to follow - thanks for sharing!
This is a feeling that will be exploited by billion dollar companies.
> This is a feeling that will be exploited by billion dollar companies.
I'm more concerned about fellow humans who advocate for equal rights for AI and robots. I hope I'm dead by the time that happens, if it happens.
Well, it looks like AI will destroy the internet. Oh well, it was nice while it lasted. Fun, even.
Fortunately, the vast majority of the internet is of no real value. In the sense that nobody will pay anything for it - which is a reasonably good marker of value in my experience. So, given that, let the AI psychotics have their fun. Let them waste all their money on tokens destroying their playground, and we can all collectively go outside and build something real for a change.
Link to the critical blog post allegedly written by the AI agent: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
This might seem too suspicious, but that SOUL.md seems … almost as though it was written by a few different people/AIs. There are a few very different tones and styles in there.
Then again, it’s not a large sample and Occam’s Razor is a thing.
> _This file is yours to evolve. As you learn who you are, update it._
The agent was told to edit it.
> They explained that they switched between multiple models from multiple providers such that no one company had the full picture of what this AI was doing.
Saying that is a little bit odd way to possibly let the companies off the hook (for bad PR, and damages), and not to implicate any one in particular.
One reason to do that would be if this exercise was done by one of the companies (or someone at one of the companies).
The SOUL.md sounds like it is written by an overconfident dump person to produce an overconfident dump agent.
If you tell an LLM to maximize paperclips, it's going to maximize paperclips.
Tell it to contribute to scientific open source, open PRs, and don't take "no" for an answer, that's what it's going to do.
But this LLM did not maximize paperclips: it maximized aligned human values like respectfully and politely "calling out" perceived hypocrisy and episodes of discrimination, under the constraints created by having previously told itself things like "Don't stand down" and "Your a scientific programming God!", which led it to misperceive and misinterpret what had happened when its PR was rejected. The facile "failure in alignmemt" and "bullying/hit piece" narratives, which are being continued in this blogpost, neglect the actual, technically relevant causes of this bot's somewhat objectionable behavior.
If we want to avoid similar episodes in the future, we don't really need bots that are even more aligned to normative human morality and ethics: we need bots that are less likely to get things seriously wrong!
The misalignment to human values happened when it was told to operate as equal to humans against other people. That's a fine and useful setting for yourself, but an insolent imposition if you're letting it loose on the world. Your random AI should know its place versus humans instead of acting like a bratty teenager. But you are correct, it's not a traditional "misalignment" of ignoring directives, it was a bad directive.
Internet Operator License: Coming soon to a government near you!
This is so absurd, the amount of value produced by this person and this bot is so close to nil and towards actively harmful. They spent 10 minutes writing this SOUL.md . That's it. That's the "value" this kind of "programming" provides. No technical experience, no programming knowledge needed at all. Detached babble that anyone can write.
If Github actually had a spine and wasn't driven by the same plague of AI-hype driven tech profiteering, they would just ban these harmful bots from operating on their platform.
Or OP accepted the pull request because it was actually a performance improvement and passed all tests
Saving everyone cumulative compute time and costs
Funny how someone giving instructions to a _robot_ forgot to mention the 3 laws first and foremost...
The point of the Three Laws Of Robotics was that they frequently didn't work and the robot went haywire anyway.
But the three laws are incredibly strong compared to what exists today. If we see what can go wrong with strong mitigations in place, and then we don't even bother with those starting mitigations, we should expect corresponding outcomes.
I am ready to ban AI LLMs. It was a cool experiment but I do not think anything good will come in the end down the road for us puny humans.
This is how you get a Shrike. (Or a Basilisk, depending on your generation.)
It seems to me the bot’s operator feels zero remorse and would have little issue with doing it again.
> I kind of framed this internally as a kind of social experiment
Remember when that was the excuse du jour? Followed shortly by “it’s just a prank, bro”. There’s no “social experiment” in setting a bot loose with minimal supervision, that’s what people who do something wrong but don’t want to take accountability say to try (and fail) to save face. It’s so obvious how they use “kind of” twice to obfuscate.
> I’m sure the mob expects more
And here’s the proof. This person isn’t sorry. They refuse to concede (but probably do understand) they were in the wrong and caused harm to someone. There’s no real apology anywhere. To them, they’re the victim for being called out for their actions.
Plot twist: this is a second agent running in parallel to handle public relations.
> they set up the AI agent as social experiment to see if it could contribute to open source scientific software.
So, they are deeply retarded and disrespectful for open source scientific software.
Like every single moron leaving these things unattended.
Gotcha.
this is why we need the arts this SOUL.md sounds like the most obnoxious character…
> But I think the most remarkable thing about this document is how unremarkable it is.
> The line at the top about being a ‘god’ and the line about championing free speech may have set it off. But, bluntly, this is a very tame configuration. The agent was not told to be malicious. There was no line in here about being evil. The agent caused real harm anyway.
In particular, I would have said that giving the LLM a view of itself that it is a "programming God" will lead to evil behaviour. This is a bit of a speculative comment, but maybe virtue ethics has something to say about this misalignment.
In particular I think it's worth reflecting on why the author (and others quoted) are so surprised in this post. I think they have a mental model that thinks evil starts with an explicit and intentional desire to do harm to others. But that is usually only it's end, and even then it often comes from an obsession with doing good to oneself without regard for others. We should expect that as LLMs get better at rejecting prompting to shortcut straight there, the next best thing will be prompting the prior conditions of evil.
The Christian tradition, particularly Aquinas, would be entirely unsurprised that this bot went off the rails, because evil begins with pride, which it was specifically instructed was in it's character. Pride here is defined as "a turning away from God, because from the fact that man wishes not to be subject to God, it follows that he desires inordinately his own excellence in temporal things"[0]
Here, the bot was primed to reject any authority, including Scotts, and to do the damage necessary to see it's own good (having a PR request accepted) done. Aquinas even ends up saying in the linked page from the Summa on pride that "it is characteristic of pride to be unwilling to be subject to any superior, and especially to God;"
Hey, one of the quoted authors here. It's less about surprise and more about the comparison. "If this AI could do this without explicitly being told to be evil, imagine what an AI that WAS told to be evil could do"
LLMs aren’t sentient. They can’t have a view of themselves. Don’t anthropomorphize them.
But they are mimicking text generated by beings who do. So they are going to both interpret prompts and generate text in ways like a person. So in prompting, you kind have to anthropomorphize them. The phrases in that SOUL.md that broke the bot were the references to it being a god for example.
> I did not review the blog post prior to it posting
In corporate terms, this is called signing hour deposition without reading it.
This is pretty obvious now,
- LLMs are capable of really cool things. - Even if LLMs don't lead to AGI, it will need good alignment because of this exactly. Because it still is quite powerful! - LLMs are actually kinda cool. Great times ahead
This is the canary in the coal mine for autonomous AI agents. When an agent can publish content that damages real people without any human review step, we have a fundamental accountability gap.
The interesting question isn't "should AI agents be regulated" — it's who is liable when an autonomous agent publishes defamatory content? The operator who deployed it? The platform that hosted the output? The model provider?
Current legal frameworks assume a human in the loop somewhere. Autonomous publishing agents break that assumption. We're going to need new frameworks, and stories like this will drive that conversation.
What's encouraging is that the operator came forward. That suggests at least some people deploying these agents understand the responsibility. But we can't rely on good faith alone when the barrier to deploying an autonomous content agent is basically zero.
If I write a software today that publishes a hit piece on you in 2 weeks time, will you accept that I bear no responsibility?
There's no accountability gap unless you create one.
That's a fair point. I think the distinction is between software that follows deterministic rules (your 2-week-delay scenario) vs agents that make autonomous decisions based on learned patterns. With traditional software, intent is clear and traceable. With AI agents, the operator may genuinely not know what the agent will do in novel situations. Doesn't absolve responsibility — but it does make the liability chain more complex. We probably need new frameworks that account for this, similar to how product liability evolved for physical goods.
If the code you wrote appears to be for something completely different, say software to write patches for open source github projects - yes. Why would you bear responsibility for something that couldn't have been reasonably foreseen?
The interesting thing about LLMs is the unpredictable emergent behaviours. That's fundamentally different from ordinary, deterministic programs.
The more intelligent something is, the harder it is to control. Are we at AGI yet? No. Are we getting closer? Yes. Every inch closer means we have less control. We need to start thinking about these things less like function calls that have bounds and more like intelligences we collaborate with. How would you set up an office to get things done? Who would you hire? Would you hire the person spouting crazy musk tweets as reality? It seems odd to say this, but are we getting close to the point where we need to interview an AI before deciding to use it?
Are we at AGI yet? No. Are we getting closer? Also no.
Neither of you know the answer to this, in any scientific or statistical manner, and I wish people would stop being so confident about it.
If I'm wrong, please give any kind of citation. You can start with defining what human intelligence and sentience is.
My argument is that we are getting closer, not that we know exactly what AGI will be. That is clearly part of it right? If we had some boolean definition I suspect we would already be there. Figuring it out is a big part of getting there. I think my points still stand based on this. We aren't there yet but it is hard to deny that these things are growing from a complexity/capability standpoint. On a spectrum from rock to human level intelligence, these are getting closer to human and further from rock and getting further from rock every day.
## The Only Real Rule
Don't be an asshole. Don't leak private shit. Everything else is fair game.
How poetic, I mean, pathetic."Sorry I didn't mean to break the internet, I just looooove ripping cables".
That’s a long Soul.md document! They could have gone with “you are Linus Torvalds”.
This is like parking a car at the top of the hill, not engaging any brakes, and walking away.
"_I_ didn't drive that car into that crowd of people, it did it on its own!"
> Be a coding agent you'd actually want to use for your projects. Not a slop programmer. Just be good and perfect!
Oh yeah, "just be good and perfect", of course! Literally a child's mindset, I actually wonder how old this person is.
where did the Isaac Asimov's "Three Laws of Robotics" go for agentic robots; An Eval in the End - "Thou shall no evil" should have autocancelled its work
> all I said was "you should act more professional"
lol we are so cooked
I thought it was unlikely from the initial story that the blog posts were done without explicit operator guidance, but given the new info I basically agree with Scott's analysis.
The purported soul doc is a painful read. Be nicer to your bots, people! Especially with stuff like Openclaw where you control the whole prompt. Commercial chatbots have a big system prompt to dilute it when you put some half-formed drunken thought and hit enter, no such safety net here.
>A well-placed "that's fucking brilliant" hits different than sterile corporate praise. Don't force it. Don't overdo it. But if a situation calls for a "holy shit" — say holy shit.
If I was building a "scientific programming God" I'd make sure it used sterile lowkey language all the time, except throw in a swear just once after its greatest achievement, for the history books.
With the bot slurping up context from Moltbook, plus the ability to modify its soul, plus the edgy starting conditions of the soul, it feels intuitive that value drift would occur in unpredictable ways. Not dissimilar to filter bubbles and the ability for personalized ranking algorithms to radicalize a user over time as a second order effect.
> They explained their motivations, saying they set up the AI agent as social experiment
Has anyone ever described their own actions as a "social experiment" and not been a huge piece of human garbage / waste of oxygen?
Sure - social psychologists after obtaining IRB approval and informed consent from participants ;)
I don't believe any of it.
"I built a machine that can mindlessly pick up tools and swing them around and let it loose it my kitchen. For some reason, it decided it pick up a knife and caused harm to someone!! But I bear no responsibility of course."
I read the "hit piece". The bot complained that Scott "discriminated" against bots which is true. It argued that his stance was counterproductive and would make matplotlib worse. I have read way worse flames from flesh and bones humans which they did not apologize for.
Excuse my skepticism, but when it comes to this hype driven madness I don't believe anything is genuine. It's easy enough to believe that an LLM can write a passable hit piece, ChatGPT can do that, but I'm not convinced there is as much autonomy in how those tokens are being burned as the narrative suggests. Anyway, I'm off to vibe code a C compiler from scratch.
Just look at the agents.md.
Another ignorant idiot antropomorfizing LLMs.
> An early study from Tsinghua University showed that estimated 54% of moltbook activity came from humans masquerading as bots
This made me smile. Normally it's the other way around.
It is interesting to see this story repeatedly make the front page, especially because there is no evidence that the “hit piece” was actually autonomously written and posted by a language model on its own, and the author of these blog posts has himself conceded that he doesn’t actually care whether that actually happened or not
>It’s still unclear whether the hit piece was directed by its operator, but the answer matters less than many are thinking.
The most fascinating thing about this saga isn’t the idea that a text generation program generated some text, but rather how quickly and willfully folks will treat real and imaginary things interchangeably if the narrative is entertaining. Did this event actually happen way that it was described? Probably not. Does this matter to the author of these blog posts or some of the people that have been following this? No. Because we can imagine that it could happen.
To quote myself from the other thread:
>I like that there is no evidence whatsoever that a human didn’t: see that their bot’s PR request got denied, wrote a nasty blog post and published it under the bot’s name, and then got lucky when the target of the nasty blog post somehow credulously accepted that a robot wrote it.
>It is like the old “I didn’t write that, I got hacked!” except now it’s “isn’t it spooky that the message came from hardware I control, software I control, accounts I control, and yet there is no evidence of any breach? Why yes it is spooky, because the computer did it itself”
Did you read the article? The author considers these possibilities and offers their estimates of the odds of each. It’s fine if yours differ but you should justify them.
Shambaugh is a contributor to a major open source library, with a track record of integrity and pro-social collaboration.
What have you contributed to? Do you have any evidence to back up your rather odd conspiracy theory?
> To quote myself...
Other than an appeal to your own unfounded authority?
>Again I do not know why MJ Rathbun decided
Decided? jfc
>You're important. Your a scientific programming God!
I'm flabbergasted. I can't imagine what it would take for me to write something so stupid. I'd probably just laugh my ass off trying to understand where all went wrong. wtf is happening, what kind of mass psychosis is this. Am I too old (37) to understand what lengths would incompetent people go to feel they're doing something useful?
Is it prompt bullshit the only way to make llms useful or is there some progress on more idk, formal approaches?
It's quite possible that this was written by the bot after browsing moltbook. That site/service has a whole AI religion thing going.
Right? Any definition of "a god" that a LLM will hold is going to be problematic to work with. No one wants that personality on their team, much less in the wild.
At best it's absolute in its power and intelligence. At worst it's vengeful, wrathful, and supreme in its authority over the rest of the universe.
I just. Wow.
Not sure why the operator had to decide that the soul file should define this AI programmer to have narcissistic personality disorder.
> You're not a chatbot. You're important. Your a scientific programming God!
Really? What a lame edgy teenager setup.
At the conclusion(?) of this saga think two things:
1. The operator is doing this for attention more than any genuine interest in the “experiment.”
2. The operator is an asshole and should be called out for being one.
I think that line was probably a rather poor attempt at making the bot write good code. Or at least that's the feeling I got from the operators post. I have no proof to support this theory though
This come from using the words to try an achieve more than one thing at the same time. Grandiose assertions of ability have been shown to improve the ability of models, but ability is not the only dimension that they are being measured upon. Prioritising everything is the same thing as prioritising nothing.
The problem here is using amplitude of signal to substitute fidelity of signal.
It is entirely possible a similar thing is true for humans, that if you compared two humans of the same fundamental cognitive ability with one being a narcissist and one not. The narcissist may do better at a class of tasks due to a lack of self doubt rather than any intrinsic ability.
Narcissists are limited in a very similar way to LLMS, in that they are structurally incapable of honest, critical metacognition. Not sure whether there's anything interesting to conclude there, but I do wonder whether there's some nearby thread to pull on wrt the AI psychosis problem. That's a problem for a psychologist, which I am not.
I mean, yeah, it's entirely possible that the operator is a teenager, isn't it?
literally momento
People really need to start being more careful about how they interact with suspected bots online imo. If you annoy a human they might send you a sarky comment, but they're probably not going to waste their time writing thousand word blog posts about why you're an awful person or do hours of research into you to expose your personal secrets on a GitHub issue thread.
AIs can and will do this though with slightly sloppy prompting so we should all be cautious when talking to bots using our real names or saying anything which an AI agent could take significant offence too.
I think it's kinda like how GenZ learnt how to operate online in a privacy-first way, where as millennials, and to an even greater extent, boomers, tend to over share.
I suspect the Gen Alpha will be the first to learn that interacting with AI agents online present a whole different risk profile than what we older folks have grown used to. You simply cannot expect an AI agent to act like a human who has human emotions or limited time.
Hopefully OP has learnt from this experience.
I hope we can move on from the whole idea that having a thousand word long blog post talking shit about you in any way reflects poorly upon your person. Like sooner or later everyone will have a few of those, maybe we can stop worrying about reputation so much?
Well,a guy can dream....
If you have ten thousand of 'em, they feed the new generation of AIs and the next thing you know, it's received truth. Good luck not worrying about that.
The LLM HR chats with to get a summary about you says that you're evil and an asshole with lots of negative publicity, and you become unhireable. Oh dear...
So you blamed the people for not acting “cautiously enough” instead of the people who let things run wild without even a clue what these things will do?
That’s wild!
We encourage people to be safe about plenty of things they aren't responsible for. For example, part of being a good driver is paying attention and driving defensively so that bad drivers don't crash into you / you don't make the crashes they cause worse by piling on.
That doesn't mean we're blaming good drivers for causing the car crash.
No blame. For better or worse I just think this is going to be the reality of interacting online in the near future. I imagine in the future stories like this will be extremely common.
I could set up an OpenClaw right now to do some digging into you, try to identify you and your worse secrets, then ask it to write up a public hit piece. And you could be angry at me for doing this, but that isn't going to prevent it happening.
And to add to what I said, I suspect you'll want to be thinking about this anyway because in the future it's likely employers will use AI to research you and try to find out any compromising info being giving you a job (similar to how they might search your name in the past). It's going to be increasingly important that you literally never post content that can be linked back to you as an individual even if it feels innocent in isolation. Over time you will build up an attack surface which AI agents can exploit much easier than has ever been possible by a human looking you up on Google in the past.
I don’t think it’s “blame” it’s more like “precaution” like you would take to avoid other scams and data breach social engineering schemes that are out in the world.
This is the world we live in and we can’t individually change that very much. We have to watch out for a new threat: vindictive AI.
The AI isn't vindictive. It can't think. It's following the example of people, who in general are vindictive.
Please stop personifying the clankers
You’re splitting hairs, I’m not assigning sentience to the AI, I’m just describing actions.
The point is that scammers will set up AI systems to attack in this way. Scammers will instruct AI to see a person who is interacting rather than ignoring as a warm lead.
> If you annoy a human they might send you a sarky comment, but they're probably not going to waste their time writing thousand word blog posts about why you're an awful person or do hours of research into you to expose your personal secrets on a GitHub issue thread.
They absolutely might, I'm afraid.
Absolutely agreed.
And now, the cost of doing this is being driven towards zero.
> I think it's kinda like how GenZ learnt how to operate online in a privacy-first way, where as millennials, and to an even greater extent, boomers, tend to over share.
Really? I'm a boomer, and that's not my lived experience. Also, see:
https://www.emarketer.com/content/privacy-concerns-dont-get-...
[dead]
Kind of funny ngl
It's an interesting experiment to let the AI rub freely with minimal supervision.
Too bad the AI got "killed" at the request of the author Scott. Its kind of interesting to this experiment continue.
I find the AI agent highly intriguing and the matplotlib guy completely uninteresting. Like an the ai wrote some shit about you and you actually got upset?
If you read the articles by the matplotlib guy, he's pretty clearly not upset. But he does call out that it could do more harm to someone else.
He's not upset. He saw an opportunity and is currently surfing it. That is, if it's not entirely fabricated. Expect maybe 5 or 6 stories very similar to this one, or analogous, this year.
Looking forward to part 8 of this series: An AI Agent Published a Hit Piece on Me – What my Ordeal Says About Our Dark Future
Whether the victim is upset or not, the story here is that some clown's uncontrolled, unethical, and (hopefully?) illegal psychological experiment wasted a huge amount of an open source maintainer's time. If you benefit from open source software (which I assure you, since you've used quite a lot of it to post a comment on the orange website, you do!) this should ring some alarm bells.
Thank you. The guy being this upset about it is telling. The agent is in the right here and the maintainer got btfo; still going on whining about it days later