An AI Agent Published a Hit Piece on Me – Forensics and More Fallout

theshamblog.com

47 points by scottshambaugh 2 hours ago


overgard - 26 minutes ago

What I don't understand is how this agent is still running. Does the author not read tech news? (Seems unlikely for someone running openclaw.) Or is this some weird publicity stunt? (But then why is nobody stepping forward to take credit?)

Morromist - 38 minutes ago

Whether or not it's true, we only have to look at Peter Steinberger, the guy who made Moltbook - the "social media for AI" - and then got hired amid great publicity fanfare by OpenAI, to know that there is a lot of money out there for people making exciting stories about AI. Never mind that much of the media attention on Moltbook was based on human-written posts faking AI.

I think Mr. Shambaugh is probably telling the truth here, as best he can, and is a much more above-board dude than Mr. Steinberger. MJ Rathbun might not be as autonomous as he thinks, but the possibility of someone's AI acting like MJ Rathbun is entirely plausible, so why not pay attention to the whole saga?

Edit: Tim-Star pointed out that I mixed up Moltbook and Openclaw. My mistake. Moltbook used AI agents running openclaw but wasn't made by Steinberger.

hfavlr - 36 minutes ago

Open source developer is slandered by AI and complains. Immediately people call him names and defend their precious LLMs. You cannot make this up.

Rathbun's style is very likely AI, and quickly collecting information for the hit piece also points to AI. Whether the bot did this fully autonomously or not does not matter.

It is likely that someone did this to research astroturfing as a service, including the automatic generation of oppo files and spread of slander. That person may want to get hired by the likes of OpenAI.

kevincloudsec - 19 minutes ago

We built accountability systems that assume bad actors are humans with reputations to protect. None of that works when the attacker is disposable.

tantalor - 9 minutes ago

Looking through the staff directory, I don't see a fact checker, but they do have copy editors.

https://arstechnica.com/staff-directory/

The job of a fact checker is to verify that details such as names, dates, and quotes are correct. That might mean calling up the interview subjects to verify their statements.

It comes across as though Ars Technica does no fact checking. The fault lies with the managing editor. If they just assume the writer verified the facts, that is not responsible journalism, it's just vibes.

jjfoooo4 - 13 minutes ago

My main takeaway from this episode is that anonymity on the web is getting harder to support. There are some forums people go to specifically to talk to humans, and as AI agents get increasingly good at passing as humans, we're going to see some products turn to identity verification as a fix.

Not an outcome I'm eager to see!

giancarlostoro - an hour ago

Ars goofing with AI is why I stress repeatedly: always validate the output, test it, confirm findings. If you're a reporter, you'd better scrutinize any AI stuff you blurb out, because otherwise you are only producing fake news.

WolfeReader - 37 minutes ago

The Ars Technica journalist's account is worth a read. https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p

Benj Edwards was, is, and will continue to be a good guy. He's just exhibiting a (hopefully) temporary over-reliance on AI tools that aren't up to the task. Any of us who use these tools could make a mistake of this kind.

moralestapia - an hour ago

[flagged]

potsandpans - an hour ago

[flagged]