AI slop is killing online communities

rmoff.net

258 points by thm 3 hours ago


carlgreene - 3 hours ago

I have largely written Reddit off and no longer visit it, after an experiment where I had an agent karma-farm for me and do some covert advertising. As I went through the posts it wrote, I realized that as a reader I would have NO idea they were written by a computer. Many, many people (or other bots) had full-on conversations with it, and it scared me a bit.

I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.

Online communities are definitely dying. I guess I hope that IRL communities see a resurgence in the wake of this.

Trusteando - 5 minutes ago

I have long thought about the problem of identity verification; there is a 6,000-line protocol for changing the way identity is verified.

agustechbro - 3 hours ago

I kind of feel this might be good. Bot-written comments and AI media that can no longer be distinguished from the real thing will make us humans leave the social networks, which only served to separate us. We'll go back to the real world, where you can truly believe what you see, and enjoy the tone, look, and scent of our fellow human beings.

CrzyLngPwd - 2 hours ago

I run a niche creative community, and we outlawed AI-generated content in 2022 as it was easy to see how corrosive it would be to the community.

It hasn't been easy. We ban fake AI accounts daily and turn away around 600 AI content-creator accounts monthly.

It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.

I fear losing the battle.

culebron21 - an hour ago

Sadly the imperative is, as so often, a call for everyone to be a good guy and make less noise. Unfortunately, that doesn't work at either the personal level or the global one.

You may be quiet, but what if a friend/acquaintance/colleague gets possessed by some AI slot machine and starts sharing his "products" enthusiastically? I had such a case; I was dismissive and rude right from the very beginning, and it didn't work -- he keeps sharing various artifacts.

On a global level, yes, communities die out. I think global communication has reached the point where it's more a liability than a benefit. In the late '90s and early '00s, maybe until the early '10s, getting more connected could lead you to good clients, getting hired, etc. Nowadays, even before ChatGPT in '22, every such area had become overcrowded and underbid, and LLMs, surprisingly, added little that was new; they just amplified the trend.

dwaltrip - 29 minutes ago

Like many modern woes, it’s a problem of trust.

The baseline level of trust in an online interaction has been eroded significantly by LLMs.

The question is, how can we reverse this trend and increase trust?

I have a sneaking suspicion that it would help enormously if the stock prices of the largest companies in the world were not tied to how effective they are at hijacking as much of humanity’s time and attention as possible.

Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.

Let’s empower people to effectively have more control over the content they interact with.

Social dynamics can make this difficult. We all want to be in the loop. The recent striking successes of the movement to ban phones in schools give me hope.

motbus3 - an hour ago

The company I work for has a deep-rooted community side, and despite what big tech does, I am 100% confident that every community feature we have exists for the users' benefit. No gray area. Just that.

Since the AI sloppification, we have lost a considerable amount of traffic to bots. But worse than that, we have lost users who tended to contribute back and help others.

We have multiple ways of exposing community data to members, so it's not that we're at a loss because of that; it's more that we have 30 years or so of good feedback on how the community around the platform was good for people, and now all of it is at risk...

Don't get me wrong, my work is work... There are premium features and so on, but the amount of value you can get for free is what the platform is known for. And we know many people use it for free for years, then subscribe when they need to or can, and mostly stay for years and years.

The fact that people are losing those connections is depressing to me.

noahgolmant - 2 hours ago

There has to be room for an AI-driven project that expresses a unique idea, even if there's no community around it yet. Someone has to express it, and from now on that idea will largely be implemented with AI.

> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.

I agree 100% with the novel contribution aspect. But there's some nuance there.

For example a project might have no active contributors. It might not be something you can drop directly into your codebase. Neither of those is inherently bad.

As AI becomes more responsible for higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.

I notice this in my own work a lot. I might not use that project's code directly. But I think about a problem differently as a result. I often point my agent to existing OSS projects as inspiration on how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches etc. Unfortunately OSS activity doesn't capture this.

sixhobbits - 14 minutes ago

This kind of thing makes me sad that Keybase sold out to Zoom, and makes me wonder if it can be resurrected. It was such a simple web of trust, and it went viral enough that I still occasionally see it on HN or Twitter profiles even though it's been long dead.

There are maybe 20 or so online handles I know, some of whom I've met in person, who I deeply trust. To the extent that I fully trust anyone they vouch for too.

Even with just one degree, that's a large enough international semi anonymous online community that can provide value to each other through online text based communication. Doesn't need iris scans or credit card checks, just "patio11 on hn Twitter and whatever his domain is is one of the good uns" and a network effect from there.
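A one-degree web of trust like this could be sketched as follows; all handles and vouch data here are purely hypothetical, just to illustrate the idea:

```python
# Hypothetical sketch of one-degree transitive trust.
# All handles and vouch data are invented for illustration.

TRUSTED = {"patio11", "sixhobbits"}  # handles I trust directly

# Public vouches: handle -> handles they vouch for
vouches = {
    "patio11": {"alice_hn", "bob_dev"},
    "sixhobbits": {"carol_oss"},
    "mallory": {"spambot9000"},  # vouches from untrusted handles don't count
}

def is_trusted(handle: str) -> bool:
    """Trust a handle directly, or at one degree via a trusted voucher."""
    if handle in TRUSTED:
        return True
    return any(handle in vouches.get(t, set()) for t in TRUSTED)
```

The key property is that trust only flows outward from the handles you chose yourself, so a spammer vouching for other spammers gains nothing.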

I'm already seeing some form of this reputation staking in, e.g., Pi PRs: everyone is treated as clanker slop by default, but the bar to prove yourself and build reputation remains quite low.

I don't think online communities will stay the same in the face of AI, but I do think whatever comes next will strongly rhyme.

Aeroi - 3 hours ago

You're absolutely right!

muldvarp - 15 minutes ago

Wasn't that obvious the second ChatGPT 3.5 was released?

olup - 2 hours ago

I feel that a lot in my side projects: maybe one should keep the half-baked AI repo to oneself and instead share the experiment, the thesis, and the learnings from building it. No one cares much about the (un)finished product, as in most cases it can be replicated, better, with a couple of hours of Claude coding.

For instance, I really liked how Karpathy shared a high-level idea for an LLM-based wiki. It was sadly followed by a long tail of "Here is my LLM wiki product" posts that no one cares about, each pointing to a generic LLM-generated landing page.

pupppet - 2 hours ago

I want my future community apps and sites to build in a bot flagger. I don't care how hard it is; the community that gets this right is the one I'll jump ship to.

janice1999 - 2 hours ago

Question for web devs: are captchas effective anymore? If Reddit required a captcha on every comment, would it actually decrease bot comments?

dwa3592 - 2 hours ago

When LLMs were new on the scene, I thought trust would fade in the written (text) medium, and I saw it happening on Substack, Medium, and Reddit. But then VCs pumped in so much money, and AI has gotten into every other modality (audio, video). The only things I really interact with these days are the human beings sitting in front of me, phone calls with people I know, and Hacker News. Life seems sorted, but something feels missing as well.

Edit: I am not anti-AI, but it is slowly killing digital human interaction.

originalvichy - an hour ago

Re: "The Asymmetry of Bullshit"

I'm going to speak up for language models' capability to make online communities better. Lately, the frustrating forum phenomenon of "learned helplessness" has been making me too annoyed to participate. Even in as fantastic a subreddit as /r/LocalLLaMA, there are people posting replies in the vein of:

> user1: please help me understand this acronym the post title speaks of
> user2: (explains in detail what it means)

In the "good old days", a low effort, surface level question would result in someone either muting or banning the person to keep the discussion high quality.

There I am, browsing a forum dedicated to LLM enthusiasts, and an unbelievable number of people are asking LMGTFY/RTFM-level questions they could find an answer to even in a free Google Search AI summary, and people are rewarding them by responding with real effort.

Now that models are quite good at answering the basics, the ban hammer should be swung more swiftly when people keep polluting forums with low-quality posts. There's no need to feel bad for those who don't have the time or capability to read through years of forum posts before asking.

Maybe the authors of these sloppy posts can even be outright muted or banned with a heavier hand, for the sake of quality.

throw7 - an hour ago

"Build with AI."

No, I don't think I will.

liminis - 2 hours ago

I'll remove the particulars to avoid anything partisan, but:

I failed to truly appreciate how cooked Reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a "chad meme" starring a particular political leader whose unpopularity is hard to adequately convey to foreigners.

It was not just that the post had been so heavily upvoted; the comment section itself was chanting a mantra, more or less, with very little actual conversation, just the same sentiment echoed over and over, and all those comments in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't know if I'd ever even written the word 'astroturfing' before expressing my bafflement to a friend, so I don't think I'm very tinfoil-hat about these things.

It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded -- monotonously -- like he was the second coming.

ianbutler - 3 hours ago

I made this point elsewhere, but people are learning what the rest of us had to learn the old way: for the most part, no one cares about your stuff, and now the value provided has to go way up to get people to care. That is, as the author says, the novelty has worn off, and since we know it's AI, the perceived value is also way down.

We're all recalibrating.

I really do think this is just a brief period before most people realize that slop posting doesn't get them anything personally. Most will give up, and we'll go back to roughly the same ratio of cool things with real value, just at a bigger scale, because AI helps one person do more.

tailscaler2026 - 2 hours ago

Online communities that allow upvoting / downvoting have been effectively dead for a long time because it's easy to manipulate conversations by elevating and punishing comments to fit a narrative. This is especially true on HN.

rglover - 26 minutes ago

In a clip: https://youtu.be/WAZljmaRxE4?si=i1p4jn3zxgmQrKUk

OgsyedIE - 2 hours ago

It sucks that the narrative framing device of 'human slop' has vanished in the last year. Some subreddits, like all the location subreddits, lifestyle subreddits like malefashionadvice and redscarepod, and entry-level academic subreddits like math and criticaltheory, were already hives of human slop before AI came around, because of a structural design of the site that had the side effect of normalising a total absence of quality control.

Upvotes are not a good mechanism for quality control, because they force good content to carry the same metadata as content that is technically well-constructed but irrelevant, meaningless, a platitude, too obvious to be worth saying, or pablum. Upvotes turn everything into a shock-value-dominated 101 space.

retinaros - 26 minutes ago

AI is lifting the voices of the lazy and of below-average to average people. For those who would never have progressed otherwise, it might seem like a God-given gift. For those with the desire to grow, learn, and go beyond average... it is a curse.

CM30 - 2 hours ago

There's a lot of focus on tech projects here, but it's not just vibe written projects that are ruining communities now.

No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.

YouTube is arguably even worse, since AI-led content farms are not just spamming the hell out of every topic under the sun but giving outright dangerous advice and misinformation on top of that. I saw a video earlier about medical misinformation from these 'creators', and it genuinely made me want to see platforms crack down on this junk:

https://www.youtube.com/watch?v=UEfCTCBDKIU

And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?

The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...

If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.

mrkramer - 3 hours ago

The importance of good search engines and good discovery engines will grow even more.

AuthAuth - 34 minutes ago

AI slop is killing the mainstream communities, and the alternative communities are filled to the brim with tankies/nazis (unironically).

geoffdouglas - 3 hours ago

This is a good thing. Social media was already slop before AI. If this gets more intellectuals off these websites and spending their time on better things, then I love AI slop's purpose. There's more to the internet than Reddit, TikTok, and YouTube. Really, there is. If your circle of friends is small or nonexistent without visiting the same dotcoms, you have an issue that is worse than any AI slop, tbh.

spookymutation - 2 hours ago

I have been reading HN near-daily for years.

This synthetic participation (LLM or otherwise) has exposed weak spots in HN's high-trust environment. The weight we give to the average HN comment is orders of magnitude higher than the weight given to the average Reddit (& co.) comment, and this relationship probably cuts both ways (a much higher ROI on ads/propaganda). Due to the low volume and high trust, HN seems to be a very different (easier) environment in which to achieve pervasive propaganda/advertising/etc. with a disproportionate impact.

I remember when some new LLM version came out (maybe from Meta?), something like 3 of the top 10 posts on the front page were all variations of "Foobar 2.1 New Model". Perhaps not explicit, deliberate manipulation, but the result was the same, and apparently allowed. How many of those generic LLM websites (https://letsbuyspiritair.com/ comes to mind) show up on the front page per day? Zero-effort static front-ends for some unremarkable data. I'm not going to touch the politics minefield, but that is a weak spot too.

All of this, and yet I think HN has handled it relatively well. I really appreciate not seeing comments of the form "I asked Clog/Gemini/etc. here's 5 paragraphs". Places like Reddit do not have the agility or control, and have degraded accordingly.

It makes me sad to think that a short time ago, every forum was ~100% humans, and now it is some fraction of that. I wonder if I will ever see that again.

onlytue - 3 hours ago

HN is in peril, and I don't think that's a bad thing. Or rather, I'd like to bring back the old chestnut: it's a good thing.

While the site has moved to using /showlim, the AI garbage just bypasses it and goes straight to the home page. Almost every project being shown is vibe-coded and looks exactly the same, generated by Claude or the like. This is an excellent test for the site: will it be able to adapt, or do we simply end up with a husk of what HN was, with AI posts driving the majority of engagement, the Overton window, and the upvotes/downvotes?

I look forward to this, I think it is an exciting development.

arjie - 2 hours ago

Human slop is realistically just as bad. In a strange twist, human commentary on the Internet is asymptotically approaching an older LLM. Trite cliches, repetitive tropes, and tribal affiliation signals dominate conversation.

I have turned to blunt instruments: blocking individuals on their first cliche banner-wave. It has substantially improved comment quality but I still suffer from the problem that I don’t block stories entirely.

zby - an hour ago

So no hope for https://xkcd.com/810/?

parliament32 - an hour ago

It's not a bad thing though. "Online communities" really, really suck nowadays. It's honestly for the best that they suffocate under their own pile of slop. HN is at least bearable since AI-generated comments are forbidden here -- but I wish we'd also ban AI-related submissions.

01284a7e - 3 hours ago

How would one build an online community free of LLM agent commenters and links to "slop" content?

Strict invitation trees? Small signup fees? No SEO incentives?
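As a sketch of what the first option might look like (all member names below are invented), an invitation tree records who invited whom, so a moderator can prune a bad actor's entire invite subtree at once:

```python
# Hypothetical invitation tree: every account records who invited it,
# so banning a slop account can take out everyone it invited too.
# All member names are invented for illustration.

invited_by = {
    "alice": None,     # founding member
    "bob": "alice",
    "carol": "bob",
    "dave": "bob",
    "erin": "alice",
}

def subtree(member: str) -> set:
    """The member plus everyone they (transitively) invited."""
    out = {member}
    for m, inviter in invited_by.items():
        if inviter == member:
            out |= subtree(m)
    return out

def ban_with_subtree(member: str) -> None:
    """Remove a member and their whole invite subtree."""
    for m in subtree(member):
        invited_by.pop(m, None)
```

The design choice is that inviters stake their own reputation on invitees: spam accounts can only enter through an existing member, and that member's whole branch is at risk if they invite carelessly.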

foxfired - 2 hours ago

For every argument against AI slop, you will get a variation of "it's the future", or "I'm 10x more productive now", "I've shipped 3 applications in 2 days", etc.

They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.

There is a reason the mostly vibe-coded Show HN projects don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a shell of their sources.

josefritzishere - 3 hours ago

The writing here is good. Quote of the day: "Any fool can feed coins into a fruit machine and pull the arm."

RobRivera - an hour ago

Welcome to the club

whatever120 - an hour ago

This post is slop.

dfxm12 - 2 hours ago

AI slop is hurting my community in a different way. At work, we have an internal Viva Engage community for quick development how-to questions. More and more frequently, instead of asking how-to questions to the crowd to crowdsource answers, people are reaching out to me directly to ask why the solution the AI suggested doesn't work.

That people trust AI over organizational knowledge is bad enough. I fear that AI is turning people generally antisocial.

troupo - 2 hours ago

Related, from a couple of days ago: Knitting Bullshit https://katedaviesdesigns.com/2026/04/29/knitting-bullshit/

slopinthebag - 2 hours ago

There are "nice", "polite" slop enthusiasts. The ones who insist they have taste and tact. They would never post bad slop, recklessly, only the very highest-quality human-refined, curated slop. Not really slop at all, they would argue, because they gave it a careful review before posting it. They insist there's a very important difference between this premium slop and the nasty kind, and that low-quality human-authored media is actually slop, too, when you think about it. They talk about how important it is for people to use slop thoughtfully, efficiently, correctly, and that we all need to learn about and discuss slop constantly because it's the inevitable future and highly relevant for everyone.

They muddy the waters. They wheedle, rules-lawyer, carve out exceptions, and talk about how important it is to have nuance in separating virtuous applications for slop from bad ones, and that focusing on the bad ones is actually very tedious and rude. We should have polite discourse about the good things about slop and stop being so mean about bad slop, which isn't even really a problem. The bad kinds of slop will be solved soon, probably, and the harms are overstated. They colonize spaces.

If moderators don't swiftly throw these slop enthusiasts out on their ass, slightly less polite ones will post slop slightly less politely. More and more of the people participating in the space will have favorable opinions toward slop, and shout down people who object to slop. In no time at all, your community is a slop bar. Who could have imagined?

59qlkjah - 2 hours ago

Sigh. The article states that "coding by LLM is the way things are done right now" in 10 different ways, yet insists that message boards and articles need to be protected.

We get it, the current narrative is that coding is the big thing, promoted by billionaires and scabs alike.

So, the coding narrative must be protected until the IPO of Juniper^H^H^H Anthropic happens and the whole thing implodes.

You could already get code for free, and faster, by using "git clone", without a company of thieves selling your own output back to you.

phoronixrly - 3 hours ago

> AI slop is driving up the noise, and making the signal more and more difficult to discern in communities.

Thank you OP, this puts into words why I no longer look at Show HNs.

imadierich - 10 minutes ago

[dead]

bamboozled - 11 minutes ago

[dead]

trash3 - 2 hours ago

[dead]

animanoir - an hour ago

[dead]

WolfeReader - 3 hours ago

[dead]