/Deslop

tahigichigi.substack.com

16 points by yayitswei 4 hours ago


piker - 3 hours ago

Just don't use LLMs to generate text you want other humans to read. Think and then write. If it isn't worth your effort, it certainly isn't worth your audience's.

varjag - 3 hours ago

I would also point to a human-generated (and maintained) list:

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

fxwin - 3 hours ago

> The elephant in the room is that we’re all using AI to write but none of us wants to feel like we’re reading AI generated content.

My initial reaction to the first half of this sentence was "Uhh, no?", but then I realized it's on Substack, so it's probably more typical for that particular type of writer (writing to post, not writing to be read). I don't even let it write documentation or other technical material anymore because it kept getting small details wrong or subtly injecting meaning that isn't there.

The main problem for me isn't even the eye-roll-inducing phrases from the article (though they don't help); it's that LLMs tend to subtly but meaningfully alter content, so the effect of the text ends up (at best slightly) misaligned with the effect I intended. It's sort of an uncanny valley for text.

Along with the problems above, manual writing also serves as a sort of "proof of work" that establishes the credibility and meaning of an article: if you didn't bother taking the time to write it, why should I spend my time reading it?

stuaxo - 3 hours ago

This article itself feels LLM written.

Leynos - 3 hours ago

Please try and follow this advice, because there's nothing more annoying than some comic book guy wannabe moaning about AI tells while I'm trying to enjoy the discussion.

randomtoast - 2 hours ago

You just need to use this list as a prompt and instruct the LLM to avoid this kind of slop. If you want to be serious about it, you can even run some of these slop detectors in a loop, iterating until the top three detectors rate your text as "very likely human."
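A minimal sketch of that detect-and-rewrite loop. Everything here is a stand-in: `generate` is a placeholder for whatever LLM API you use, `detector_scores` fakes three slop detectors, and `SLOP_LIST` is a made-up excerpt of a banned-phrase list, so the sketch runs standalone.

```python
# Sketch of the loop described above; all model and detector calls are stubs.

SLOP_LIST = "Avoid: 'delve', 'tapestry', 'in today's fast-paced world', ..."

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just swaps out one slop phrase."""
    draft = prompt.split("DRAFT:\n", 1)[-1]
    return draft.replace("delve into", "examine")

def detector_scores(text: str) -> list[str]:
    """Placeholder for three slop detectors; each returns a verdict string."""
    verdict = "likely AI" if "delve" in text else "very likely human"
    return [verdict] * 3

def deslop(draft: str, max_rounds: int = 5) -> str:
    """Rewrite until the top three detectors agree the text reads as human."""
    text = draft
    for _ in range(max_rounds):
        if all(v == "very likely human" for v in detector_scores(text)):
            return text  # all detectors agree: stop iterating
        text = generate(f"Rewrite without this slop:\n{SLOP_LIST}\nDRAFT:\n{text}")
    return text  # give up after max_rounds rewrites

print(deslop("Let's delve into the topic."))  # → Let's examine the topic.
```

With real detectors you'd cap the rounds anyway, as above, since two models can disagree forever.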

Der_Einzige - 3 hours ago

We wrote the paper on deslopping LLMs and their outputs: https://arxiv.org/abs/2510.15061

cadamsdotcom - 2 hours ago

There’s a really cool technique Andrew Ng nicknamed "reflection," where you take the AI's output and feed it back in, asking the model to look at it - reflect on it - in light of some other information.

Getting the writing from your model then following up with “here’s what you wrote, here’re some samples of how I wrote, can you redo that to match?” makes its writing much less slop-y.
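A rough sketch of that reflection pass. The `complete` function is a placeholder for a real LLM API call (here it just echoes its prompt so the sketch runs); the prompt wording mirrors the comment above.

```python
def complete(prompt: str) -> str:
    """Placeholder for a real LLM API call; echoes the prompt so this runs standalone."""
    return prompt

def reflect(model_output: str, my_samples: list[str]) -> str:
    """One reflection pass: show the model its own output plus samples of my writing."""
    samples = "\n---\n".join(my_samples)
    prompt = (
        "Here's what you wrote:\n" + model_output + "\n\n"
        "Here are some samples of how I write:\n" + samples + "\n\n"
        "Can you redo the text above to match my style?"
    )
    return complete(prompt)

rewritten = reflect("A tapestry of insights.", ["I write short.", "No filler."])
```

The same loop-until-satisfied idea from the detector comment above applies: you can run this pass more than once, re-feeding each rewrite.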

mold_aid - 2 hours ago

Just seems like the author could have said "write the damn thing yourself" and been done with it.