Slop Terrifies Me
ezhik.jp
156 points by Ezhik 5 hours ago
Now that generative AI products are becoming more widely used, it's a little depressing how folks don't seem to view the world with a broad historical context.
The "AI effect" on the world has many similarities to previous events and in many ways changes very little about how the world works.
> I'm terrified of the good enough to ship—and I'm terrified of nobody else caring.
For almost every product/service ever offered, it was possible to scale the "quality" of the offering while largely keeping the function or outcome static. In fact, lots of capitalistic activity is basically a search for the cheapest and fastest way to accomplish a minimum set of requirements. This leads folks (including me!!) to lament the quality of certain products/services.
For example, it's possible to make hiking boots that last a lot longer than others. But if the requirement is to have it last for just 20 miles, it's better to pay less for one that won't last as long.
Software is the same way. Most users just absolutely do not know about, care about, or worry about security, privacy, maintainability, robustness, or a host of other things. For some reason this is continually terrifying and shocking to many.
There is nothing surprising here, it's been this way for many years and will continue.
Obviously there are exceptions, but for the most part it's best to assume the above.
I think this is far too nuanced. I am terrified by what the civilization we have known will become. People living in less advanced economies will do OK, but the rest of us not so much. We stand on the brink of a world where some wealthy people will get more wealthy, but very many will struggle without work or prospects.
A society where a large percent have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?
> Trouble is, who in power is willing to stop it?
Absolutely no one.
https://www.penguinrandomhouse.ca/books/719111/survival-of-t...
It's no coincidence that populism is rising. That's the non-violent way out - electing leaders that are willing to change the dynamics a lot.
I definitely recommend watching this video with Reinhold Niebuhr.
Sure some things deteriorate, but many things improve. Talking about a net decline (or net gain) is very difficult.
Every age has its own set of problems that need to be solved.
Part of it is also that when we look back we think of people's suffering as sacrifices that needed to be made. Now that it's us being sacrificed it really shifts a lot of people's perspective. I think we need a better solution than letting a bunch of people get fucked so that some other set of people in the right place can have shinier toys in the future. Society needs to handle these transitions better, especially as technology raises the stakes with mass surveillance and nuclear weapons.
Yes, that’s why they are racing to build the very advanced robots. To prevent the violence towards them.
Sort of. The thing building and being protected is capital, not humans. As Nick Land wrote:
"Robotic security. [...] The armed mass as a model for the revolutionary citizenry declines into senselessness, replaced by drones. Asabiyyah ceases entirely to matter, however much it remains a focus for romantic attachment. Industrialization closes the loop, and protects itself." [0]
The important part here is that "[i]ndustrialization [...] protects itself". This is not about protecting humans ultimately. Humans are not autonomous, but ultimately functions of (autonomous) capital. Mark Fisher put it like this (summarizing Land's philosophy):
"Capital will not be ultimately unmasked as exploited labour power; rather, humans are the meat puppet of Capital, their identities and self-understandings are simulations that can and will be ultimately be sloughed off." [1]
Land's philosophy is quite useful for providing a non-anthropocentric perspective on various processes.
[0] Nick Land (2016). The NRx Moment in Xenosystems Blog. Retrieved from github.com/cyborg-nomade/reignition
[1] Mark Fisher (2012). Terminator vs Avatar in #Accelerate: The Accelerationist Reader, Urbanomic, p. 342.
That is exactly the motivation. The problem with being a billionaire is you still have to associate with poor people. But imagine a world where your wealth completely insulates you from the resentful poor.
That notion is based on the misconception that for there to be very rich people, other people would need to be poor, people who would resent you.
Economic science has pretty much proven that when the average income in a society is higher and fewer are poor, the economy moves more money and the rich benefit more as well.
How does a billionaire have to associate with poor people? They can live in a complete bubble: house in the hills, driven by a chauffeur, private jets, private islands for holidays, etc.?
The people who cook for them, the people who clean for them, the ones who take care of their kids, the ones who sell them stuff or serve them in restaurants...
Also, they're not building the house or the jet, they're not growing the food, ... people close enough can be chosen for willingness to be sycophants and happiness to be servants. Unless you're feeding yourself from your own farm, or manufacturing your own electronics, there are limits to even a billionaire's ability to control personnel.
They have separate kitchens for the prep, the cleaners work while they’re out on the yacht, they have people to do the buying, and the restaurants they visit have very well trained staff who stay out of the way.
All that is irrelevant to the point: they still need to have those poor people around them, trust them, and even trust their security to them.
Unless they’re living entirely by themselves, they will always be dependent on poor people.
The fact that people see that the singularity is basically happening but can't imagine humanoid robots getting good rapidly is why most people here are bad futurists.
The fact that people see "the singularity happening" based on LLM results is why most people are the kind of ignorant cheerleaders of tech that, in 1950, predicted robot servants, flying cars, and space colonies by 2000.
Tomy made the Dustbot robot vacuum in 1985, Electrolux made the Trilobite robot vacuum in 1996, and then washing machines, dishwashers, tumble-dryers, microwaves, microwave meals, disposable diapers, fast fashion, and plug-in vacuums, floor steamers, carpet washers, home automation for lights and curtains, central heating instead of coal/wood fires and ash buckets, fridge-freezers and supermarkets (removing the need for canning, pickling, jamming, preserving), takeaways and food delivery, people having 1-2 children instead of 6-12 children. The amount of human labour in housework has plummeted since 1900.
Plenty of flying cars existed through the 1900s, including commercial ones: https://en.wikipedia.org/wiki/Flying_car
The International Space Station was launched in 1998.
This feels different. In the 1950s rapid technological progress had been driven by the pressures of the second world war and produced amazing things that held a lot of promise, but few appreciated the depth of complexity of what lay before them. A lot of that complexity had to be solved with software which expanded the problem set rather than solving it. If we have a general solution to the problem of software, we don't know that there are other barriers that would slow progress so much.
> the singularity is happening
[Citation needed]
No LLM is yet being used effectively to improve LLM output in exponential ways. Personally, I'm skeptical that such a thing is possible.
LLMs aren't AGI, and aren't a path to AGI.
The Singularity is the Rapture for techbros.
In my opinion, LLMs provide one piece of AGI. The only intelligence I’ve directly experienced is my own. I don’t consciously plan what I’m saying (or writing right now).
Instead, a subconscious process assembles the words to support my stream of consciousness. I think that LLMs are very similar, if not identical.
Stream of thought is accomplishing something superficially similar to consciousness, but without the ability to be innovative.
At any rate, until there’s an artificial human level stream of consciousness in the mix for each AI, I doubt we’ll see a group of AIs collaborating to produce a significantly improved new generation of AI hardware and software minus human involvement.
Once that does happen, the Singularity is at hand.
If you look at the rapid acceleration of progress and conclude this way, well, de Nile ain't just a river in Egypt.
Also yes LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...
This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.
> If you look at the rapid acceleration of progress
I don’t understand this perspective. There are numerous examples of technical progress that then stalls out. Just look at batteries for example. Or ones where advancements are too expensive for widespread use (e.g. why no one flies Concorde any more)
Why is previous progress a guaranteed indicator of future progress?
If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.
How do you take over the world if you have access to 1000 normal people? That's if AGI is by the original definition (long forgotten by now): surpassing the MEDIAN human at almost all tasks. How the rebranding of ASI into AGI happened without anyone noticing is kind of insane.
>If you look at the rapid acceleration of progress and conclude this way
There's no "rapid acceleration of progress". If anything there's a decline, and even an economic decline.
Take away the financial bubbles based on deregulation and a huge explosion of debt, and the last 40 years of "economic progress" are just a mirage filling a huge bubble with air in actual advancement terms - unlike the previous millennia.
That’s completely wrong. There was barely any progress in previous millennia. There was even an economics Nobel Prize for showing why!
> rapid acceleration
Who was it who stated that every exponential was just a sigmoid in disguise?
> most humans are really stupid.
Statistically, don't we all sort of fit somewhere along a bell curve?
What rapid acceleration?
I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.
The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.
I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most very reasonable definitions of AGI.
If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.
> I look at the trajectory of LLMs, and the shape I see is one of diminishing returns
It seems even more true if you look at OpenAI funding through the 2022 initial public release, and at how spending has exponentially increased to deliver improvements since. We’re now talking upwards of $600B/yr of spending on LLM-based AI infrastructure across the industry in 2026.
Yes, and that's why surpassing it doesn't lead to a singularity except over an infinite timeframe. This whole thing was stupid in the first place.
Gaza is kept as a testing ground for domestic spying and domestic military technology intended to be used on other groups. Otherwise they'd have destroyed it by now. Stuff like Palantir is always tested in Gaza first.
People in power won't act out of foresight or ethics. They'll act when the cost of not acting exceeds the cost of doing something messy and imperfect.
They'll act when it profits them.
What's stopping them from good actions is not the fear of "doing something messy and imperfect". It's the lack of financial and power-grabbing motivation.
Even that’s giving them too much credit. They’ll burn it all down to preserve their fragile egos.
I wonder, will the rich start hiring elaborate casts of servants including butlers, footmen, lady's maids, and so on, since they'll be the only ones with the income?
They already do and always have. They never stopped hiring butlers (who are pretty well paid BTW), chefs, chauffeurs, maids, gardeners, nannies.....
The terminology may have changed a bit, but they still employ people to do stuff for them.
One big difference is while professional class affluent people will hire cleaners or gardeners or nannies for a certain number of hours, they cannot (at least in rich countries) hire them as full time live-in employees.
There are some things that are increasing. For example employing full time tutors to teach their kids, as rich people often used to do (say 100 years ago). So they get one-to-one attention while other people's kids are in classes with many kids, and the poor have their kids in classes with a large number of kids. Interestingly, the government here in the UK is increasingly hostile to ordinary people educating their kids outside school, which is the nearest we can get to what the rich do (again, hiring tutors by the hour, and self-supply within the household).
They also hire people to manage their wealth. I do not know enough about the history to be sure, but this also seems to be a return to historical norms after an egalitarian anomaly. A lot of wealth is looked after by full time employees of "family offices" - and the impression I get from people in investment management and high end property is that this has increased a lot in the last few decades. Incidentally, one of the questions around Epstein is why so many rich people let him take over some of the work that you would expect their family offices to handle.
A lot of it is probably more part-time but, yes, people who are some definition of rich spend more money on people to do more work for them (cleaning, landscaping, accounting, etc.) Doesn't mean they don't do any of those things--and outsourcing some can be more effort than it's worth--but they don't necessarily cut their own lawn or do car repairs.
If you are rich "outsourcing" is easy because you have people to handle that for you. You have senior servants like butlers and housekeepers who manage the rest of the staff, for example, so you are not directly hiring cleaners.
This is the difference between the affluent and the truly rich.
As far as I can tell, the rich have never stopped employing elaborate casts of servants; these servants just go by different titles now: private chef, personal assistant, nanny, fashion consultant, etc.
They already do. In fact, we are all working in service of their power trips.
Who do you think is building the machines for the rich? All of these tech companies are nothing without the employees that build the tech.
"People living in less advanced economies will do OK, but the rest of us not so much" how is this possible? are the less advanced economies protected from outside influences? are they also protected from immigration?
It's regression to the mean in action. Everethyng eventually collapses into olygarhy and wevwill simply joing the unpriviliged rest in their misery. Likely with few wars civil or not here and there
It's not oligarchy, it's feudalism.
I wholeheartedly recommend buying a new keyboard, by the way.
> very many will struggle without work or prospects.
People always say this with zero evidence. What are some real examples of real people losing their jobs today because of LLMs? Apart from copywriters (i.e. the original human slop creators) having to rebrand as copyeditors because the first draft of their work now comes from a language model.
Translators, graphic designers, soundtrack composers, call center/support workers, journalists, all have reported devastating losses coinciding with LLM use. And there's no shortage of companies' press releases about cutting thousands of jobs and saying it's because they leverage AI.
Call center workers are bound to a fixed script, they're basically humans who play robot as their job. Replacing this with AI is a welcome development. As for jobs like translator, graphic designer and journalist, it's only the extremely low-end work that can possibly be replaced with LLMs. Not an issue if they move upmarket.
> And there's no shortage of companies press releases about cutting down thousands of jobs and saying it's because they leverage AI.
These press releases are largely fake. "We're leveraging AI now" sounds a lot better than "whoops, looks like we overhired, we have to scale back and layoff workers because there's no demand for what we're doing".
>Replacing this with AI is a welcome development.
Not if you fed your kid doing it, and now you can't.
> As for jobs like translator, graphic designer and journalist, it's only the extremely low-end work that can possibly be replaced with LLMs. Not an issue if they move upmarket.
Yes, fuck the 90% of those working in that space, and let's hope the 10% gets an "upmarket" gig there.
All this grand-visioning sounds devoid of empathy and real understanding of millions of real people's situations and needs.
> Not if you fed your kid doing it, and now you can't.
Which happens all the time anyway. There just aren't that many people for whom being a call center worker is their long-term career, they'll just switch to some other job.
Bookkeepers, graphic artists.
I wouldn't let an LLM touch my business's books with a 10 foot pole.
Me neither, but that's because I used to be a bookkeeper. There are accountants, though, who have data entry people under them whom they market as "bookkeepers," and they are now being replaced by AI. Most small business owners in particular don't care.
>We stand on the brink of a world where some wealthy people will get more wealthy, but very many will struggle without work or prospects.
Brink? This has been the reality for decades now.
>A society where a large percent have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?
Nobody. They will try to channel it.
I think all signals are pretty inevitably pointing to three potential outcomes (in order of likelihood): WW3, a Soviet-style collapse of the West, or a Soviet-style collapse of the Sino-Russian bloc.
If the promise of AI is real I think it makes WW3 a much more likely outcome - a "freed up" disaffected workforce pining for meaning and a revolutionized AI-drone first battlefield both tip the scales in favor of world war.
Welcome to capitalism!
Besides being a bit of a shallow comment, what exactly do you imply here? That capitalism logically implies that the rich become richer? I don't think this is necessarily the case, it just needs a stronger government than what the US currently has in place. (e.g. progressive taxation and strong antitrust policy seem to work fairly well in Europe).
But with how compounding works, isn't this outcome inevitable in capitalism? If the strong government prevents it then the first step for the rich is to weaken or co-opt the government, and exactly this has been happening.
>That capitalism logically implies that the rich become richer? I don't think this is necessarily the case,
It doesn't need to imply anything. It's an ideology; those promoting it will say whatever BS attracts people to it. In practice, what has been happening in capitalist countries since the 1970s (when they abandoned all pretense) is that the rich get way richer and everybody else is fucked.
Versus what exactly? Communism? Where the rich got richer faster and people got fucked faster?
They taught you the only two alternatives are 1917 style communism or 2026 style capitalism?
Talk about a crap educational system.
We have a lot of people, capitalism values them as approaching zero, anything that alters that valuation (without reducing population) is contrary to capitalism. Capitalism means the rich must get richer, they own the resources and means of production, they take the reward.
It comes to a point where they need an underclass to insulate them from the masses; look how cheaply Trump bought his paramilitary though, he only had to spend the money taken from those he's suppressing, didn't even have to reduce his own wealth one bit; the military and his new brown shirts will ensure the rich stay rich and that eventually there is massive starvation (possibly water/fuel poverty first).
Or the USA recovers the constitution, recognises climate change and starts to do something about it.
It seems like the whole of humanity's future hinges on a handful of billionaires' megalomania, and that riding on the coattails of Trump's need to not face justice for his crimes.
Capitalism just means private citizens can own the means of production (e.g. start a business, buy stock) and earn a return on investment. It doesn’t mean only the rich must get richer. It means anyone who saves and invests their money instead of spending it gets richer.
However capitalism is perfectly compatible with a progressive taxation system such that the rich get richer at a lesser rate than the poor get richer.
I have deep concerns surrounding LLM-based systems in general, which you can see discussed in my other threads and comments. However in this particular article's case, I feel the same fears outlined largely predate mass LLM adoption.
If you substitute "artificial intelligence" with offshored labor ("actually indo-asians" meme moniker) you have some parallels: cheap spaghetti code that "mostly works", just written by farms of humans instead of farms of GPUs. The result is largely the same. The primary difference is that we've now subsidized (through massive, unsustainable private investment) the cost of "offshoring" to basically zero. Obviously that has its own set of problems, but the piper will need to be paid eventually...
LLMs are an embodiment of the Pareto principle. Turns out that if you can get an 80% solution in 1% of the time, no one gives a shit about the remaining 20%. I agree that’s terrifying. The existential AI risk crowd is afraid we’ll produce gods to destroy us. The reality is we’ve instead exposed a major weakness in our culture, where we’ve trained ourselves to care nothing about quality but instead to maximize consumption.
This isn’t news really. Content farms already existed. Amusing Ourselves to Death was written in 1985. Critiques of the culture exist way before that. But the reality of seeing the end game of such a culture laid bare in the waste of the data center buildout is shocking and repulsive.
The data center buildout feels obscene when framed this way. Not because computation is evil, but because we're burning planetary-scale resources to accelerate a culture that already struggles to articulate why quality matters at all.
There isn't nearly enough AI demand to make all of these projects turn a profit.
Very well put, one of the more compelling insights I've seen about this whole situation. I feel like it gets at something I've been trying to say but couldn't find the right words for yet.
Quality. Matters.
It always has, and it always will. If you're telling yourself otherwise, you are part of a doomed way of thinking and will eventually be outcompeted by those who understand the implications of thinking further ahead. [ETA: Unfortunately, 'eventually' in this context could be an impossibly long time, or never, because people are irrational animals who too often prioritize our current feelings over everything else.]
Slop existed before AI came along.
It's often lamented that the World Wide Web used to be controlled by indie makers, but now belongs to a handful of megacorp websites and ad networks pushing addictive content. But, the indie maker era was just a temporary market inefficiency, from before businesses fully knew how to harness the technology.
I think software development has gone through a similar change. At one point software companies cared about software quality, but this too was just an idealist, engineer-driven market inefficiency. Eventually business leaders realized they can make just as much money (but make it faster) by shoveling out rushed, bloated, garbage software, since even though poor-quality software aggravates people, it doesn't aggravate enough for the average person to switch vendors over it. (Case in point - I'm regularly astounded at how buggy the YouTube app is on Android of all platforms. I have to force-kill it semi-regularly to get it working right. But am I gonna stop watching YouTube because of this? Admittedly, no, probably not.)
Commercial ventures already had to care exactly to the extent that they are financially motivated by competitive forces and by regulation.
In my experience coding agents are actually better at doing the final polish and plugging in gaps that a developer under time pressure to ship would skip.
We should have also been talking about "devops slop" since 2007! "It's good enough": we have heard this for how many decades?
As much as we speak about slop in the context of AI, slop as the cheap low-quality thing is not a new concept.
As lots of people seem to always prefer the cheaper option, we now have single-use plastic ultra-fast fashion, plastic stuff that'll break in the short term, brittle plywood furniture, cheap ultra-processed food, etc.
Classic software development always felt like a tailor-made job to me and of course it's slow and expensive but if it's done by professionals it can give excellent results. Now if you can get crappy but cheap and good enough results of course it'll be the preferred option for mass production.
> 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.
I feel like long before LLMs, people already didn't care about this.
If anything software quality has been decreasing significantly, even at the "highest level" (see Windows, macOS, etc). Are LLMs going to make it worse? I'm skeptical, because they might actually accelerate shipping bug fixes that (pre-LLMs) would have required more time and management buy-in, only to be met with "yeah don’t bother, look at the usage stats, nobody cares".
Every successful software project reaches an equilibrium between utility for its operators and bugs, and that point very rarely settles at 0% bugs [1].
When software operators tolerate bugs they’re signaling that they’re willing to forego the fix in exchange for other parts of the feature that work and that they need.
The idea that consumers will somehow not need the features that they rely on anymore is completely wrong.
That leaves the tolerable bugs, but those were always part of the negotiation: coding agents don’t change that one bit. Perhaps all they do is allow more competitors to peel away those minority groups of users who are blocked by certain unaddressed bugs. Or maybe they get those bugs fixed because it’s cheaper to do so.
I don't think LLMs are the root cause or even a dramatic inflection point. They just tilt an already-skewed system a little further toward motion over judgment.
If it can enable very small teams to deliver big apps, I do think the quality will increase.
The terrifying part isn't obsolescence. It's mediocrity becoming the ceiling.
AI produces code that technically runs but lacks the thoughtfulness that makes software maintainable or elegant. The "90% solution" ships because economic pressure rewards speed over quality.
What haunts me: compilers don't make design decisions. IDEs don't choose architecture. AI does both, and most users accept those choices uncritically. We're already seeing juniors who've never debugged without a copilot.
The author's real question: what if most people genuinely don't care about the last 10%? Not from laziness, but because "good enough" is cheaper and we're all exhausted.
Dismissing this as "just another moral panic" feels too easy. The handcraft isn't dying because AI is too good. It's dying because mediocrity is profitable.
> I'm terrified that our craft will die, and nobody will even care to mourn it.
"Terrified" is a strong word for the death of any craft. And as long as there are thousands that love the craft, then it will not have died.
I was watching a YouTube video the other day where the guy was complaining his website was dropping off the Google search results. Long story short, he reworded it according to advice from Gemini; the more he did it, the better it performed, but he was reflecting on how the website no longer represented him.
Soon, we'll all just be meatpuppets, guided by AI to suit AI.
Actually that was Louis Rossmann, finding himself forced to convert his genuine advocacy for repair into textual convenience.
AI slop is similar to the cheap tools at Harbor Freight. Before, we used to have to buy really expensive tools that were designed to last forever and perform a ton of jobs. Now we can just go to Harbor Freight and get a tool that is good enough for most people.
"80% as good" may be reframed as "100% OK for 80% of the people." It is when you are in the minority that cares about or needs that last 20% that it becomes a problem, because the 80% were subsidizing your needs by buying more than they need.
One of the biggest problems with AI slop (the biggest problem) is that we aren't discerning or critical enough to ignore the bad stuff. It should be fine for people to use AI to generate tons of crap so long as people curate the good stuff to the top.
Why is slop assumed inevitable? These models are plagiarization and copyright laundering machines. We need a great AI model reset whereby all published works are assumed to opt-out of training and companies pay to train on your data. We've seen what AI can do, now fund the creators.
Good luck, there are too many forces working against that.
Only big creative companies like Disney can play the game of making licensing agreements. And they are ok with it because it gives them an edge over smaller, less organized creators without a legal department.
https://thewaltdisneycompany.com/news/disney-openai-sora-agr...
I don't think craft dies, but I do think it retreats.
"terrified".... overused word. As a man I literally can't relate. I get terrified when I see a shark next to me in the ocean. I get impatient when code is hard to debug.
We're pretty good at naming fear when it has a physical trigger. We're much worse at naming the unease that comes from watching something you care about get quietly hollowed out over time. That doesn't make it melodrama, just a different category of discomfort.
Step 1: Start looking beyond your code, as the stuff beyond your code is looking at you.
It's existential dread, of being useless and of not being able to thrive.
It's being compared to a slop machine, and billionaires claiming that it's better than you are in all ways.
It's having integrity in your work, but the LLM slop-machines can lie and go "You're actually right (tells more lies)".
It all comes down to LLMs serving to 'fix' the trillion dollar problem: people's wages. Especially those of engineers, developers, medical workers, and more.
I wonder how people like you would have fared even just 100y ago, if typing on a keyboard with your own fingers is so foundational to your identity.
If slop doesn't get better, it would mean that at least I get to keep my job. In the areas where the remaining 10% don't matter, maybe I won't. I'm struggling to come up with an example of such software outside of one-off scripts and some home automation though.
The job is going to be much less fun, yes, but I won't have to learn from scratch and compete with young people in a different area (and which I will enjoy less, most likely). So, if anything slop gives me hope.
I find working with LLMs much more fun and frictionless compared to the drudgery of boring glue code or tracking down nongeneralizable version-specific workarounds in GitHub issues etc. Coding LLMs let you focus on the domain of your actual problem instead of the low level stumbling blocks that just create annoyance without real learning.
The cream rises to the top. If someone's shit-coded program hangs and crashes frequently, in this day and age, we don't have to put up with it any longer. That lazy half-assed feature that everyone knows sucks but we're forced to use anyway? The competition just vibe coded up a hyper-specific version of that app that doesn't suck for everyone involved. We start looking at who's requiring what, what's an interface and what's required to use it. If there's an endpoint that I can hit, but someone has a better, more polished UI that users prefer, let the markets decide.
My favorite pre-LLM thing in this area is Flighty. It's a flight tracking app that takes available data and presents it in the best possible way. Another one is that EU border visa residency app that came through here a couple of months ago.
Standards for interchange formats have now become paramount.
API access is another point on which things hinge.
I deeply hate the people that use AI to poison the music, video or articles that I consume. However I really feel that it can possibly make software cheaper.
A couple of years ago, I worked for an agency as a dev. I had a chat with one of the sales people, and he said clients asked him why custom apps were so expensive, when the hardware had gotten relatively cheap. He had a much harder time selling mobile apps.
Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.
>Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.
There has been no shortage of mobile apps, Apple frequently boasts that there are over 2 million of them in the App Store.
I have little doubt there will be more, whether any of the extra will be decent remains to be seen.
AI is trained on the stuff already written. Software has been taking a nosedive for ages (e.g., committing to shipping something in 6 months before one even figures out what to put in it). If anything, shit will get worse due to the deskilling being caused by AI.
This is just the outsourcing argument all over again. Maybe the degree of difference matters this time?
> You get AI that can make you like 90% of a thing! 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.
Based on the Adobe stock price, the market thinks AI slop software will be good enough for about 20% of Adobe users (or Adobe will need to make its software 20% cheaper, or most likely somewhere in between).
Interestingly Workday, which is possibly slightly simpler software more easily replicable using coding agents, is about the same (down 26%).
The bear case for Workday is not that it gets replicated as slop, but that its “user base” becomes dominated by agents.
Agents don’t care about any of Workday’s value-adds: Customizable workflows, “intuitive” experiences, a decent mobile app. Agents are happy to write SQL against a few boring databases.
>What if the future of computing belongs not to artisan developers or Carol from Accounting, but to whoever can churn out the most software the fastest? What if good enough really is good enough for most people?
Sounds like the cost of everything goes down. Instead of subscription apps, we have free F-Droid apps. Instead of only the 0.1% commissioning art, all of humanity gets to commission art.
And when we do pay for things, instead of an app doing 1 feature well, we have apps do 10 features well with integration. (I am living this, instead of shipping software with 1 core feature, I can do 1 core feature and 6 different options for free, no change order needed)
The future you describe seems closer to the "Carol from Accounting" future I am hoping for in the blog post. My worry is that the cost of everything goes down just enough to price out of existence all of the artists the 0.1% used to commission, without actually letting all of humanity do the same.
The slop is sad but a mild irritation at most.
It's the societal level impact of recent advances that I'd call "terrifying". There is a non-zero chance we end up with a "useless" class that can't compete against AI & machines - like at all, on any metric. And there doesn't seem to be much of a game plan for dealing with that without the social fabric tearing.
Some of us have a perfectly good game plan for that. It's called Universal Basic Income.
It's just that many powerful people have a vested interest in keeping the rest of us poor, miserable, and desperate, and so do everything they can to fight the idea that anything can ever be done to improve the lot of the poor without destroying the economy. Despite ample empirical evidence to the contrary.
> It's called Universal Basic Income.
I'd rather we democratize ownership [1]. Instead of taxing the owning class and being paid UBI peanuts, how about becoming the owning class and reaping the rewards directly?
We can (and should) provide for those among us who aren't able to provide for themselves, without also firing everyone in the welfare department. UBI is shit. People need to do something in order to receive money, even if the something is begging on the side of the freeway or going into the welfare office to claim benefits. Magic money from the sky is not the answer.
I agree with you about magic money. Frequently downvoted when I put it forward but by and large I think that the human psyche needs to have a daily sense of having "accomplished something".
Otherwise I suspect many of us will (reluctantly) drift off into lives that center around drinking alcohol, playing video games…
The Butlerian Jihad has to happen. Destroy the datacenters and give the oligarchs the French treatment!
Meh. Slop is not the danger, because in software, quantity of lines of code does not have a quality of its own. Or if it does, it is not a good quality. And bad software costs money. The problem with Temu for the West is not that the things sold there are bad. The real problem arose in the last 2-3 years when they became good.
I use AI/LLMs hard for my programming.
They allow me to do work I could never have done before.
But there’s no chance at all of an LLM one shotting anything that I aim to build.
Every single step in the process is an intensely human grind trying to understand the LLM and coax it to make the thing I have in mind.
The people who are panicking aren’t using this stuff in depth. If they were, then they would have no anxiety at all.
If only the LLM was smart enough to write the software. I wish it could. It can’t, nor even close.
As for web browsers built in a few hours: no. No LLM is coming anywhere near building a web browser in a few hours. Unless you're talking about some super simple, super minimal toy with some of the surface appearance of a web browser.
This has been my experience. I tend to use chats, in a synchronous, single-threaded manner, as opposed to agents, in an asynchronous way. That’s because I think of the LLM as a “know-it-all smartass personal assistant”; not an “employee replacement.”
I just enjoy writing my own software. If I have a tool that will help me to lubricate the tight bits, I’ll use it.
Same. I hit Tab a lot because even though the system doesn't actually understand what it's doing, it's really good at following patterns. Takes off the mental load of checking syntax.
Occasionally of course it's way off, in which case I have to tell it to stfu ("snooze").
Also it's great at presenting someone else's knowledge, as it doesn't actually know facts - just what token should come after a sequence of others. The other day I just pasted an error message from a system that I wasn't familiar with and it explained in detail what the problem was and how to solve it - brilliant, just what I wanted.
> The other day I just pasted an error message from a system that I wasn't familiar with and it explained in detail what the problem was and how to solve it
That’s probably the single most valuable aspect, for me.
I'm less afraid of people using LLMs for coding well than I am of people not caring to and just shipping slop.
This is the browser engine I was alluding to in the post: https://github.com/wilsonzlin/fastrender
Our paper on removing AI slop got accepted to ICLR 2026, and it's under consideration for an Ig Nobel Prize:
https://arxiv.org/abs/2510.15061
Our definition of slop (repetitive characteristic language from LLMs) is the original one as articulated by the LLM creative writing community circa 2022-2023. Folks trying to redefine it today to mean "lazy LLM outputs I don't like" should have chosen a different word.
I was disappointed that your paper devoted less than a sentence in the introduction to qualifying "slop" before spending many pages quantifying it.
The definitions you're operating under are mentioned thus:
> characteristic repetitive phraseology, termed “slop,” which degrades output quality and makes AI-generated text immediately recognizable. (abstract)
> ... some patterns occur over 1000× more frequently in LLM text than in human writing, leading to the perception of repetition and over-use – i.e. "slop". (introduction)
And that's ... it, I think. No further effort is visible towards a definition of the term, nor do the background citations propose one that I could see (I'll admit to skimming them, though I did read most of your paper--if I missed something, let me know).
That might be suitable as an operating definition of "slop" to explain the techniques in your paper, but neither your paper nor any of your citations defend it as the common definition of an established term. Your paper's not making an incorrect claim per se, rather, it's taking your definition of "slop" for granted without evidence.
That doesn't bode well for the rigor of the rest of the paper.
Like, look: I get that this is an extremely fraught and important/popular area of research, and that your approach has "antislop" in the name. That's all great; I hope your approach is beneficial--truly. But you aren't claiming a definition of slop in your paper; you're just assuming one. Then you're coming here and asserting a definition citing "the LLM creative writing community circa 2022-2023" and asserting redefinition-after-the-fact, both of which are extraordinary claims that require evidence.
Again, not only do I think that mis-definition is untrue, I also think that you're not actually defining "slop" (the irony of my emphasizing that in a not-just-x-but-y sentence is not lost on me).
I don't know which of the authors you are, but Ravid, at least, should know better: this is not how you establish terminology in academic writing, nor how you defend it.
Slop is food scraps fed to pigs. Folks trying to redefine it in 2022–2023 as "repetitive characteristic language from LLMs" should have chosen a different word.
A computer is a person employed to do arithmetic.
Sloppy joe is either a food item or a slur against the previous Democratic president. Checkmate.
Words expand meanings all the time and frankly I don't think your narrow definition of slop was ever a common one.