Horses: AI progress is steady. Human equivalence is sudden
andyljones.com
572 points by pbui 2 days ago
I may have developed some kind of paranoia reading HN recently, but the AI atmosphere is absolutely nuts to me. Have you ever thought that you would see a chart showing how the population of horses was decimated by the mass introduction of efficient engines, accompanied by an implication that there is a parallel to the human population? And the article is not written with any kind of cautionary humanitarian approach, but rather from the perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from a purely economic perspective? And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around?". And the guy writing this works at Anthropic? The very guy who makes this thing happen, but who is only able to conclude with "I very much hope we'll get the two decades that horses did". What the hell.
I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity and so many of its outputs. I see it in the writing of leaders within VC firms and AI companies, but I also see it in ordinary conversations on the Caltrain or in coffee shops.
Friendship, love, sex, art, even faith and childrearing are opportunities for substitution with AI. Ask an AI to create a joke for you at a party. Ask an AI to write a heartfelt letter to somebody you respect. Have an AI make a digital likeness of your grandmother so you can spend time with her forever. Have an AI tell you what you should say to your child when they are sad.
Hell. Hell on earth.
If you want another data point: most people I know, both in Japan and Canada, use some sort of AI as a replacement for any kind of query. Almost nobody in my circles is in tech or tech-adjacent.
So yeah, it’s just everyone collectively devaluing human interaction.
A more sophisticated replacement for search engines seems like one of the more positive use cases for chatbots to me.
What am I missing?
I love AI, but I'm exasperated by the extent to which my fiancee uses Claude instead of search, and for everything...
Why? Google search kinda sucks. And I find it helpful to provide context that I can't otherwise provide to any standard search engine.
Because the responses are often distilled down from the same garbage Google serves up, but presented as the opinion of Claude, whom she increasingly trusts.
I use Claude a lot. I have the most expensive Claude Max subscription both for my own consultancy and at client sites, separately. I'm increasingly close to an AI maximalist on many issues, so I'm not at all against extensive use of these models.
But it isn't quick to verify things of its own accord before giving answers, which makes it unsuitable as a general-purpose replacement for Google unless you specifically prompt it to search.
Google search results: a dozen sponsored links; a dozen links to videos (which I never use -- I'd rather read than watch); six or seven pages with gamed SEO; if you're lucky, what you actually want is far down near the end of the first page, or perhaps at the top of the second page; the other 700 pages of links are ... whatever. Repeat four or five times with variously tweaked queries, hoping that what you actually want will percolate up into the first or second page.
Claude: "Provide me links to <precise description of what you actually want". Result: 4 or 5 directly relevant links, most of which are useful, and it happens on the first query.
Claude is dramatically more efficient than Google Search.
> Claude: "Provide me links to <precise description of what you actually want". Result: 4 or 5 directly relevant links, most of which are useful, and it happens on the first query.
Which, as I pointed out, is not the point, as you're advocating exactly the kind of prompting I said wouldn't be a problem. It's not how she uses it.
> unless you specifically prompt it to search.
Ah, that's a good call-out. I don't use Claude aside from in Cursor; I use ChatGPT for normal queries and it's pretty good about doing searches when it doesn't think it knows the answer. Of course it'll search when prompted, but it'll often search without prompting too. I just mistakenly assumed that your fiancée's usage of Claude implied Claude was actually searching as well.
Google search sucks now because it's been targeted by the spammers and content farms. Before that happened it was pretty good. LLMs will eventually be poisoned the same way, whether by humans or other LLMs.
Garbage in, garbage out. Plus, chatbots will be monetized, which means they will show you things their ad partners want you to see vs. what you actually want.
Frankly, I've found even free ChatGPT to be more useful when looking for something - I'd describe what I'm looking for, what the must-have features are, what I definitely don't mean, etc., and it'll suggest a few things. This has rarely led to not finding what I'm looking for. It's absolutely superior to Google search these days for things that have been around a while. I wouldn't check the news with it.
> I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity [...]
Who do they think will make their ventures profitable? Who do they think will take their dollars and provide goods and services in exchange?
If automation reaches the point where 99% of humans add no value to the "owners" then the "owners" will own nothing.
> If automation reaches the point where 99% of humans add no value to the "owners" then the "owners" will own nothing.
I don't think that's right. The owners will still own everything. If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."
There's an old Isaac Asimov book with something similar: https://en.wikipedia.org/wiki/Foundation_universe#Solaria (though accomplished more peacefully and with less pain than I think is realistic).
What would they get from the plebs? Suppose we went through The Phools and so the plebs were exterminated, then what? Perhaps we'd finally have Star Trek economics, but only for them, the "owners". Better be an "owner", then.
That could be an incredibly dark Star Trek prequel/parody.
That the Earth of Star Trek is populated by only the descendants of the capital holders. Everyone else was directly or indirectly exterminated.
Then they created an egalitarian tech utopia for themselves.
> What would they get from the plebs?
I think the right question is: what would they want from the plebs? And the answer will be nothing.
Right now they want the plebs' labor, which is why things work the way they do.
> The Phools and so the plebs were exterminated, then what?
Is that a reference to this? https://press.princeton.edu/books/hardcover/9780691168319/ph...
> Perhaps we'd finally have Star Trek economics, but only for them, the "owners". Better be an "owner", then.
I don't think we'll have Star Trek economics, because that would be fundamentally fair and egalitarian and plentiful. There will still be resource constraints like energy production and raw materials. I think it will be more like B2B economics, international trade, with a small number of relevant owners each controlling vast amounts of resources and productive capacity and occasionally trading basics amongst themselves. It could also end up like empires-at-war (which actually may be more likely, since war would give the owners something seemingly important to do, vs just building monuments to themselves and other types of jerking off).
The Phools was a reference to this: https://www.newyorker.com/magazine/1981/10/12/phools
Consider being a significant shareholder in the future as analogous to citizenship as it exists today. Non-owners will be personae non gratae, if they're allowed to live at all.
> If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."
I think you might be a little behind on economic news, because that's already happening. And it's also rapidly reshaping business models and strategic thinking. The forces of capitalism are happily writing the lower and middle classes out of the narrative.
https://www.wsj.com/livecoverage/stock-market-today-dow-sp50...
>> If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."
> I think you might be a little behind on economic news, because that's already happening. And it's also rapidly reshaping business models and strategic thinking. The forces of capitalism are happily writing the lower and middle classes out of the narrative.
No, that doesn't surprise me at all. I'm basically just applying the logic of capitalism and automation to a new technology, and the same thing has played out a thousand times before. The only difference with AI is that, unlike previous, more limited automation, it's likely there will be no roles for displaced workers to move into (just like when engines got good enough there were no roles for horses to move into).
It's important to remember that capitalism isn't about providing for people. It's about providing for people with wealth to exchange. That works OK when you have full employment and wealth gets spread around by paying workers, but if most jobs disappear due to automation there's no mechanism to spread wealth to the vast majority of people, so under capitalism they'll eventually die of want.
See also: Citigroup's plutonomy thesis[1] from 2006
tldr: the formal economy will shift to serving plutocrats instead of consumers, it's much more profitable to do so and there are diminishing returns serving the latter
[1] https://www.sourcewatch.org/images/b/bc/CITIGROUP-MARCH-5-20...
Those nerds can now develop an AI robot to make love to their wives while they get back to blogging about accelerationism with all the time they freed up.
Making predictions on how it will turn out vs. designing how it should be. Up till now, powerful people needed lots and lots of other humans to sustain their power and life. That dependency gave the masses leverage. Now I'd like a society where everyone is valued for being human and stuff. With democracies we got quite far in that direction. Attempts to go even further... let's just say they "didn't work out". And right now, especially in the US, the societal system seems to be going back to "power" instead of rules.
Yeah, I see a bleak future ahead. Guess that's life, after all.
> didn't work out
In the "learn to love democracy and freedom" sense, sure, but in the economic sense? "Didn't work out" feels like a talking point stuck in 1991. Time has passed, China is the #2 economy in the world, #1 if you pick a metric that emphasizes material or looks to the future. How did they get there? By paying the private owners of our economy to sell our manufacturing base to them piece by piece -- which the private owners were both entitled and incentivized to do by the fundamental principles of capitalism. The ending hasn't been written, but it smells like the lead-up to a reversal in fortune.
As for our internal balance of power, we've been here before, and the timeline conveniently lines up to almost exactly 100 years ago. I'm hoping for another Roosevelt. It wasn't easy then, it won't be easy now, but I do think it's fundamentally possible.
This is the direct result of abandoning religion altogether and becoming a 100% secular society.
I am currently reading the Great Books of the Western World in order to maybe somehow find god somewhere in there, at least in a way that can be woven into my atheist-grown brain, and even after just one year of reading and learning, I can feel the merits.
Accepting Science as our new and only religion was a grave mistake.
Why exactly do you need a deity to tell you to love your fellow man? Do you need god in your life to want to love your children? I think this is not quite right. I don't expect that the desire to create these tools, independent of outcome, in the valley is simply about greed; for companies like Anthropic, it's also the ability to use AGI fear as a means to drive investment in themselves from a VC class that lives for the idea of obliterating human labor. We need less money in tech - we'll probably get it soon enough.
> why exactly do you need a deity to tell you to love your fellow man?
Because that is not a given, as shown by the entirety of human history. Without God, the only arguments for love, or for what is right, are just what people think/feel/agree on at a certain time and place, which has a lot of variation and is definitely not universal.
> Do you need god in your life to want to love your children?
Most people don't need God to love their children, and the ones that don't love them might not be convinced otherwise by God.
That said, what do you do exactly for that love? Do you cheat and steal to guarantee their future over others? If not because of some "benefit to society" logical argument that would convince no-one, why would one even care about that and not exploit society for their own benefit?
Almost everyone loves themselves and their family above all others. Only God can tell you to love your neighbors and even your enemies.
There are still many societies around the world where most people are mostly self centered, and you can see the results. You are taking for granted many values you have, as if you arrived at them logically and independently instead of learning them from your parents and a society that derived them from God for centuries.
Doesn't that only shift the question to what God wants you to do and in turn who interprets God's will?
Said another way, how would you conclude with any certainty that you are indeed following God's will with any action you take?
Are we completely ignoring the tonnes of awful things people have done in the name of their god? Belief in a higher power doesn't automatically make you good/bad. The same is true of the inverse.
>Without God, the only arguments for love, or for what is right, are just what people think/feel/agree on at a certain time and place, which has a lot of variation and is definitely not universal.
Let's ignore that laws exist for a second... Does god say everybody in Manhattan should reserve the left side of the escalators for people walking up them, and the right should be left for people just standing and escalating? No, but somehow a majority of the population figured it out. Society still has rules, both spoken and unspoken, whether god is in the picture or not.
If you are serious about these questions, read Dominion by Tom Holland. He makes a very long and thorough historical case that Christianity has contributed more good than bad over the centuries. (I don’t know what comparable works are for other religions.)
Just an empirical observation.
Decoupled from the social systems built by organized religion, our “elites” are taking society to a horrific place.
Could you build up traditions and social structures over time without any deity that would withstand the hedonism and nihilism driving modern culture? Perhaps. But it would require time measured in generations we don’t have.
Religion is toxic
The societies in the 20th century that banned it completely turned out to be even more fucked up.
Writing off an entire facet of life as toxic is toxic.
Anything taken to extreme can be harmful, but some of the most grounded and successful (as in, living well) people I know are those with a self-aware religious foundation to lean on. People may bring up examples of religious cults as a reason to discard all religion, but surely the same could be said for the many secular cults. We shouldn't throw out the baby with the bathwater, as they say.
Not at all.
You can have morality without religion. Religion arguably makes it worse too.
At the very least, it takes generations to build up shared traditions and values across a society. If you want an atheistic version of that, you would need to start now, and it's going to take a long time to build.
We don't need religion for that; humanism exists as a way of living, for instance.
I don't think (most) people treat science "as a religion".
Some tech leaders seem to have swapped Ayn Rand (who, if you look at the early days, definitely acted like a cult leader) for this AI doomer cult, and as a result seem to be acting terribly.
Religion was much more widespread in the 1800s, but that didn't stop industrialists acting terribly.
I don't think the theory holds water at all.
Humanism doesn’t have the kinds of social systems and traditions that people need to have shared values and morality.
I can't say I'm shocked. Disappointed, maybe, but it's hardly surprising to see the sociopathic nature in the people fighting tooth and nail for the validation of venture capitalists who will not be happy until they own every single cent on earth.
There are good people everywhere, but being good and ethical stands in the way of making money, so most of the good people lose out in the end.
AI is the perfect technology for those who see people as complaining cogs in an economic machine. The current AI bubble is the first major advancement where these people go mask off; when people unapologetically started trying to replace basic art and culture with "efficient" machines, people started noticing.
"Hell on Earth" - I don't think there is a more succinct or accurate way to describe the current environment.
I think, like the Bill Gates haters who interpret him talking about reducing the rate of birth in Africa as wanting to kill Africans, you're interpreting it wrong.
The graph says horse ownership per person. People probably stopped buying horses, they let theirs retire (well, to be honest, probably also sent to the glue factory), and when they stopped buying new horses, horse breeding programs slowed down.
I wish the author had had the courage of their convictions to extend the analogy all the way to the glue factory. It’s what we are all thinking.
I have a modest proposal for dealing with future unemployment.
There are too many people in power right now who I wouldn't put it past to take that proposal seriously.
Sending all the useless horses to glue factories in that time was so prevalent it was a cartoon trope. The other trope being men living in flop houses, and towns having entire sections for unemployable people called skid row.
The AI people point to post-1950s style employment and say 'people recovered after industrial advance' and ignore the 1880s through the 1940s. We actually have zero idea if the buggy whip manufacturer ever recovered, or just lasted a year on skid row before giving up completely, or lived through the two world wars spurred by mechanisation.
Horses were killed more often for meat that was used in dog food than for glue.
I did a deep research into the decline of horses and it was consistent with fewer births, not mass slaughter. The US Department of Agriculture has great records during this time, though they’re not fully digitized.
I don’t think you’re realizing that the OP understands this, and that in this analogy, the horses are human beings
In this analogy, horses are jobs, not humans; you could argue there's not much of a difference between the two, because people without jobs will starve, etc., but still, they're not the same.
Why make the analogy at all if not for the implied slaughter? It is a visceral reminder of our own brutal history. Of what humans do given the right set of circumstances.
One would argue in a capitalist society like ours, fucking with someone's job at industrial scale isn't awfully dissimilar from threatening their life, it's just less direct. Plenty more people currently are feeling the effects of worsening job markets than have been involved in a hostage situation, but the negative end results are still the same.
One would argue also if you don't see this, it's because you'd prefer not to.
If we had at least a somewhat functioning safety net, or UBI, or both, you'd at least have an argument to be made, but we don't. AI and its associated companies' business model is, if not killing people, certainly attempting to make lots of lives worse at scale. I wouldn't work for one for all the money in the world.
UBI will not save you from economic irrelevance. The only difference between you and someone starving in a 3rd world slum is economic opportunity and the means to exchange what you have for what someone else needs. UBI is inflation in a wig and dark glasses.
There is, at least, a way to avoid people without jobs starving. Whether or not we'll do it is anyone's guess. I think I'll live to see UBI, but I am perhaps an optimist.
You'd have to time something like UBI with us actually being able to replace the workforce -- the current LLM parlor tricks are simply not what they're sold to be, and if we rely on them too early we (humanity) are very much screwed.
population projections already predict that prosperity reduces population
and even if AI becomes good enough to replace most humans the economic surplus does not disappear
it's a coordination problem
in many places on Earth social safety nets are pretty robust, and if AI helps to reduce cost of providing basic services then it won't be a problem to expand those safety nets
...
there's already a pretty serious anti-inequality (or at least anti-billionaire) storm brewing; the question is whether it can motivate the necessary structural changes or will just fuel yet another dumb populist movement
I think the concerns with UBI are (1) it takes away the leverage of a labor force to organize and strike for better benefits or economic conditions, and (2) following the block grant model, can be a trojan horse "benefit" that sets the stage for effectively deleting systems of welfare support that have been historically resilient due to institutional support and being strongly identified with specific constituencies. When the benefit is abstracted away from a constituency it's easier to chop over time.
I don't exactly know how I feel about those, but I respect those criticisms. I think the grand synthesis is that UBI exists on top of existing safety nets.
Point (2) seems wrong intuitively. "Chopping" away UBI would be much more difficult _because_ it is not associated to a specific constituency.
Not only would there be more people on the streets protesting against real or perceived cuts;
there also would be fewer movements based on exclusivist ideologies protesting _in favour of cuts_*
* e.g. racist groups in favour of cutting some kinds of welfare because of racial associations
In practice there are a few strong local unions (NY teachers, ILA (eastern longshoremen)), but in general it doesn't help those who are not employed. (Also, when was the last general strike that achieved something ... other than getting general strikes outlawed?)
... also, one pretty practical problem with UBI is that cost of living varies wildly. And if it depends on location then people would register in a high-CoL place and live in a low-CoL place. (Which is what remote work already should be doing, but many companies are resistant to change.)
In theory it makes sense to have easy to administer targeted interventions, because then there's a lot of data (and "touch points" - ie. interaction with the people who actually get some benefit), so it's possible to do proper cost-benefit analyses.
Of course this doesn't work because allocation is overpoliticized; people want all kinds of means-testing and other hoops for people to jump through. (Like the classic requirement to prove you still have a disability, where even people with Type I diabetes have to get a fucking paper every few years.)
So when it comes to any kind of safety net it should be as automatic as possible, but at least as targeted as negative income tax. UBI might fit depending on one's definition.
Somebody should try a smart populist movement instead. My least favorite thing about my favored (or rather least disfavored) party is that we seem to believe “we must win without appealing to the populace too directly, that would simply be uncouth.”
One could argue that the quality of life per horse went up, even if the total number of horses went down. Lots more horses now get raised in farms and are trained to participate in events like dressage and other equestrian sports.
Someone said during the hype of "self-driving cars is the future!" that ICE/driver-driven cars will go the way of the horse: they'll be well cared for, kept in stables, and taken out on the weekends for recreation, on circuits but not on public roads.
Imagine it now, your future descendants existing solely to be part of some rich kid's harem.
'now instead of being work animals a few of you will be kept like pets by the tech bros'
> Bill Gates haters who interpret him talking about reducing the rate of birth in Africa
I'm not up to speed here -- is Bill Gates doing work to reduce the birth rates in Africa?
For example, interview from 2018: https://www.youtube.com/watch?v=0MMifQvuN08
When the Covid-truther geniuses "figured out" that "Bill Gates was behind Covid", they pulled out things like this as "proof" that his master plan is to reduce the world's population. Not to reduce the rate of increase, but to kill them (because of course these geniuses don't understand derivatives)...
Ah, got it. This sounds like more of a "repugnant conclusion" sort of problem where if you care about the well being of people who exist, then it is possible to have too large of a population.
We don't know what the author had in mind, but one has to really be tone deaf to let the weirdness of the discussion go unnoticed. Take a look at the last paragraphs in the text again:
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
While most of the text is written from a cold economic(ish) standpoint, it is really hard not to get a bleak impression from it. And the last three sentences express that in a vague way too. Some ambiguity is left on purpose so you can interpret the daunting impression your own way.
The article presents you with crushing juxtaposition, implicates insane dangers, and leaves you with the feeling of inevitability. Then back to work, I guess.
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
Horses typically live between 25 and 30 years. I agree with OP that most likely those horses were not decimated (killed) but just died out as people stopped mass breeding them. Also, as others noticed, the chart shows 'horses PER person in the US'. Population between 1900 and 1950 increased from 1.5B to 2.5B globally (probably a similar increase of almost 70% in the US).
I think it depends on what you worry about:
1) `That the human population decreases by 50-80%`?
I don't worry about it even if that happens. 200 years ago the human population was ~1B; today it is ~8B. In year 0 AD the human population was ~0.25B. Did we, 200 years ago, worry about it like "omg the human population is only 1B"?
I doubt the human population will decrease by 80% because of no demand for humans as a workforce, but I don't see a problem if it decreases by 50%. There will be a short transition period with a surplus of retired people and work needed to keep the infrastructure running, but if robots can help with this then I don't see the problem.
2) `That we will not be needed and we will lose jobs?`
I don't see work as something in demand. Most people hate their jobs or do crappy jobs. What people actually worry about is that they won't get any income. And actually not even that - they worry that they will not be able to survive or will be homeless. If production improves to the point that food, shelter, transportation, and healthcare are dirt cheap (all the stuff from the bottom of Maslow's pyramid), and there is fair distribution at the social level, then I also see a way this can be no problem.
3) `That we will all die because of AI`
This I find more plausible, and maybe not even because of AGI but earlier, because of big social unrest during the transition period.
As someone who raises horses and other animals, I can say with pretty high certainty that most of the horses were not allowed to "retire". Horses are expensive and time-consuming to care for, and with no practical use, most horses would have been sent not to the glue factory but (at that time) to the butcher and their non-meat parts used for fertilizer.
Yeah, I agree with what you said. It's not about the absolute number of people, but the social unrest. If you look at how poor a job we have done at redistribution of wealth so far, I find it hard to believe that we will do well in the future. I am afraid of mass pauperisation and immiseration of societies followed by violence.
What's more important - "redistribution of wealth" or simply reducing the percentage of people living in abject poverty? And wouldn't you agree that by that measure, most of the world, including its largest countries, has done quite a good job?
https://www.un.org/en/global-issues/ending-poverty
From 1990 to 2014, the world made remarkable progress in reducing extreme poverty, with over one billion people moving out of that condition. The global poverty rate decreased by an average of 1.1 percentage points each year, from 37.8 percent to 11.2 percent in 2014.
I think the phrase "fair distribution on social level" is doing a lot of work in this comment. Do you consider this to be a common occurrence, or something our existing social structures do competently?
I see quite the opposite, and have very little hope that reduced reliance on labor will increase the equitability of the distribution of wealth.
It probably depends on the society you start out with, eg a high trust culture like Finland will probably fare better here.
Doesn't matter. The countries with the most chaos and internal strife get a lot of practice fighting wars (civil wars). Then the winner of the civil war, who's used to grabbing resources by force and who has perfected war skills due to survival of the fittest, goes round looking for other countries to invade.
Historically, advanced civilizations with better production capabilities don't necessarily do better in war if they lack "practice". Sad but true. Maybe not in 21st century, but who knows.
Yeah none of that fever dream is real. There's no "after" a civil war, conflicts persist for decades (Iraq, Afghanistan, Syria, Myanmar, Colombia, Sudan).
Check this out - https://data.worldhappiness.report/chart. The US is increasingly a miserable place to live in, and the worse it gets, the more its people double down on being shitty.
Fun fact: fit 2 lines on that data and you can extrapolate that by ~2030 China will be a better place to live. That's really not that far off. Set a reminder on your phone: Chinese dream.
Um, yes you understood the article’s argument completely.
We are truly and profoundly fucked.
Well, in this case corporations stop buying people and just fire them instead of letting them retire. Or an army of Tesla Optimi will send people to the glue factory.
That, at least, is the fantasy of these people. Fortunately, LLMs don't really work, Tesla cars are still built by KUKA robots (while KUKA has a fraction of Tesla's P/E), and data centers in space are a cocaine-fueled dream.
> And the article is not written with any kind of cautionary humanitarian approach, but rather from the perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from a purely economic perspective?
One of the many terrible things about software engineers is their tendency to think and speak as if they were some kind of aloof galaxy-brain, passively observing humanity from afar. I think that's at least partially the result of 1) identifying as an "intelligent person" and 2) computers and the internet allowing them to become in large part disconnected from the rest of humanity. I think they see that aloofness as being a "more intelligent" way to engage with the world, so they do it to act out their "intelligence."
I always thought intentionally applying an emotional distance was a strategy to help us see what's really happening, since allowing emotions to creep in causes us to reach conclusions we want (motivated reasoning) instead of conclusions that reflect reality. I find it a valuable way to think. Then there's always the fact that the people who control the world have no emotional attachment to you either. They see you as something closer to a horse than their kin. I imagine a healthy dose of self-dehumanization actually helps us understand the current trajectory of our future. And people tend to vastly overvalue our "humanity" anyway. I'm guessing the ones that displaced horses didn't give much of a fuck about what happened to horses.
I wish I knew what you were so I could say "one of the many terrible things about __" about you. Anyway, I think you have an unhealthy emotional attachment to your emotions.
> I wish I knew what you were so I could say "one of the many terrible things about __" about you.
I'm a software engineer, so I beat you to it.
> I always thought intentionally applying an emotional distance was a strategy to help us see what's really happening, since allowing emotions to creep in causes us to reach conclusions we want (motivated reasoning) instead of conclusions that reflect reality. I find it a valuable way to think.
And the problem is taking that too far, and doing it too much. It's a tactic "to help us see what's really happening," but it's wrong to stop there and forget things like values, interests, and morality.
> And people tend to vastly overvalue our "humanity" anyway.
WTF, man.
> I'm guessing the ones that displaced horses didn't give much of a fuck about what happened to horses.
Who cares what "the ones that displaced horses" thought? You're the horse in that scenario,and the horse cares. Another obnoxious software engineer problem is taking the wrong, often self-negating, perspective.
Yes, the robber who killed you to steal your stuff probably didn't mind you died. So I guess everything's good, then? No.
> Anyway, I think you have an unhealthy emotional attachment to your emotions.
Emotions aren't bad, they're healthy. But a rejection of them is probably a core screwed-up belief that leads to "aloof galaxy-brain, passively observing humanity from afar" syndrome.
There's probably a parallel to the kind of obliviousness that gets you the behavior in the Torment Nexus meme ("Tech Company: 'At long last, we have created the Torment Nexus from the classic sci-fi novel Don't Create The Torment Nexus.'"), i.e. "Software Engineer: 'At long last, I've purged myself of emotion and become perfectly logical like Lt. Cmdr. Data from the classic sci-fi Logical Robot Data Wants to Be Human and Feel Emotions.'"
This strikes me more as being in the tone of Orwell, who used a muted emotional register to elicit a powerful emotional response from the reader as they realize the horror of what's happening.
> Have you ever thought that you would see a chart showing [...]
Yes, actually, because this has been a deep vein of writing for the past 100 or more years. There's The Phools, by Stanislav Lem. There are the novels written by Boris Johnson's father that are all about depopulation. There's Aldous Huxley's Brave New World. How about Logan's Run? There has been so much writing about the automation/technology apocalypse for humans in the past 100 years that it's hard to catalog it -- much of what I have read or seen go by in that vein I've totally forgotten.
It's not remotely a surprise to see this amp up with AI.
Yeah, I am familiar with these works of art and probably most people are. However, they were mostly speculative. Now we are facing some of their premises in the real world. And the guys who push the technology in a reckless way seem to notice this, but just nod their heads and carry on.
At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
Works of art, works of predictive programming, life imitating art -- what's the difference, if in the end the artistic predictions come true?
People have been thinking apocalyptic thoughts like these since.. at least Malthus's An Essay on the Principle of Population (1798). That's 227 years if you're keeping score. Probably longer; Malthus might only have been the first to write them down and publish them.
> Have you ever thought that you would see a chart showing how the population of horses was decimated by the mass introduction of efficient engines, accompanied by an implication that there is a parallel to the human population?
Yes, here's a youtube classic that put forth the same argument over a decade ago, originally titled "Humans need not apply": https://youtu.be/7Pq-S557XQU
Oh, _now_ computer industry people are worried? Kind of late to the party.
Computerization, automation and robotics, document digitization, the telecoms and wireless revolution, etc. have been upending peoples' employment on a massive scale since before the 1970s. The reaction of the technologists has been a rather insensitive "adapt or die", "go and retrain", and analogies to buggy whip manufacturers when the automobile became popular. The only reason people here suddenly give a hoot is because they think the crosshairs are drifting towards them.
Look at the current political environment.
In the US at least, there is a Congress incapable of taking action and a unilateral President fully on the side of tech CEOs with the heaviest investments in AI.
There is no evidence supporting short term optimism. Every indication the large corporations dictating public policy will treat us exactly like those horses when it comes to economic value.
> the AI atmosphere is absolutely nuts to me
It reminds me of "You maniacs! You blew it up! Goddamn you all to hell!" from the original Planet of the Apes (1968), https://youtu.be/mDLS12_a-fk?t=71
Quite ironically, the scene features a horse.
You can kind of separate the technical side of what will likely happen - AI gets smarter and can do the jobs - from how we deal with that. It could be heaven-like, with abundance and no one needing to work, or a post-apocalyptic dystopia, or, likely, somewhere in the middle.
We collectively have a lot of choice on the "how we deal with it" part. I'm personally optimistic that people will vote in people-friendly policies when it comes to it.
Not seeing any horse heavens, do you have reason to believe humans (i.e. those not among the ruling class) are going to have a different fate from the horses?
I agree we can kinda make the argument that abundance is soon upon us, and humanity as a whole embraces the ideas of equality and harmony etc etc... but still there's a kinda uncanny dissociation if you're happily talking about horses disappearing and humans being next while you work on the product that directly causes your prediction to come true and happen earlier...
We are in control (for now). The horses were not. The whole alignment debate is basically about keeping us in control.
My experience so far has been that the knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
In this instance, in particular, I wouldn't expect our preferences to bear any relevance.
> knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
I don't know if you are intentionally being vague and existential here. However, context matters, and "the predictive power is zero" sounds unreasonable in the face of history.
I'm thinking of humans learning that diseases were affecting us, which led to solutions like antibiotics and vaccines. It was not guaranteed, but I'm skeptical of the predictive power being zero.
I took the article as meaning that white collar tech jobs will go away, so those people will need to pivot their careers - not that humans will.
However, it does seem like time for humanity to collectively think hard about our values and goals, and what type of world and lives we want to have in an age where human thought, and perhaps even human physical labor are economically worthless. Unfortunately this could not come at a worse time with humanity seemingly experiencing a widespread rejection of ideals like ethics, human rights, and integrity and embracing fascism and ruthless blind financial self interest as if they were high minded ideals.
Ironically, I think tech people could learn a lot here from groups like the Amish - they have clearly decided what their values and goals are, and ruthlessly make tech serve them, instead of the other way around. Despite stereotypes, the Amish are often actually heavy users of, and competent with, modern tech in service of making a living, but in a way that enforces firm boundaries about not letting the tech usurp their values and chosen way of life.
The implication is very clearly about “killing” jobs, not killing people.
But what happens when the people without jobs can’t buy food and starve to death?
Incentives rule everything.
For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.
Today, the stock market and material wealth dominates. If elite dominance of the means of production requires the immiseration of most of the public, that's what we'll get.
> For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.
That's almost 100% backwards. The Republic expanded. The Empire, not so much.
GP appears to be using "empire" as in "imperalistic" instead of as in "emperor".
> Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from a purely economic perspective?
Not sure if by accident or not, but that's what we are according to today's "tech elite".
Therefore, the most profitable disposition for this dubious form of capital is to convert them into biodiesel, which can help power the Muni buses
https://www.goodreads.com/work/quotes/55660903-patchwork-a-p...
I think we have a bunch of people in the United States who see what we elected for leadership and the choices he made about who advises him, and they have given up all hope. That despondent attitude is infusing their opinions on everything. But chin up: he's really old, and he doesn't seem very healthy, or he'd be out there leading the charge, throwing those rallies every weekend of which he used to be so fond.
And low information business leaders will attempt to do all the awful things described here and the free market will eliminate them from the game grid one horrible boss at a time. But if you surround yourself with the AI doomers and bubblers, how will you ever encounter or even consider positive uses of the technology? What an awful place to work Anthropic must be if they truly believe they are working on the metaphorical equivalent of the Alpha Omega bomb. Spoilers: they're not.
Meanwhile, in the rest of the world, many look forward to harnessing AI to ameliorate hunger, take care of the elderly, and perform the more dangerous and tedious jobs out there. Anthropic guy needs to go get a room with Eliezer Yudkowsky. I guess the US is about to get horsed by the other 96% of the planet.
Go ahead, compare me to a horse, a gasoline engine, or even call me a meatbag. Have we become little more than Eloi snowflakes to be so offended by that?
But I guess as long as an electoral majority here continues to cheer on one man draining the juice of this country down to a bitter husk, the fun and games will continue.
Minor nit:
Machines to “take care of the elderly” is one of the worst possible uses of this technology. We desperately need more human interaction between the old and the young, not less.
> But chin up, he's really old, and he doesn't seem very healthy or he'd be out there leading the charge throwing those rallies every weekend of which he used to be so fond.
At this point in time, his whimsy is the only thing holding back younger, more extreme acolytes from doing what they want. Once he's gone, lol.
Evidently they want free buses and groceries. Which, given the end of human employees, aren't the worst priorities in the world.
A bunch of charisma 3 acolytes. Only a select few get to be Zaphod Beeblebrox, swirlies for everyone else who tries...
Yes. Follow in the path of the tech leaders. They are optimists. They totally aren't building doomsday bunkers or trying to build their data centers with their own nuclear power plants to remove them from society and create self contained systems. Oh wait. Crap...
American tech leaders are just as bad, leading the charge straight into the abyss. But if you close your mind to the rest of the world, I can see why you'd see a 0/1 choice here. That's all the corporate media and influencers write these days, all the way from Paul Krugman to Cory Doctorow. And let's not even get started on the Three Men and an ASIC house of AI circle jerkers.
I mean if you're the sort that thinks Greta Thunberg and Eliezer Yudkowsky are agents of the Antichrist, it's long overdue to touch grass. And I don't think he believes that, but I think he thought people were stupid enough to buy it so he ran with it. Can't blame him for trying!
But given the right's hatred of renewables and the left thinks nuclear power plants can explode like atomic bombs, I'd be pushing for gas and nuclear to power my data centers too.
TLDR: you're being fed a false narrative that this is a 0/1 choice, but I guess it will take the rest of the world to demonstrate that, not the US.
American culture actively punishes compassion, then gaslights you about it.
https://www.census.gov/library/visualizations/interactive/te... Look at all the professions on the bottom right: Teachers, therapists, clergy, social workers, etc. It’s not a coincidence that cruel people take top positions.
Money isn't the only thing a job provides. Those are all professions that provide a sense of meaning, so monetary compensation doesn't need to be as high to attract and keep people.
Yeah that’s part of the gaslighting. It takes 2 seconds to realize it’s wrong, but people parrot it like Fox News talking points.
Every profession attracts people who enjoy it, eg lawyers tend to enjoy adversarial debate. Lots of countries don’t treat their teachers like shit. It’s a choice.
Yeah I'm sure that felt good to say but it's fucking bullshit. I'm not a software dev because it's my calling, I'm a software dev because I'm pretty sure I wouldn't make any money doing anything else. I only briefly considered teaching as a profession (knowing I'd be functionally poor as a result) despite being quite sure I'd enjoy it (I tutored and instructed all through high school and college) because the desire to make money combined with the current state of schools ultimately won out. Other people who are more passionate about teaching and/or don't have the same skills as me would have gone the other way.
There will always be fucking teachers, pilots, etc, because people WANT to be those things, and it's gonna take more than "nuh uh, Fox News" to dislodge that belief.
That said, I didn't say it isn't a public policy choice, I'm saying the "passion factor" is the reason they are ABLE to offer these jobs at low wages.
It's been a decade or so, but I'm mostly called a "resource" at work, as in Human Resource. Barely a colleague, comrade, co-worker... just a resource, a plug in the machine that needs to be replaced by an external resource to improve profit margins.
Is it good when number of human go up? Is bad when go down?
That depends on what you think jobs and the economy are for, generally.
If you think the purpose of the economy is for the economy to be good then it doesn't matter. If you think it exists to serve humanity then... You really wouldn't need to ask the question, I imagine.
The number of humans doesn't exactly serve humanity though either. Both of those variables are mostly irrelevant to actual human happiness and flourishing. In fact as far as I can tell they are actively harmful, for several different reasons.
I would say that it is bad when it has a large derivative (positive or negative). However, the problem is not about the >number of human beings< but about making the agency that existing people have obsolete.
We're making slave morality obsolete. The sense of self dependent entirely on external performance. Reality is making it so that we will have very little value in that domain very soon. Because everything we do now will be done better faster cheaper by the machines.
That's going to be a pretty rough transition. I think the economic aspects will be pretty straightforward by comparison to the psychological upheaval!
It's bad if it goes down by more than about 1.2% per year. That would mean zero births, present-day natural deaths. Of course zero births isn't presently realistic, and we should expect the next 10-30 years to significantly increase human lifespan. If we assume continued births at the lowest rates seen anywhere on the planet, and humans just maxing out the present human lifespan limit, then anything more than about a 0.5% decrease means someone is getting murked.
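(A rough back-of-the-envelope sketch of that arithmetic in Python; the 83-year lifespan and 0.7% crude birth rate below are placeholder assumptions of mine, not figures from the comment above:)
    # in a steady-state population, natural deaths remove roughly 1/lifespan of it per year
    lifespan_years = 83                              # assumed near-max average lifespan
    natural_death_rate = 1 / lifespan_years          # ~1.2% per year with zero births
    lowest_birth_rate = 0.007                        # assumed ~0.7% crude birth rate, roughly the lowest seen anywhere
    natural_decline_floor = natural_death_rate - lowest_birth_rate   # ~0.5% per year
    print(f"zero-birth decline:  {natural_death_rate:.1%} per year")
    print(f"low-birth decline:   {natural_decline_floor:.1%} per year")
So under those assumed numbers, anything steeper than roughly those rates can't be explained by natural deaths alone, which is the comment's point.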
> And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around?"
It shines through that the most fervent AI Believers are also Haters of Humans.
> I may have developed some kind of paranoia reading HN recently
My comments that get downvoted, pretty rare lately, were about legitimate but never-discussed points about AI that I validated IRL. The way AI is discussed on HN doesn't resonate at all with what I see IRL, to the point that I can't rule out more or less subtle manipulation of the discussions.
I wouldn't read too much into it. Anytime I post something silly and stupid, it becomes the top comment. Anytime I post something important, I get downvotes. That's just normal. I think that's just human nature...
And the votes are pretty random too. Sometimes it'll go from -5 to +10 in the span of a few hours. Just depends on who's online at the time...
And yet don't they pull on our heartstrings? Isn't that funny? A random number generator for the soul...
I don't think that bots have taken over HN. I meant that the frontier of the tech research brags about their recklessness here and the rest of us have become bystanders to this process. Gives me goosebumps.
Honestly I can't tell if your incredulity is at the method of analysis for being tragically mistaken or superficial in some way, at the seemingly dehumanizing comparison of beloved human demonstrations of skill (chess, writing) to lowest common denominator labor, or the tone of passive indifference to computers taking over everything.
I think the comparisons are useful enough as metaphors, though I wonder about the analysis, because it sounds as if someone took a Yudkowsky idea and talked about it like a human, which might make a bad assumption go down more smoothly than it should. But I don't know.
It isn't just AI. So much of the US "Tech"/VC scene is doing outright evil stuff, with seemingly zero regard for any consequence or even a shred of self awareness.
So much money is spent on developing gambling, social media, crypto (fraud and crime enabler) and surveillance software. All of these are making people's lives worse, these companies aren't even shy about it. They want to track you, they want you to spend as much time as possible on their products, they want to make you addicted to gambling.
Just by how large these segments are, many of the people developing that software must be posting here, but I have never seen any actual reflection on it.
Sure, I guess developing software making people addicted to gambling pays the bills (and more than that), but I haven't seen even that. These industries just exist and people seem to work for them as if it was just a normal job, with zero moral implications.
I'd like to note here that the lifespan of a horse is 25-30 years. They were phased out not with mass horse genocide, but likely in the same way we phase out Toyota Corollas that have gotten too old. Owners simply didn't buy a new horse when the old one wore out, but bought an automobile instead.
Economically it is no different from the demand for Mitsubishis decreasing, except the vehicle in this case eats grass, poops, and feels pain.
If you want to analogize with humans, a gradual reduction in breeding (which is happening anyways with or without AI) is probably a stronger analogy than a Skynet extinction scenario.
Truth is this is no different than the societal trends that were introduced with industrialization, simply accelerated on a massive scale.
The threshold for getting wealth through education is bumping up against our natural human breeding timeline, delaying childbirth past natural optimal human fertility ages in the developed world. The amount of education needed to achieve certain types of wealth will move into the decades, causing even more strain on fertility metrics. Some people will decide to have more kids and live purely off whatever limited welfare the oligarchs in charge decide is acceptable. Others will delay having children far past natural human fertility timespans or forgo having children at all.
If we look at it this way, a reduction in human population would be contingent on whether you think human beings exist and are bred for the purposes of labor.
I believe most people would agree with me that the answer is NO.
The analogy to horses here then is not individuals, but specific types of jobs.
it's a con job and strawman take. if we collectively think token generators can replace humans completely, well then we've already lost the plot as a global society
Honestly, the answer for me is yes. I had expected it. The signs were in all the comments that take the market forces for granted. All the comments that take capitalism as a given and immutable law of nature. They were in all the tech bros that never ever wanted to change anything but the number of zeros in their bank account after a successful exit. So yes, I had that thought you are finally having too.
Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. Besides the fact that just more compute and context size aren’t the right kind of progress. LLMs aren’t coming for your job any more than computer vision is, for a lot of reasons, but I’ll list two more:
1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
> The only reason to reduce headcount is to remove people who already weren’t providing much value.
I wish corporations really acted this rationally.
At least where I live, hospitals fired most secretaries and assistants to doctors a long time ago. The end result? High-paid doctors spending a significant portion of their time on administrative and bureaucratic tasks that were previously handled by those secretaries, preventing them from seeing as many patients as they otherwise would. Cost savings may look good on a spreadsheet, but really the overall efficiency of the system suffered.
That's what I see when companies cut juniors as well. AI cannot replace a junior because a junior has full and complete agency, accountability, and purpose. They retain learning and become a sharper bespoke resource for the business as time goes on. The PM tells them what to do and I give them guidance.
If you take away the juniors, you are now asking your seniors to do that work instead which is more expensive and wasteful. The PM cannot tell the AI junior what to do for they don't know how. Then you say, hey we also want you to babysit the LLM to increase productivity, well I can't leave a task with the LLM and come back to it tomorrow. Now I am wasting two types of time.
> well I can't leave a task with the LLM and come back to it tomorrow
You could actually just do that, leave an agent on a problem you would give a junior, go back on your main task and whenever you feel like it check the agent's work.
Everything I’ve read about experiments where they’ve tried this suggests massive failures. The AIs always get stuck and can’t make further progress at some point when given the full responsibilities of a human employee.
It lacks the ability to self correct and do all the adjacent tasks like client comms etc. So if I come back to it in the afternoon I may have wasted a day in business terms, because I will need to try again tomorrow. What do I tell the client, sorry the LLM failed the simple task so we will have to try again tomorrow? Worse, lie and say sorry this 2 hour task could not be achieved by our developers today. Either way we look incompetent (because realistically, we were not competent, relying on a tool that fails frequently)
I'm sorry but I'm not familiar with the context you mention, have not worked in a job where I had to communicate with clients and I find it hard to imagine a job where a junior would have to communicate with a client on a 2 hour task. Why would you want a junior to be the public face of your company?
I'm a full-stack developer. Recently I've found that almost 90% of my work deadlines have been brought forward, and the bosses' scheduling has become stricter. The coworker who is particularly good at pair programming with AI prefers to reduce his/her scheduling (kind of unconsciously). Work is sudden, but salary remains steady. What a bummer.
But wouldn't these spreadsheets be tracking something like total revenue? If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes?
I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office: they have tons of assistants and hygienists, and the dentist just goes from room to room performing high-dollar procedures and very little "patient care." If small dentist offices have this all figured out, it seems a little strange that a massive hospital does not.
First of all, it's not unlikely that the dentist is the owner. And in any case, when you have a small system of less than 150 people, it's easy enough for a handful of people to see what's actually going on.
Once you get to something in the thousands or tens of thousands, you just have spreadsheets; and anything that doesn't show up in that spreadsheet might as well not exist. Furthermore, you have competing business units, each of which want to externalize their costs to other business units.
Very similar to what GP described -- when I was in a small start-up, we had an admin assistant who did most of the receipt entry and what-not for our expense reports; and we were allowed to tell the company travel agent our travel constraints and have them give us options for flights. When we were acquired by a larger company, we had to do our own expense reports and do our own flight searches. That was almost certainly a false economy.
And then when we became a major conglomerate, at some point they merged a bunch of IT functions; so the folks in California would make a change and go home, and those of us in Europe or the UK would come in to find all the networks broken, with no way to fix it until the people in California started coming in at 4pm.
In all cases, the dollars saved are clearly visible in the spreadsheet, while the "development velocity" lost is noisy, diffuse, and hard to quantify or pin down to any particular cause.
I suppose one way to quantify that would be to have the Engineering function track time spent doing admin work and charge that to the Finance function; and time spent idle due to IT outages and charge that to the IT department. But that has its own pitfalls, no doubt.
Problem with this analogy is that software development != revenue. The developers and IT are a cost center. So yea in a huge org one of the goals is to reduce costs (admin) spent on supporting a cost center.
Doctors generate revenue directly and it can all be traced, so even an extra 20 minutes out of their day doing admin stuff instead of one more patient or procedure is easily noticeable, and affects revenue directly.
You mean, there's a 1-1 correlation between the amount of pointless admin a doctor has to do and the number of patients he sees (and thus the revenue of the clinic). It should be visible on the spreadsheet. Whereas, there's not a 1-1 correlation between the pointless admin a software engineer has to do and the number of paying customers a company gets.
But then, why do large orgs try to "save costs" by having doctors do admin work? Somehow the wrong numbers get onto the spreadsheet. Size of the organization -- distance between the person looking at the spreadsheet and the reality of people doing the work -- likely plays a big part in that.
Probably because dentists are more cash based and less battling with insurance for payments.
Customers are more price sensitive so the dentists have to be too.
> If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes?
I am going to assume that the Doctors are just working longer hours and/or aren't as attentive as they could be and so care quality declines but revenue doesn't. Overworking existing staff in order to make up for less staff is a tried and true play.
> I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not.
By conflating 'Doctors' and 'Dentists' you are basically comparing 'all Doctors' with 'Doctors of a certain specialty'. Dentists are 'Doctors for teeth' the way a pediatrician is a 'Doctor for children' or an Ortho is a 'Doctor for bones'.
Teeth need maintenance, which is the time consuming part of most visits, and the Dentist has staff to do that part of it. That in itself makes the specialty not really that comparable to a lot of others.
I feel like that's how you get Microsoft where each division has a gun pointed at the other division
The type of doctor doesn't really matter: spending all their time on revenue-generating activities would seem to be better than spending only 75% generating revenue and 25% on "administrative and bureaucratic tasks" that don't generate revenue and which could be accomplished by a much lower-paid employee ("secretaries and assistants").
Perhaps you're correct that the doctors are simply working much longer hours but that's one group of employees among a hospital's staff who do generally have a lot of power and aren't too easy to make extraordinary demands of.
There are reasons why the claim might be right, as noted by others, and there are reasons why the claim may not be right, as noted by you. If you think that your idea of how Doctors operate in a hospital is more compelling than other people's explanations of why the claim is legitimate, then keep believing that.
Funny the original post doesn’t mention AI replacing the coding part of his job.
There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
I want to be optimistic. But it’s hard to ignore what I’m doing and seeing. As far as I can tell, we haven’t hit serious unemployment yet because of momentum and slow adoption.
I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.
>There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).
The most interesting questions are the ones that assume human equivalency.
Suppose an AI can produce like a human.
Are you ok with merging that code without human review?
Are you ok with having a codebase that is effectively a black box?
Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?
Are you ok with being dependent on the company providing this code generation?
Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?
Will we be ok if the well of public technical discussion LLMs are feeding from dries up?
Those are the interesting debates I think.
> Are you ok with having a codebase that is effectively a black box?
When was the last time you looked at the machine code your compiler was giving you? For me, doing embedded development on an architecture without a mature compiler, the answer is last Friday, but I expect that the vast majority of readers here never look at their machine code. We have abstraction layers that we've come to trust because they work in practice. To do our work we're dependent on the companies that develop our compilers, where we can at least see the output, but also on the companies that make our CPUs, which we couldn't debug without a huge amount of specialized equipment. So I expect that mostly people will be ok with it.
>When was the last time you looked at the machine code your compiler was giving you?
You could rephrase that as “when was the last time your compiler didn’t work as expected?”. Never in my whole career in my case. Can we expect that level of reliability?
I’m not making the argument of “the LLM is not good enough”; that would bring us back to the boring discussion of “maybe it will be”.
The thing is that human language is ambiguous and subject to interpretation, so I think we will have occasionally wrong output even with perfect LLMs. That makes black-box behavior dangerous.
We certainly can't expect that with LLMs now but neither could compiler users back in the 1970s. I do agree that we probably won't ever have them generating code without more back and forth where the LLM complains that its instructions were ambiguous and then testing afterwards.
I don't think it really matters if you or I or regular people are ok with it, if the people with power are. There doesn't seem to be much any of us regular folks can do to stop it, especially as AI eliminates more and more jobs, thus further reducing the economic power of everyday people.
I disagree. There are personal decisions to make:
Do you bet on keeping your technical skills sharpened, or stop and focus on product work and AI usage?
Do you work for companies that go full AI or try to find one that stays “manual”?
What advice do you offer as a technical lead when asked?
Leadership ignoring technical advice is nothing new, but there is still value in figuring out those questions.
> What advice do you offer as a technical lead when asked
Learn to shoot a gun and grow your own food, that's my advice as a technical lead right now
Have you ever double-checked (in human fashion, not just using another calculator) the output from a calculator?
When calculators were first introduced I'm sure some people such as scientists and accountants did exactly that. Calculators were new, people likely had to be slowly convinced that these magic devices could be totally accurate.
But you and I were born well after the invention of calculators, our entire lives nobody has doubted that even a $2 calculator can immediately determine the square root of an 8-digit number and be totally accurate. So nobody verifies, and also, a lot of people can't do basic math.
I predict by March 2026, AI will be better at writing doomer articles about humans being replaced than top human experts.
Well, I would just say to take into account the fact that we're starting to see LLMs be responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would be price gouging if everyone weren't doing it.
Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be.
Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it.
I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.
If anything the quality has gotten worse, because the models are now so good at lying when they don’t know it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it’ll either be right or lying, it never says “I don’t know”.
Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”.
The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield ”write some code that does x”, much faster without LLMs
> Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect
Fully agree; in fact, this literally happened to me a week ago -- ChatGPT was confidently incorrect about its simple lock structure for my multithreaded C++ program, and wrote paragraphs upon paragraphs about how it works, until I pressed it twice about a (real) possibility of some operations deadlocking, and then it folded.
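For concreteness, the kind of thing I mean is the classic lock-ordering bug; this is a minimal hypothetical sketch (not my actual code), where the two threads acquire the same pair of mutexes in opposite order:

    #include <mutex>
    #include <thread>

    std::mutex a, b;

    void worker1() {
        std::lock_guard<std::mutex> lock_a(a);  // takes a first...
        std::lock_guard<std::mutex> lock_b(b);  // ...then b
        // do some work under both locks
    }

    void worker2() {
        std::lock_guard<std::mutex> lock_b(b);  // takes b first...
        std::lock_guard<std::mutex> lock_a(a);  // ...then a: opposite order
        // do some work under both locks
    }

    int main() {
        // If worker1 grabs a while worker2 grabs b, each waits forever
        // on the other's mutex and the program hangs.
        std::thread t1(worker1), t2(worker2);
        t1.join();
        t2.join();
    }

A consistent lock order, or std::scoped_lock(a, b) in both threads, avoids this; the point is that the model will assert paragraphs of confidence about code like this either way.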
> Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.
As a university assistant professor trying to keep up with AI while doing research/teaching as before, this also happens to me, and I am dismayed by it. I am certain there are models out there that can solve IMO problems and generate research-grade papers, but the ones I can get easy access to as a customer routinely mess up stuff, including:
* Adding extra simplifications to a given combinatorial optimization problem, so that its dynamic programming approach works.
* Claiming some inequality is true but upon reflection it derived A >= B from A <= C and C <= B.
(This is all ChatGPT 5, thinking mode.)
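To spell out the second one in the same notation (my restatement of the step, not the model's verbatim output): from the two premises, transitivity only gives

    A <= C and C <= B  =>  A <= B

so concluding A >= B from them is unsupported; it would only hold in the degenerate case A = B.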
You could fairly counterclaim that I need to get more funding (tough), or invest much more of my time and energy to get access to models closer to what Terence Tao and other top people trying to apply AI in CS theory are currently using. But at least the models cheap enough for me to access as a private person are not on par with what the same companies claim to achieve.
I agree that the current models are far from perfect. But I am curious how you see the future. Do you really think/feel they will stop here?
I mean, I'm just some guy, but in my mind:
- They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as it was 3 years ago or, as I said above, worse
- It's clearly possible to solve this, since we humans exist and our brains don't have this problem
There's then two possible paths: Either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect about the human brains configuration that they've yet to replicate. Or the hallucinations will go away with better and more training.
The latter seems to be the bet everyone is making; that's why there's all these data centers being built, right? So the bet is that larger-scale training will solve the problem, and that there's enough training data, silicon, and electricity on earth to perform training at that "scale".
There are 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly mutating state and memory: short-term through RNA and protein presence or lack thereof, long-term through chromatin formation enabling and disabling its own DNA over time, and in theory also permanent through DNA rewriting via TEs. Each one has a vast array of input modes - direct electrical stimulation, chemical signalling through a wide array of signaling molecules, and electrical field effects from adjacent cells.
Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating points describing a network topology.
The complexity of the neural networks that run our minds is spectacularly higher than the simulated neural networks we're training on silicon.
That's my personal bet. I think the 86B interconnected stateful microcontrollers are so much more capable than the 1T static floating points, and the 1T static floating points are already nearly impossibly expensive to run. So I'm bearish, but of course, I don't actually know. We will see. For now all I can conclude is that the frontier model developers lie incessantly in every press release, just like their LLMs.
The complexity of actual biological neural networks became clear to me when I learned about the different types of neurons.
https://en.wikipedia.org/wiki/Neural_oscillation
There are clock neurons, ADC neurons that transform the analog intensity of a signal into counts of digital spikes, neurons that integrate signals over time, neurons that synchronize with each other, etc. Transformer models have none of this.
Thanks, that's a reasonable argument. Some critique: based on this argument it is very surprising that LLMs work so well, or at all. The fact that even small LLMs do something suggests that the human substrate is quite inefficient for thinking. Compared to LLMs, it seems to me that 1. some humans are more aware of what they know; 2. humans have very tight feedback loops to regulate and correct. So I imagine we do not need much more scaling, just slightly better AI architectures. I guess we will see how it goes.
idk man, I work at a big consultant company and all I'm hearing is dozens of people coming out of their project teams like, "yeah, I'm dying to work with AI; all we're doing is talking about it with clients"
It's like everyone knows it is super cool, but nobody has really cracked the code for what its economic value truly, truly is yet
> There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
Any sources on that? Except for some big tech companies I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves...
This is what I also see. AI is used sparingly. Mostly for information lookup and autocomplete. It's just not good enough for other things. I could use it to write code if I really babysit it and triple check everything it does? Cool cool, maybe sometime later.
Who does the typical code-sweatshop work of churning out one smallish app at a time and quickly moving on? Certainly not your typical company-hired permanent dev; they (we) drown in tons of complex legacy code that has kept working for the past 10-20 years and that the company sees no reason to throw away.
For the folks who do churn out such apps, it's great & horrible long term. For folks like me, development is maybe 10% of my work, and by far the best part - creative, problem-solving, stimulating, actually learning myself. Why would I want to mildly optimize that 10% and lose all the good stuff, when speed wouldn't even visibly improve?
To really improve speed in bigger orgs, the change would have to happen in processes, office politics, management priorities and so on. No help of llms there, if anything trend-chasing managers just introduce more chaos with negative consequences.
> The only reason to reduce headcount is to remove people who already weren’t providing much value.
There were many secretaries up until the late 20th century that took dictation, either writing notes of what they were told or from a recording, then they typed it out and distributed memos. At first, there were many people typing, then later mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for the manual copying, then email reduced the need to print something out, and now instant messaging reduces email clutter and keep messages shorter.
All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While they may not have held these people in high esteem, they were critical for getting things done and scaling.
I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.
The thing that replaced the old memos is not email, it's meetings. It's not uncommon to have meetings with hundreds of participants covering what in the past would have been a simple memo.
It would be amazing if LLMs could replace the role that meetings have in communication, but somehow I strongly doubt that will happen. It is a fun idea to have my AI talk with your AI so no one needs to actually communicate, but the result is more likely to create barriers to communication than to help it.
The crucial observation is the fact that automation has historically been a net creator of jobs, not destroyer.
Sure, if you're content to stack shelves.
AI isn't automation. It's thinking. It automates the brain out of human jobs.
You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place.
> Sure, if you're content to stack shelves.
Why this example? One of the things automation has done is reduce and replace stevedores, the shipping equivalent of stacking shelves.
Amazon warehouses are heavily automated, almost self-stacking-shelves. At least, according to the various videos I see, I've not actually worked there myself. Yet. There's time.
> AI isn't automation. It's thinking. It automates the brain out of human jobs. You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place.
Right up until the AI is good enough to control the robot that can do that job. Which may or may not be humanoid. (Plus side: look how long it's taking for self-driving cars, how often people think a personal anecdote of "works for me" is a valid response to "doesn't work for me").
Even before the AI gets that good, a nice boring remote-control android doing whatever manual labour could outsource the "controller" position to a human anywhere on the planet. Mental image: all the unemployed Americans protesting outside Tesla's factories when they realise the Optimus robots within are controlled remotely from people in 3rd world countries getting paid $5/day.
Yes, AI is automation. It automates the implementation. It doesn't (yet?) automate the hard parts around figuring out what work needs to be done and how to do it.
The sad thing is that for many software devs, the implementation is the fun bit.
Except it isn't thinking. It is applying a model of statistical likelihood. The real issue is that it's been sold as thinking, and laypeople believe that it's thinking, so it is very likely that jobs will be eliminated before it's feasible to replace them.
People that actually care about the quality of their output are a dying breed, and that death is being accelerated by this machine that produces somewhat plausible-looking output, because we're optimizing around "plausible-looking" and not "correct"
That observation is only useful if you can point at a capability that humans have that we haven't automated.
Hunter-Gatherers were replaced by the technology of Agriculture. Humans still are needed to provide the power to plow the earth and reap the crops.
Human power was replaced by work animals pulling plows, but only humans can make decisions about when to harvest.
Jump forward a good long time,
Computers can run algorithms to indicate when best to harvest. Humans are still uniquely flexible and creative in their ability to deal with unanticipated issues.
AI is intended to make "flexible and creative" no longer a bastion of human uniqueness. What's left? The only obvious one I can think of is accountability: as long as computers aren't seen as people, you need someone to be responsible for the fully automated farm.
'Because thing X happened in past it is guaranteed to happen in the future and we should bet society on it instead of trying to you know, plan for the future. Magic jobs will just appear, trust me'
> At first, there were many people typing, then later [...]
There were more people typing than ever before? Look around you, we're all typing all day long.
I think they meant that there was a time when people’s jobs were:
1. either reading notes in shorthand, or reading something from a sheet that was already fully typed using a typewriter, or listening to recorded or live dictation
2. then typing that content out into a typewriter.
People were essentially human copying machines.
This is a very insightful take. People forget that there is competition between corporations and nations that drives an arms race. The humans at risk of job displacement are the ones who lack the skill and experience to oversee the robots. But if one company/nation has a workforce that is effectively 1000x, then the next company/nation needs to compete. The companies/countries that retire their humans and try to automate everything will be out-competed by companies/countries that use humans and robots together to maximum effect.
Overseeing robots is a time-limited activity. Even building robots has a finite horizon.
Current tech can't yet replace everything but many jobs already see the horizon or are at sunset.
The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement.
This time around, some components of progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale. But others aren't, because the tech is run by small teams at large scale and has virtually no related industries it depends on, the way, say, cars do. It's energy and GPUs.
Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale business. Maybe a few tens of millions can be employed there?
Meanwhile I just don't see the "designer + AI" job role materializing. I see corpos using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs.
> because at the end of the day people can use this tech to create wealth at unprecedented scale
_Where?_ So far the only widespread thing to have come out of this is a chatbot interface shoved into every UI that never needed it.
Nothing has been improved, no revelatory tech has come out (tools to let you chatbot faster don’t count).
Honestly, this comment sounds like someone dismissing the internet in 1992 when the web was all text-based and CompuServe was leading-edge. No "revelatory tech" just yet, but it was right around the corner.
In the backend, not directly customer-facing. Coca-Cola is two years into running AI ads. Lovable is cash positive, and many of the builders there are too. A few creators are earning a living with Suno songs. Not millions, mind, but they can live off their AI works.
If you don't see it happening around you, you're just not looking.
So, a company cutting costs, a tool to let you chatbot faster, and musical slop at scale.
This doesn't sound like "creating wealth at unprecedented scale"
I think you’ve missed the point. Cars replaced horses - it wasn’t cars+horses that won. Computers replaced humans as the best chess players, not computers with human oversight. If successful, the end state is full automation because it’s strictly superhuman and scales way more easily.
> Computers replaced humans as the best chess players, not computers with human oversight.
Oh? I sat down for a game of chess against a computer and it never showed up. I was certain it didn't show up because computers are unable to without human oversight, but tell me why I'm wrong.
Apparently human chess grandmasters also need “oversight” from airplanes, because without those, essentially none of them would show up at elite tournaments.
Things like trains, boats, and cars exist. Human chess grandmasters can show up to elite tournaments, and perform while there, without airplanes. Computer chess systems, on the other hand, cannot do anything without human oversight.
> Things like trains, boats, and cars exist. Human chess grandmasters can show up to elite tournaments, and perform while there, without airplanes.
Those modes of transport are all equivalent to planes for the point being made.
I (not that I'm even as good as "mediocre" at chess) cannot legally get from my current location to the USA without some other human being involved. This is because I'm not an American and would need my entry to be OKed by the humans managing the border.
I also doubt that I would be able to construct a vessel capable of crossing the Atlantic safely, possibly not even a small river. I don't even know enough to enumerate how hard that would be; I would need help making a list. Even if I knew all that I needed to, it would be much harder to do it from raw materials rather than buying pre-cut timber, steel, cloth (for a sail), etc. Even if I did it that way, I can't generate cloth fibres and wood from my body like plants do. Even if I did extrude and secrete raw materials, plants photosynthesise and I eat; living things don't spontaneously generate these products from their souls.
For arguments like this, consider the AI the way you consider Stephen Hawking: a lack of motor skills isn't relevant to the rest of what they can do.
When AI gets good enough to control the robots needed to automate everything from mining the raw materials all the way up to making more robots to mine the raw materials, then not only are all jobs obsolete, we're also half a human lifetime away from a Dyson swarm.
> Those modes of transport are all equivalent to planes for the point being made.
The point is that even those things require oversight from humans. Everything humans do requires oversight from humans. How you missed it, nobody knows.
Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything.
Sad state of affairs when not even the HN crowd understands such basic concepts about computing anymore. I guess that's what happens when one comes to tech by way of "Learn to code" movements promising a good job instead of by way of having an interest in technology.
> Everything humans do requires oversight from humans. How you missed it, nobody knows.
'cause you said:
Computer chess systems, on the other hand, cannot do anything without human oversight.
The words "on the other hand" draw a contrast, suggesting that the subject of the sentence before it ("chess grandmasters") is different with regard to the task ("show up to elite tournaments"), and thus can manage without the stated limitation ("anything without human oversight").
> Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything.
OK, and? Nobody's claiming "today" is that day. Even Musk, despite his implausible denials regarding Optimus being remote controlled, isn't claiming that today is that day.
The message you replied to was this: https://news.ycombinator.com/item?id=46201604
The chess-playing example there used an existing case of software beating humans in a specific domain to demonstrate that human oversight is not a long-term solution (you can tell by the use of the words "end state"), and even then it was hypothetical (due to "if"), as in:
If successful, the end state is full automation
There was a period where a chess AI that was in fact playing a game of chess could beat any human opponent, and yet would still lose to the combination of a human-AI team. This era has ended and now the humans just hold back the AI; we don't add anything (beyond switching it on).
Furthermore, there's nothing at all that says that an insufficiently competent AI won't wipe us out:
And as we can already observe, there's clearly nothing stopping real humans from using insufficiently competent AI due to some combination of being lazy and/or the vendors over-promising what can be delivered.
Also, since the peak of the Cold War we've been in a situation where the automation we have can trigger WW3 and kill 90% of the human population, despite the fact that the very same automation would be immediately destroyed along with it, with near-misses on both US and USSR systems. Human oversight stopped it, but like I said, we can already observe lazy humans deferring to AI, so how long will that remain true?
And it doesn't even need to be that dramatic; never mind global defence stuff, just consider correlated risks. All the companies outsourcing all their decisions to the same models, even when the models' creators win a Nobel prize for creating them, is essentially the story of the Black–Scholes formula and its involvement in the 2008 financial crisis. Sure, that didn't kill us all, but it's an illustration of the failure mode rather than of the consequences.