I am disappointed in the AI discourse
steveklabnik.com | 65 points by steveklabnik | 10 months ago
The offered discourse on AI is “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life.”
That doesn’t exactly leave a lot of room for people to feel the need to be involved in a discourse about it. For one thing, most people aren’t workaholics looking for extra hobby time.
The author mentions that ChatGPT can search the web. Okay, calling a search engine and retrieving a result has been possible for a while. LLM companies just slapped a statistical-response layer on top as the UI.
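To make the "search plus statistical response on top" claim concrete, here is a minimal sketch of that retrieve-then-generate pattern. Everything in it is a stand-in for illustration (`search_web` and `generate_answer` are hypothetical, not any vendor's real API): a search step fetches snippets, and a generation step pastes them into the reply.

```python
# Hypothetical sketch of the pattern described above: a search step
# retrieves results, and a "statistical response" layer is stitched
# on top as the UI. Both functions are stand-ins, not a real API.

def search_web(query: str) -> list[str]:
    # Stand-in for a search-engine API call; returns ranked snippets.
    fake_index = {
        "llm web search": [
            "The model calls a search API and conditions its reply on the results.",
            "The LLM itself does not crawl the web; a separate tool does.",
        ],
    }
    return fake_index.get(query.lower(), [])

def generate_answer(question: str, snippets: list[str]) -> str:
    # Stand-in for the LLM call: in practice the snippets are pasted
    # into the prompt and the model writes a fluent summary of them.
    context = " ".join(snippets)
    return f"{question} -> (answer conditioned on: {context})"

answer = generate_answer(
    "How does ChatGPT search the web?",
    search_web("llm web search"),
)
print(answer)
```

The point of the sketch is that the retrieval step is ordinary plumbing that predates LLMs; only the final summarization step is new.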
Maybe the discourse sucks because the reality of it sucks?
> "Eventually we won’t even need you and you can go get a hobby to spend the rest of your life."
I think the problem is that the statement is more like:
"Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."
> "Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."
You reminded me of an interesting article from 10 years ago, so I went ahead and re-posted it: https://news.ycombinator.com/item?id=44119705
Yes but the lower classes dying in a ditch is considered a hobby by the upper class.
Ahh yeah, the alternative. AI can free you from your job so you can be hunted for sport!
If anyone wants to know exactly how that would be achieved, just look at how Google does "support" right now. No need to predict the future.
Google's "support" is a robot that sends passive aggressive mocking emails to those who were screwed over by another robot that made up reasons to lock them out of their digital lives [1]. It allows Google to save a ton of money while evading accountability.
It's the same thing with the latest overhyped robots. It won't even matter whether or not it's actually competent at the thing it's supposed to do. It will replace people regardless.
> I think the problem is that the statement is more like:
> "Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."
Exactly. If...
>> “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life.”
...was even remotely true, we'd have already had that outcome, before AI.
How about the one some of us believe:
“Isn’t it cool that I don’t have to write boilerplate and can prototype quickly? My job isn’t replaced, because coding is not my job; it’s solving domain-specific problems.”
I’m in my late 40s, have written code for three decades (yes, I started in my teens), and have always known that the code was never the point. Code is a means of solving a problem, mostly unrelated to computers (unless you work on pure software tooling).
This is why I chose not to study computer science. I studied something else and kept coding. I’ve always felt that CS as a field is oversubscribed because of the $$$ dangled by big tech.
So many fields are computational these days, and the key is to apply coding to them. For instance, a PhD in pure biology gets you nowhere on its own, so many biologists are now computational biologists or statisticians. Same with computational chemists, etc.
For most of my career I’ve written code, but in service of solving a real world physical problem (sensor based monitoring, logistics, mathematical modeling, optimization).
And we must all enjoy using LLMs because they help us code? Even if you enjoy coding? What if coding helps me relax? Relaxing isn’t my job.
I didn’t have to write boilerplate before LLMs, thanks to code scaffolding tools like create-express-app.
literally no one is telling you to do anything.
you can just chill out and have whatever hobbies you want, using whatever tools you want.
> My job isn’t replaced because coding is not my job — it’s solving domain specific problems?
I would wonder why you are so complacent as to think next year's models won't be able to solve domain-specific problems. Look at how bad text generation, code generation, audio generation, and image generation were five years ago versus how capable they are today. Video generation wasn't even conceivable then.
As an equally middle-aged person with children I'm less worried about myself than the next generation. What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?
The economy only works because people consume goods and services. If they can't do that, then capital can't make any money. So whatever the case is, capital needs to make sure people retain the ability to consume.
This is the same conversation that happens decade after decade.
I agree with you, but no one listened back then; why would they ever think about listening now?
Capital formation comes first, before everything else, not the other way around. When you have nothing of value to trade, it simply can't happen, and inevitable hyperinflation/deflation cycles begin which, once started, can't be stopped.
These people think survival is guaranteed and jobs are guaranteed; the how doesn't matter. It happens because some politician says it does; reality doesn't matter.
That's the line and level of thinking we are dealing with here. How do you convince someone that if they do something, they and their children may die as a consequence, when they can't make that connection themselves?
Communication ceases being possible in a noisy medium past a certain point, according to Shannon. I'm pretty sure we've crossed that point, and where we may previously have been able to discern and separate the garbage, through mimicry it's now all but impossible.
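For context on the Shannon reference: this is the idea of channel capacity, where the maximum rate of reliable communication falls toward zero as noise swamps the signal. A minimal statement, in the Shannon–Hartley form:

```latex
% Shannon–Hartley channel capacity:
%   C = maximum reliable bit rate (bits/s)
%   B = channel bandwidth (Hz)
%   S/N = signal-to-noise ratio
C = B \log_2\!\left(1 + \frac{S}{N}\right)
% As S/N \to 0 (noise dominates), C \to 0: reliable communication
% becomes impossible, which is the analogy being drawn above.
```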
Intelligent people don't waste their efforts on lost causes. People make their own decisions, and they and their children will pay the consequences of those choices, even if they didn't realize that was the choice they were making at the time.
> I agree with you, but no one listened back then; why would they ever think about listening now?
Because we lead vastly better lives today than 100 years ago, when everyone was also raging about technology stealing jobs. The economy has to adapt to technology changes; there is no other way. It is a self-healing system. If technology removes a lot of jobs, then new jobs are created. It has to be this way, don't you see?
It can be a self-healing system, and capitalism is generally self-healing, but the former is not necessarily the case in all economic systems.
There is a critical point where factors of production and producers leave the market because profit requirements cannot be met in terms of purchasing power (invariant to inflation). You might think those parties are all there is, but that's not the case: there is a third party, the state and its apparatus.
With money printing, any winner chosen by the state becomes its apparatus. Money printing takes many forms, but the most common is debt, more accurately non-reserve debt.
That third entity is not bound by profit constraints and outcompetes the others, rising in the wake of the destruction it causes. This is not self-healing; it's self-sustaining, and slow, and it does collapse given sufficient time.
New jobs aren't being created in sufficient volume to provide for the population. If anything, jobs have been removed en masse on the mere perception that AI can replace people.
You seem to rely heavily on fallacy in your reasoning, specifically survivorship bias. Things are being done that cannot be undone. There are fundamental limits, after which the structures fail.
This is what is coming.
> This is what is coming.
You're saying I rely on a fallacy, survivorship bias, but you have no way of knowing what is coming, and yet you state it so authoritatively.
I resort to evidence from history, because these same arguments happen decade after decade, and the doom scenario has not manifested yet. I also find the anti-AI view narrow-minded. You're only able to imagine one scenario, the dystopian one. And yet none of us know that this is the likely outcome. It could well be that AI actually does increase productivity: we invent new medical cures, we invent new ways to grow food, we clean up our energy generation, and work becomes more optional as governments (who desperately want people to keep electing them) find ways of redistributing all the newly created wealth.
I don't know which will happen, and neither does anyone else.
This is naïve; the government and corporations are already working towards the dystopian result. Just because we don’t “know” doesn’t mean people can’t make an educated guess. You need people to put LLMs on the good path before you can say the bad path won’t happen. Right now people are loyal to the corporations that offer them; that’s the bad path.