Everyone in Seattle hates AI
jonready.com | 784 points by mips_avatar | 13 hours ago
Someone on HN wrote what is (IMO) the main reason people do not accept AI:
AI is about centralisation of power
So basically, only a few companies that hold the large models will have all the knowledge required to do things, and will rent you your own computer while collecting monthly fees. Also see https://be-clippy.com/ for more arguments (like Adobe moving to the cloud to train their model on your work).

For me, AI is just a natural-language query model for text. If I need to find something in text, join it with other knowledge, etc. (things I'd do in SQL if there were an SQL that processed natural language), I do it in an LLM. This enhances my work. Other people, however, seem to feel threatened. I know a person who quit a CS course because AI was solving the algorithmic exercises better than he could. This might cause widespread depression, as we are no longer at the "top". He went into medicine instead, where people will basically be using AI to diagnose patients and AI operators are required (i.e. there is no threat of AI-driven cuts in the public health service).
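Concretely, my workflow looks roughly like this (a minimal sketch using the OpenAI Python client; the model name, file names and the prompt are just placeholders, and any other hosted or local model would do the same job):

    # Minimal sketch: using an LLM as a natural-language "query engine" over text.
    # Assumes the OpenAI Python client (openai>=1.0); model and files are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    contract = open("contract.txt").read()
    price_list = open("price_list.csv").read()

    prompt = (
        "From the contract below, list every deliverable with its deadline, "
        "then join each deliverable against the price list and return a table "
        "of deliverable, deadline, unit price.\n\n"
        f"CONTRACT:\n{contract}\n\nPRICE LIST:\n{price_list}"
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)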
So the world is changing, the power is being gathered, and there is no longer the possibility of "running your own local cloud with OpenOffice and a mail server" to take that power from the giants.
But why not? AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.
As an average consumer, I actually feel like I'm less locked into Gemini/ChatGPT/Claude than I am into Apple or Google for other tech (e.g. photos).
But the opposite is actually true. You can use AI to bypass a lot of SaaS solutions.
So you're saying that bypassing a lot of solutions offered by a mix of small and large providers by using a single solution from a huge provider is the opposite of centralization of power?
With AI-specialized hardware you can run the open-source models locally too, and without the huge provider stealing your precious IP.
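As a minimal sketch (assuming the Hugging Face transformers library and enough local memory; the model name is only an example of an open-weight model), local inference looks roughly like this:

    # Run an open-weight model locally so prompts never leave the machine.
    # Assumes transformers + accelerate are installed and the model fits in memory.
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model
        device_map="auto",                           # use a GPU if one is available
    )

    out = generate(
        "Summarize this design doc in three bullet points:\n<your text here>",
        max_new_tokens=200,
    )
    print(out[0]["generated_text"])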
Ok so a few thoughts as a former Seattleite:
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
The hate starts with the name. LLMs don't have the I in AI. It's like marketing a car as self-driving while all it can do is lane assist.
Then what do they have?
Is the siren song of "AI effect" so strong in your mind that you look at a system that writes short stories, solves advanced math problems and writes working code, and then immediately pronounce it "not intelligent"?
Some people really do hate AI, it's not entirely about the layoffs. This is a well insulated bubble but you can find tons of anti-AI forums online.
Yeah, as a gamer I get a lot of game news in my feeds. Apparently there's a niche of indie games that claim to be AI-free. [0]
And I read a lot of articles about games that seem to love throwing a dig at AI even if it's not really relevant.
Personally, I can see why people dislike Gen AI. It takes people's creations without permission.
That being said, morality of the creation of AI tooling aside, there are still people who dislike AI-generated stuff. They'd enjoy a song, or an image, or a book, and then the moment they find out it's AI they hate it. In my experience playing with ComfyUI to generate images, it's really easy to get something half decent and really hard to get something very high quality. It really is a skill in itself, but people who hate AI think it's just "type a prompt, get an image". I've seen workflows with 80+ nodes, multiple prompts, multiple masks, and multiple LoRAs to generate one single image. It's a complex tool to learn, just like Photoshop. Sure, you can use Nano-Banana to get something, but even then it can take dozens of generations and prompt iterations to get what you want.
[0] https://www.theverge.com/entertainment/827650/indie-develope...
>morality of the creation of AI tooling aside,
That's a big aside
>Like, they'd enjoy a song, or an image, or a book, and then suddenly when they find out it's AI suddenly they hate it.
Yes, because for some people it's about supporting human creation. Finding out it's part of a grift that takes from those humans can be infuriating. People don't want to be a part of that.
Most of the people that dislike genAI would have the exact same opinion if all the training data was paid for in full (whatever a fair price would be for what is essentially just reference material)
Outside of tech, I think the opinion is generally negative. AI has lost a lot of the narrative due to things like energy prices and layoffs.
ChatGPT is one of the most used websites in the world and it's used by the most normal people in the world, in what way is the opinion "generally negative"?
Seemingly the primary economic beneficiaries of AI are people who own companies and manage people. What this means for the average person working for a living is probably a lot of change, additional uncertainty, and additional reductions in their standard of living. Rich get richer, poor get poorer, and they aren't rich.
We'll see how long that lasts with their new ad framework. Probably most normal people are put off by all the other AI being marketed at them. A useful AI website is one thing; AI forced into everything else is quite another. And then they get to hear on the news or from their friends how AI-everything is going to take all the jobs so a few controversial people in tech can become trillionaires.
ChatGPT for the common folk is used in the same way PirateBay is. Something can be "popular" and also "bad"
The argument was that common folk see it as "bad" which is clearly not the case.
Go express a pro-AI opinion or link a salient, accurate AI output on reddit, and watch the downvotes roll in.
I'd agree with this and think it's about more than just your reasons, especially if you venture outside the US, at least from what I've experienced. I've seen it more in places where there are no AI tech hubs around and no way to "get in on the action". Blue-collar workers, who are less threatened and have less to lose, ask me directly: why would anyone want to invent this? It's one of the reasons the average person on the street doesn't relate well to tech workers in general; there is a perceived lack of "street smarts" and self-preservation.
Anecdotally, it's almost like they see them as mad scientists who are happy to blow up themselves and the world if they get to play with the new toy; almost childlike, usually thinking they are doing "good" in the process. Which most people read as a sign of a lack of a certain type of intelligence/maturity.
Globally, the opinion isn't generally negative. It's localized.
Sure, I meant the anglosphere. But in most countries, the less people are aware of technology or use the internet, the less enthusiastic they are about AI.
I don't see the correlation between technology/internet use and man-on-the-street attitudes towards AI. Compare Sweden with Japan.
> Some people really do hate AI
That's probably me, for a lot of people. The reality is a bit more nuanced than that, namely:
- I hate VC funded AI which is actually super shallow (basically OpenAI/Claude wrappers)
- I hate VC funded genuine BigAI that sells itself as the literal opposite of what it is, e.g. OpenAI... being NOT open.
- I hate AI that hides its ecological cost. Generating text, videos, etc. is actually fascinating, but not if making the shittiest video with the dumbest script takes the same amount of energy I'd need to fly across the globe.
- I hate AI that hides its human cost, namely using cheap labor from "far away" where people have to label atrocities (murders, rape, child abuse, etc.) without being provided proper psychological support.
- I hate AI that embodies capitalist principles of exploitation. If your entire AI business relies on a pyramid of everything listed above to capture a market and then hike the price once dependency is entrenched, you might be a brilliant businessman but you suck as a human being.
etc... I could go on but you get the idea.
I do love open source public AI research though. Several of my very good friends are researchers in universities working on the topic. They are smart, kind and just great human beings. Not fucking ghouls riding the hype with 0 concern for our World.
So... yes, maybe AI haters have a slightly more refined perspective, but of course, when one summarizes whatever text they see in 3 words via their favorite LLM, it's hard to see.
Some people find their life's meaning through craft and work. When that craft suddenly becomes less scarce and less special, so does the meaning tied to it.
I wonder if these feelings are what scribes and amanuenses felt when the printing press arrived.
I do enjoy programming; I like my job and take pride in it, but I actively try not to make it the activity that gives my life meaning. I'm just a mercenary of my trade.
The craft isn't any less scarce. If anything, only more. The craft of building wooden furniture is just as scarce as ever, despite the existence of Ikea.
Which is why the only woodworkers who survive are the ones with enough customers willing to pay premium prices for furniture, or those lucky enough to live in countries where Ikea-like shops aren't yet a thing.
They are also the people who are able to see the most clearly how subpar generative-AI output is. When you can't find a single spot without AI slop to rest your eyes on and see it get so much praise, it's natural to take it as a direct insult to your work.
I mean, I would still hate to be replaced by some chat bot (without being fairly compensated because, societally, it's kind of a dick move for every company to just fire thousands of people and then nobody can find a job elsewhere), but I wouldn't be as mad if the damn tools actually worked. They don't. It's one thing to be laid off, it's another to be laid off, ostensibly, to be replaced by some tool that isn't even actually thinking or reasoning, just crapping out garbage.
And I will not be replying to anyone who trots out their personal AI success story. I'm not interested.
The tech works well enough to function as an excuse for massive layoffs. When all that is over, companies can start hiring again. Probably with a preference for employees that can demonstrate affinity with the new tools.
I don’t think people hate models. They hate that techbros are putting LLMs in places they don’t belong … and then trying to anthropomorphize the thing finding what best rhymes with your prompt as “reasoning” and “intelligence” (which it isn’t).
The layoffs are due to tax incentives in the tax cut bills that financially incentivize offshoring work.
> You were a therapy session for her. Her negativity was about the layoffs.
I think there is no "her", the article ends with saying:
> My former coworker—the composite of three people for anonymity—now believes she's [...]
I think it's just 3 different people and they made up a "she" single coworker as a kind of example person.
I don't know, that's my reading at least, maybe I got it wrong.
I hate to be cagey here but I just really don’t want to make anyone’s life harder than it needs to be by revealing their identity. Microsoft is a really tough place to be an employee right now.
>FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
Close. We're in a recession and they are using AI as an excuse for another wave of outsourcing.
>I don't think people hate AI, they hate the hype.
I hate the grift. I hate having it forced on me after refusing multiple times. That's pretty much 90% of AI right now.
I think these companies would benefit from honesty, if they're right and their new AI capabilities are really powerful then poisoning their workforce against AI is the worst thing they could do right now. Like a generous severance approach and compassionate layoffs would go a long way right now.
Thanks for signing up. I’m going to try really hard to open up some beta slots next week so more people can try it. There are some embarrassingly bad bugs in prod right now…
I was an early employee at a unicorn and I saw this culture take hold once we started hiring from Big Tech talent pools and offering Big Tech comp packages, though before AI hype took off. There's a crazy lack of agency that kicks in for Big Tech folks that's really hard to explain. This feeling that each engineer is this mercenary trying really hard to not get screwed by the internal system.
Most of it is because there's little that ties actual output to organizational outcomes. AI mandates, after all, are just a way to bluntly force engineers to use AI, whereas at a startup or smaller company you would probably organically find out how much an LLM helps you and where. It may not even help your actual work even if it helps your coworkers. That market feedback is sorely missing from the Big Techs, and so ham-fisted engineering mandates have to do in order to force engineers to become more efficient.
In these cases I always try to remind friends that you can always leave a Big Tech. The thing is, from what I can tell, a lot of these folks have developed lifestyle inflation from working in Big Tech and some of their anger comes from feeling trapped in their Big Tech role due to this. While I understand, I'm not particularly sympathetic to this viewpoint. At the end of the day your lifestyle is in your hands.
It’s not only the hype though.
What about the complete lack of morality some (most?) AI companies exhibit?
What about the consequences in the environment?
What about the enshittification of products?
What about the usage of water and energy?
Etc.
Does your "etc." just keep repeating the same two points you already made in your list of four?
Not to diminish your overall point, but enshittification has been happening well before AI, AI just made it much easier and faster to enshittify everything.
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.
I feel fatigued by AI. To be more precise, this fatigue includes several factors. The first one is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things at a given interval in time. Ideally I would like to focus and go deep in those things. Often, I need to learn something new and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, try to build a theoretical framework when one can just present the problem to a model. I use LLMs too but it is more satisfying, productive, insightful when one actually thinks hard and understands a topic before using LLMs.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use models. The tiresome part now is some of the humans around the technology who participate in the behaviors listed above.
I get excited by new model releases, try it, switch it to default if I feel it's better, and then I move on. I don't understand why any professional SWE should engage in weird cultish behavior about these models, it's a better mousetrap as far as I'm concerned
> Engineers don't try because they think they can't.
This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude they're pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).
A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.
New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.
> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.
"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.
One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.
> the other engineers have lost all ambition for anything else
Worse, they've lost all funding for anything else.
Industries are built upon shit people built in their basements, get hacking
I think it should be noted that a garage or basement in California costs like a million dollars.
I am! No one's interested in any of it though...
You need to buy fake stars on GitHub, fake-download it 2 million times a day, and ask an AI to spam about it on Twitter/LinkedIn.
Are you sure it refers to that? Why would it later say:
> now believes she's both unqualified for AI work
Why would she believe to be unqualified for AI work if the "Engineers don't try" wasn't about her trying to adopt AI?
ZIRP is gone, and so are the Good Times when any idiot could get money with nothing but a PowerPoint slide deck and some charisma.
That doesn't mean investors have gotten smarter, they've just become more risk averse. Now, unless there's already a bandwagon in motion, it's hard as hell to get funded (compared to before at least).
“Lost all ambition for anything else” is a funny way for the article to frame “hate being forced to run on the hamster wheel of AI, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it”
To add another layer to it, the reason execs are foaming at the mouth is that they are hoping to fire as many people as possible. Including those who implemented whatever AI solution in the first place.
The most ironic part is that AI skills won't really help you with job security.
You touched on some of the reasons; it doesn't take much skill to call an API, the technology is in a period of rapid evolution, etc.
And now with almost every company trying to adopt "AI" there is no shortage of people who can put AI experience on their resume and make a genuine case for it.
Maybe not what the OP or article is talking about, but it's super frustrating recently dealing with non/less-technical managers, PMs, etc. who now think they have this Uno card to bypass technical discussion just because they vibe-coded some UI demo. Like, no shit, that wasn't the hard part. But since they don't see the real, less visible parts like data/auth/security, they act like engineers "aren't trying", are less innovative, anti-AI or whatever when you bring up objections to their "whole app" they made with their AI snoopy snow cone machine.
Hmm, (whatever is in execs' heads about) AI appears to amplify the same kind of thinking fallacies that are discussed in the eternal Mythical Man-Month essay, which was written like half a century ago. Funny how some things don't change much...
My experience too. They are so convinced that AI is magical that pushing back makes you look bad.
Then things don't turn out as they expected and you have to deal with a dude thinking his engineers are messing with him.
It's just boring.
It reminds me of how we moved from "mockups" to "wireframes" -- in other words, deliberately making the appearance not look like a real, finished UI, because that could give the impression that the project was nearly done
But now, to your point: they can vibe-code their own "mockups" and that brings us back to that problem
> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's a lot of disconnected-from-reality hustling (a.k.a lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think it's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.
I’ve been an engineer for 20 years, for myself, small companies, and big tech, and now working for my own saas company.
There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.
Punishment eh? Serves them right for being skeptical.
I've been around long enough that I have seen four hype cycles around AI like coding environments. If you think this is new you should have been there in the 80's (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the 60's (which I did not personally witness on account of being a toddler), when COBOL, the language for managers was all the rage.
In between there was LISP, the AI language (and a couple of others).
I've done a bit more than looking at this and saying 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive and they might be a little bit more productive, but the AI has its own envelope of expertise, and the closer you are to the top of the field the smaller your returns in that particular setting will be.
In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.
Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than wikipedia for a quick overview of some subfield (but you better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers that are not using AI are failing at their job, and it is at best - for me - a very mild accelerator in some use cases. I've seen enough AI driven coding projects strand hopelessly by now to know that there are downsides to that golden acorn that you are seeing.
The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer by way of verification the answers were so laughably incorrect that it was embarrassing.
I'm not a big llm booster, but I will say that they're really good for proof of concepts, for turning detailed pseudocode into code, sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuch), lived through a few attempts at visual programming (ugh), and ... LLM assistance is different. It's not magic and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.
And for the better. I've honestly not had this much fun programming applications (as opposed to student stuff and inner loops) in years.
> but it does quite well with boring stuff that's still a substantial amount of programming.
I'm happy that it works out for you, and probably this is a reflection of the kind of work that I do: I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools, even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD, but I am never limited by my typing speed, much more so by thinking about what it is that I actually want to make.
I've used it a little for openscad with mixed results - sometimes it worked. But I'm a beginner at openscad and suspect if I were better it would have been faster to just code it. It took a lot of English to describe the shape - quite possibly more than it would have taken to just write in openscad. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([5, 3, 2]) ... and as you say, the hard part is before the openscad anyway.
OpenSCAD has a very steep learning curve. The big trick is not to think sequentially but to design the part 'whole'. That requires a mental switch. Instead of building something and then adding a chamfered edge (which is possible, but really tricky if the object is complex enough) you build it out of primitives that you've already chamfered (or beveled). A strategic 'hull' here and there to close the gaps helps a lot.
Another very useful trick is to think in terms of the vertices of your object rather than the primitives created by those vertices. You then put hulls over the vertices, and if you use little spheres for the vertices the edges take care of themselves.
This is just about edges and chamfers, but the same kind of thinking applies to most of OpenSCAD. If I compare how productive I am with OpenSCAD vs using a traditional step-by-step UI driven cad tool it is incomparable. It's like exploratory programming, but for physical objects.
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?
There's certainly potential but a lot of the market is hot air right now.
> Either way, the market is going to punish them accordingly.
I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.
IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.
The skeptics are the ones that have tried AI coding agents and come away unimpressed because it can’t do what they do. If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.
I think part of this is that there is no one AI and there is no one point in time.
The other day Claude Code correctly debugged an issue for me, seen in production, in a large product. It found a bug a human wrote and a human reviewed, and fixed it. For those interested, the bug had to do with chunk decoding: the author incorrectly re-initialized the decoder inside the loop for every chunk. So a single chunk works; more than one chunk fails.
I was not familiar with the code base. Developers who worked on the code base spent some time and didn't figure out what was going on. They also were not familiar with the specific code. But once Claude pointed this out that became pretty obvious and Claude rewrote the code correctly.
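The shape of that bug, as a standalone sketch (not the actual production code; a UTF-8 streaming decoder stands in purely for illustration):

    # Re-creating a streaming decoder inside the loop throws away state that spans
    # chunk boundaries, so one chunk works and multiple chunks break.
    import codecs

    def decode_chunks_buggy(chunks):
        out = []
        for chunk in chunks:
            decoder = codecs.getincrementaldecoder("utf-8")()  # BUG: new decoder per chunk
            out.append(decoder.decode(chunk))
        return "".join(out)

    def decode_chunks_fixed(chunks):
        decoder = codecs.getincrementaldecoder("utf-8")()      # one decoder for the stream
        return "".join(decoder.decode(chunk) for chunk in chunks)

    data = "héllo".encode("utf-8")
    chunks = [data[:2], data[2:]]       # the 2-byte 'é' is split across the chunks
    print(decode_chunks_fixed(chunks))  # 'héllo'
    try:
        print(decode_chunks_buggy(chunks))
    except UnicodeDecodeError as e:
        print("buggy version fails once there is more than one chunk:", e)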
So when someone tells me "there's not much there" and the evidence says the opposite, I'm going to believe my own lying eyes. And yes, I could have done this myself, but Claude did it much faster and correctly.
That said, it does not handle all tasks with the same consistency. Some things it can really mess up. So you need to learn what it does well and what it does less well and how and when to interact with it to get the results you want.
It is automation on steroids with near-human (let's say intern-level) capabilities. It makes mistakes, sometimes stupid ones, but so do humans.
>So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.
If the stories were more like this where AI was an aid (AKA a fancy auto complete), devs would probably be much more optimistic. I'd love more debugging tools.
Unfortunately, the lesson an executive here would take is "wow, AI is great! Fire those engineers who didn't figure it out". Then it creeps to "okay, have AI make a better version of this chunk decoder". Which is wrong on multiple levels. Can you imagine if the result of using IntelliSense for the first time was slashing your office in half? I'd hate autocomplete too.
> simply because the market has never really punished people for being less efficient at their jobs
In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.
Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).
In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.
This is very true. So you can't just ask people to use AI and expect better output even if AI is all the hype. The bottlenecks are not how many lines of code you can produce in a typical big team/company.
I think this means a lot of big businesses are about to get "disrupted", because small teams can become more efficient, since for them the sheer generation of sometimes-boilerplate, low-quality code actually is a bottleneck.
What's "there" though is that despite being wrappers of chat gpt, the product itself is so compelling that it's essentially got a grip on the entire american economy. That's why everyone's crabs in a bucket about it, there's something real that everyone wants to hitch on to. People compare crypto or NFTs to this in terms of hype cycle, but it's not even close.
>there's something real that everyone wants to hitch on to.
Yeah, stock prices, unregulated consolidation, and a chance to replace the labor market. Next to penis enhancement, it's a CEO's wet dream. They will bet it all for that chance.
Granted, I think its hastiness will lead to a crash, so the CEOs played themselves short term.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.
I would argue that the "actual job" is simply to solve problems. The client / customer ultimately do not care what technology you use. Hell, they don't really care if there's technology at all.
And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.
Again, AI isn’t the right tool for every job, but that’s not the same thing as a shallow dismissal.
What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".
>To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems
This feels like a mentality of "a solution trying to find a problem". There's enough actual problems to solve that I don't need to create more.
But sure, the extension of this is: then they go home, research more usages, see a kerfuffle of legal, community, and environmental concerns, and decide not to get involved in the politics.
>Either way, the market is going to punish them accordingly.
If you want to punish me because I gave evaluations you disagreed with, you're probably not a company I want to work for. I'm not a middle manager.
Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.
I would've thought that in 20 years you would have met other devs who do not think like you?
something I enjoy about our line of work is there are different ways to be good at it, and different ways to be useful. I really enjoy the way different types of people make a team that knows its strengths and weaknesses.
anyway, I know a few great engineers who shrug at the agents. I think different types of thinker find engagement with these complex tools to be a very different experience. these tools suit some but not all and that's ok
> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction
Or, and stay with me on this, it’s a reaction to the actual experience they had.
I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD type stuff, or making a web page to display some data). But when I try to use it with the core parts of my job that actually require my specialist knowledge they fall apart. I spend more time correcting them than if I just write it myself.
Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.
It really depends on what you’re doing. AI models are great at kind of junior programming tasks. They have very broad but often shallow knowledge - so if your job involves jumping between 18 different tools and languages you don’t know very well, they’re a huge productivity boost. “I don’t write much sql, or much Python. Make a query using sqlalchemy which solves this problem. Here’s our schema …”
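For flavor, here's a hypothetical version of what such a prompt comes back with (the schema and the "total spend per user" question are made up, not the one elided above):

    # Hypothetical sqlalchemy answer of the kind described: total spend per user.
    from sqlalchemy import (MetaData, Table, Column, Integer, String, Numeric,
                            ForeignKey, create_engine, select, func)

    metadata = MetaData()
    users = Table("users", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("name", String))
    orders = Table("orders", metadata,
                   Column("id", Integer, primary_key=True),
                   Column("user_id", Integer, ForeignKey("users.id")),
                   Column("total", Numeric))

    engine = create_engine("sqlite:///:memory:")
    metadata.create_all(engine)
    with engine.begin() as conn:
        conn.execute(users.insert(), [{"id": 1, "name": "alice"}])
        conn.execute(orders.insert(), [{"id": 1, "user_id": 1, "total": 42}])

    stmt = (select(users.c.name, func.sum(orders.c.total).label("spend"))
            .join(orders, orders.c.user_id == users.c.id)
            .group_by(users.c.name))
    with engine.connect() as conn:
        for row in conn.execute(stmt):
            print(row.name, row.spend)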
AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months experience in the language. For anything more complex than a react component, I just wouldn’t ship what it writes.
I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.
I use AI all the time, but the only gain it gives me is better spelling and grammar than mine. Spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it - typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code AI writes since it is incorrect rather often, so it isn't saving any effort, other than the time to look up the middle word of some long variable name.
My dismissal I think indicates exhaustion from the additional work I’d need to do to make an LLM write my code, annoyance at its inaccuracies, and disgust at the massive scam and grift that is the LLM influencers.
Writing code via an LLM feels like writing with a wet noodle. It's much faster to write what I mean, myself, with the terseness and precision of my own thought.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,
I don't understand why people seem so impatient about AI adoption.
AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better (Only true for a subset of tasks). In areas where either of these cases apply, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy - AI needs to be a better product and good products take time.
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
I have solved more problems with tools like sed and awk, you know, actual tools, more than I’ve entered tokens into an LLM.
Nobody seemed to give a fuck as long as the problem was solved.
This is getting out of hand.
Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.
> A whole new class of problems just became solvable.
This is almost by definition not really true. LLMs spit out whatever they were trained on, mashed up. The solutions they have access to are exactly the ones that already exist, and for the most part those solutions will have existed in droves to have any semblance of utility to the LLM.
If you're referring to "mass code output" as "a new class of problem", we've had code generators of differing input complexity for a very long time; it's hardly new.
So what do you really mean when you say that a new class of problems became solvable?
I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)
I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)
FTR The market is currently punishing people that DO use it (CVs are routinely being dumped at the merest hint of AI being used in its construction/presentation, interviewers dumping anyone that they think is using AI for "help", code reviewers dumping any take home assignments that have even COMMENTS massaged by AI)
This isn’t “unfair”, but you are intentionally underselling it.
If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.
Edit: lol this forum :)
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right
I AM very impressed, and I DO use it and enjoy the results.
The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.
Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.
But I'm already done getting used to what is possible now. Changes after that have been incremental, nice to have and I take them. I found a place for the tool, but if it wanted to match the hype another equally large step in actual intelligence is necessary, for the tool to truly be able to replace humans.
So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.
I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.
I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.
I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.
The big problem is AI is amazing at doing the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues it will be hopeless. Once your system gets complex enough the AI effectiveness drops off rapidly and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.
In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.
I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.
I think the 90/90 rule comes into play. We all know Tom Cargill quote (even if we’ve never seen it attributed):
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.
So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.
The other problem is that if you didn't actually write the first 90% then the second 90% becomes 2x harder since you have to figure out wtf is actually going on.
The more I use AI for coding, the more I realize that it's a toy for vibe coding/fun projects. It's not for serious work.
When you work with a large codebase that has a very high complexity level, the bugs AI puts in there aren't worth the cost of the easily added features.
Many people also program and have no idea what a giant codebase looks like.
I know I don't. I have never been paid to write anything beyond a short script.
I actually can't even picture what a professional software engineer actually works on day to day.
From my perspective, it is completely mind blowing to write my own audio synth in python with Librosa. A library I didn't know existed before LLMs and now I have a full blown audio mangling tool that I would have never been able to figure out on my own.
It seems to me professional software engineering must be at least as different to vibe coding as my audio noodlings are to being a professional concert pianist. Both are audio and music related but really two different activities entirely.
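For flavor, the kind of thing that is now within reach (a toy sketch assuming librosa and soundfile, not the actual tool described above):

    # Toy "audio mangling": a sine tone plus a pitch sweep with a slow tremolo.
    import librosa
    import numpy as np
    import soundfile as sf

    sr = 22050
    tone = librosa.tone(440.0, sr=sr, duration=2.0)                      # A4 sine
    sweep = librosa.chirp(fmin=110.0, fmax=880.0, sr=sr, duration=2.0)   # rising sweep

    t = np.arange(len(tone)) / sr
    tremolo = 0.5 * (1.0 + np.sin(2.0 * np.pi * 4.0 * t))                # 4 Hz wobble
    mix = 0.5 * (tone + sweep) * tremolo

    sf.write("noodle.wav", mix, sr)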
I work on a stock market trading system in a big bank, in Hong Kong.
The code is split between a backend in Java (no GC allowed during trading) and C++ (for algos), a frontend in C# (as complex as the backend, used by 200 traders), and a "new" frontend in Javascript in infinite migration.
Most of the code was written before 2008, but that was the cvs-to-svn switch, so we lost the history before that. We have employees dating back to 1997 who remember the platform already existing.
It's made of millions of lines of code, hundreds of people worked on it, it does intricate things in 10 stock markets across Asia (we have no clue how the others in US or EU do, not really at least - it's not the same rules, market vendors, protocols etc)
Sometimes I need to configure new trading robots for some random little thing we want to do automatically, and I ask the AI the company is shoving down our throats. It is HOPELESS, literally hopeless. I wrote my manager a review that was absolutely destructive, which they will never pass up the ladder for fear of the response. It cannot understand the code, let alone write some; it cannot write the tests, it cannot generate configuration, it cannot help with anything. It's always wrong, it never gets it, it doesn't know what the fuck these 20 different repos of thousands of files are and how they connect to each other, why it's in so many languages, why it's so quirky sometimes.
Should we change it all to make it AI-compatible, or give up? Fuck do I know... When I started working on it 7 years ago, coming from little startups doing little things, it took me a few weeks to totally get the philosophy of it all and be productive. It's really not that hard, it's just really really really really large, so you have to embrace certain ways of working (for instance, you'll write bugs, and you'll find them too late, and you'll apologize in post-mortems; don't be paralyzed by it). AIs costing all that money to be so dumb and useless are disappointing :(
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Or your job isn't what AI is good at?
AI seems really good at greenfield projects in well known languages or adding features.
It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.
> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.
This is precisely my experience.
Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.
Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.
> It's been pretty awful, IME, at working with less well-known languages
Well, there’s your problem. You should have selected React while you had the chance.
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
I wonder if this issue isn't caused by people who aren't programmers, and now they can churn out AI-generated stuff that they couldn't before. So to them, this is a magical new ability. Whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they didn't bother with before. But the real artisan cat memeists just roll their eyes.
AI is better than you at what you aren’t very good at. But once you are even mediocre at doing something you realize AI is wrong / pretty bad at doing most things and every once in awhile makes a baffling mistake.
There are some exceptions where AI is genuinely useful, but I have employees who try to use AI all the time for everything and their work is embarrassingly bad.
>AI is better than you at what you aren’t very good at.
Yes, this is better phrased.
> If you haven’t had a mind blown moment with AI yet...
Results are stochastic. Some people the first time they use it will get the best possible results by chance. They will attribute their good outcome to their skill in using the thing. Others will try it and will get the worst possible response, and they will attribute their bad outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.
> Edit: lol this forum :)
Indeed.
In European consulting agencies the trend now is to make AI part of each RFP reply; you won't get past the sales team if AI isn't crammed in there as part of the solution being delivered, and we get evaluated on it.
This takes all the joy away; even traditional maintenance projects for big corps seem attractive nowadays.
I remember when everything had to have the word ‘digital’ in it. And I’m old enough to remember when ‘multimedia’ was a buzzword that was crammed in anywhere it would fit.
You know what, this clarifies something for me.
PC, Web and Smartphone hype was based on "we can now do [thing] never done before".
This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.
>> This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
I was doing RPA (robotic process automation) 8 years ago. Nobody wanted it in their departments. Whenever we would do presentations, we were told to never, ever, ever talk about this technology replacing people - it only removes the mundane work so teams can focus more on the bigger scope stuff. In the end, we did dozens and dozens of presentations and only two teams asked us to do some automation work for them.
The other leaders had no desire to use this technology because they were not only fearful of it replacing people on their teams, they were fearful it would impact their budgets negatively so they just quietly turned us down.
Unfortunately, you're right because as soon as this stuff gets automated and you find out 1/3rd of your team is doing those mundane tasks, you learn very quickly you can indeed remove those people since there won't be enough "big" initiatives to keep everybody busy enough.
The caveat was even on some of the biggest automations we did, you still needed a subset of people on the team you were working with to make sure the automations were running correctly and not breaking down. And when they did crash, since a lot of these were moving time sensitive data, it was like someone just stole the crown jewels and suddenly you need two war rooms and now you're ordering in for lunch.
Yes and no. PC, Web, etc advancements were also about lowering cost. It’s not that no one could do some thing, it’s that it was too expensive for most people, e.g. having a mobile phone in the 80’s.
Or hiring a mathematician to calculate what is now done in a spreadsheet.
100%.
"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product
I think there is a broader dichotomy between the people-persuasion-plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something, and hype plays here, as do marketing, religion, and political persuasion. In the real-world plane, it is all about tangible outcomes, and working code or results play here, as do gravity and electromagnetism. Sometimes there is a reflex loop between the two. I chose the engineering career because what I produce is tangible, but I realize that a lot of my work is in the people-plane.
>a broader dichotomy between the people-persuasion-plane, and the real-world-facts plane
This right here is the real thing which AI is deployed to upset.
The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.
The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.
That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.
My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended, and the productivity/wages gap began to grow. Something pertaining to the shared-culture-plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact how nobody seems to trust it very much any more.
This sounds a lot like the Marxist concept of alienation: https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
> But moving toward one pole moves you away from the other.
My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope - and in some cases they are right - that those two poles overlap. People make and lose fortunes on placing those bets and you don't necessarily have to be right or wrong in an absolute sense, just long enough that someone else will take over your load and hopefully at a higher valuation.
Engineers are not usually the ones placing the bets, which is why they're trying to stay away from hype driven tech (to them it is neutral with respect to the outcome but in case of a failure they lose their job, so better to work on things that are not hyped, it is simply safer). But as soon as engineers are placing bets they are just as irrational as every other class of investor.
I've never worked at Microsoft. However, I do have some experience with the company.
I worked building tools within the Microsoft ecosystem, both on the SQL Server side, and on the .NET and developer tooling side, and I spent some time working with the NTVS team at Microsoft many years ago, as well as attending plenty of Microsoft conferences and events, working with VSIP contacts, etc. I also know plenty of people who've worked at or partnered with Microsoft.
And to me this all reads like classic Microsoft. I mean, the article even says it: whatever you're doing, it needs to align with whatever the current key strategic priority is. Today that priority is AI, 12 years ago it was Azure, and on and on. And, yes, I'd imagine having to align everything you do to a single priority regardless of how natural that alignment is (or not) gets pretty exhausting, and I'd bet it's pretty easy to burn out on it if you're in an area of the business where this is more of a drag and doesn't seem like it delivers a lot of value. And you'll have to dogfood everything (another longtime Microsoft pattern) core to that priority even if it's crap compared with whatever else might be out there.
But I don't think it's new: it's simply part and parcel of working at Microsoft. And the thing is, as a strategy it's often served them well: Windows[0], Xbox, SQL Server, Visual Studio, Azure, Sharepoint, Office, etc. Doesn't always work, of course: Windows Phone went really badly, but it's striking that this kind of swing and a miss is relatively rare in Microsoft's history.
And so now, of course, they're doing it with AI. And, of course, they're a massive company, so there will be plenty of people there who really aren't having a good time with it. But, although it's far from a foregone conclusion, it would not be a surprise for Microsoft to come from behind and win by repeating their usual strategy... again.
[0] Don't overread this: I'm not necessarily saying I'm a huge fan. In fact I do think Windows, at its core, is a decent operating system, and has been for a very long time. On the back end it works well, and I have no complaints. But I viscerally despise Windows 11 as a desktop operating system. That's right: DESPISE. VISCERALLY. AT A MOLECULAR LEVEL.
I do assume that; I legitimately think it's the most important thing happening in tech in the next decade. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.) and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
This somewhat reflects my sentiment to this article. It felt very condescending. This "self-limiting beliefs" and the implication that Seattle engineers are less than San Francisco engineers because they haven't bought into AI...well, neither have all the SF engineers.
One interesting takeaway from the article and the discussion is that there seem to be two kinds of engineers: those who buy into the hype and call it "AI," and those who see it for the fancy search engine it is and call it an "LLM." I'm pretty sure these days when someone mentions "AI" to me I roll my eyes. But if they say "LLM," ok, let's have a discussion.
> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.
> often companies with real products will mix in tidbits of hype
The wealthiest person in the world relies entirely on his ability to convince people to accept hype that surpasses all reason.
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Spot. Fucking. On.
Thank you.
The list of people who write code, use high-quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators" and displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc). Put on top of that the way corporate America is absolutely doing the very familiar "blockchain" dance on this, insisting everyone has to do AI all the time everywhere, and you have a huge problem that will hopefully shake out some in the coming years.
But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it might not be for them, but to claim it's "hype" is not possible.
> There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it might not be for them, but to claim it's "hype" is not possible.
I've tried implementing features with Claude Code Max and if I had let that go on for a week instead of just a couple of days I would've lost a week's worth of work (it was pretty immediately obvious that it was too slow at doing pretty much everything, and even the slightest interaction with the LLM caused very long round-trips that would add additional time, over and over and over again). It's possible people simply don't do the kind of things I do. On the extreme end of that, had I spent my days making CRUD apps I probably would've thought it was magic and a "game changer"... But I don't.
I actually don't have a problem believing that there are people who basically only need to write 25% of their code now; if all you're doing for work is gluing together libraries and writing boilerplate then of course an LLM is going to help with that, you're probably the 1000th person that day to ask for the same thing.
The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.
P.S.:
I have found that very short completions, 1-3 lines, are a lot more productive for me personally than any kind of "generate this feature", or even function-sized generation. The reason is likely that LLMs just suck at the things I do, but they can figure out that a pattern exists in the pretty immediate context and just spit out that pattern with some context clues nearby. That remains my best experience with any and all LLM-assisted coding. I don't use it often because we don't allow LLMs for work, but I have a keybind for querying for a completion when I do side projects.
The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It's sad, I always thought of my fellow engineers as more open-minded.
> The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions.
so, people with experience?
Obviously. Turns out experience can be self-limiting in the face of paradigm-shifting innovation.
In hindsight it makes sense, I’m sure every major shift has played out the same way.
> Turns out experience can be self-limiting in the face of paradigm-shifting innovation.
It also turns out that experience can be what enables you to not waste time on trendy stuff which will never deliver on its promises. You are simply assuming that AI is a paradigm shift rather than a waste of time. Fine, but at least have the humility to acknowledge that reasonable people can disagree on this point instead of labeling everyone who disagrees with you as some out of touch fuddy-duddy.
Bitcoin is at 93k so I don’t think it’s entirely accurate to say blockchain is insubstantive or without value
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash; that doesn't mean dogshit is particularly valuable or substantive either.
I'd argue even dogshit would have more practical use than Bitcoin, if no one paid money for Bitcoin. You can throw it in self-defence, compost it (under high heat to kill the germs), or put it on your property to scare away raccoons (it works sometimes).
Bitcoin and other crypto coins have a practical use. You can use them to buy whatever is being sold on the darkweb with the main product categories being drugs and guns. I honestly believe much of the valuation of Crypto is tied to these marketplaces.
And by "dog feces," I assume you mean fiat currency, correct?
Cryptocurrency solves the money-printing problem we've had around the world since we left the gold standard. If governments stopped making their currencies worthless, then bitcoin would go to zero.
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.
Bitcoin is probably unkillable. Even if it were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.
True, but then so is a lot of "tech". There were certainly at least equivalent social applications before and all throughout Facebook's dominance, but like Bitcoin, the network effect becomes primary after a minimum feature set.
For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.
Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.
Technically stagnant is a good thing; I'd prefer the term technically mature. It's accomplished what it set out to do, which is to be a decentralized, anonymous form of digital currency.
The only thing that MIGHT kill it is if governments stopped printing money.
Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.
very little of the trading actually happens on the blockchain, it's only used to move assets between trading venues.
The values of bitcoin are:
- easy access to trading for everyone, without institutional or national barriers
- high leverage, making it easy to effectively borrow a lot of money to trade with
- new derivative products that streamline the process and make speculation easier than ever
The blockchain plays very little part in this. If anything it makes borrowing harder.
I agree with "easy access to trading for everyone, without institutional or national barriers"
how on earth does bitcoin have anything to do with borrowing or derivatives?
in a way that wouldn't also work for beanie babies
Those are the main innovations tied to crypto trading. They do indeed have little to do with the blockchain or bitcoin itself, and do apply to any asset.
There are actually several startups whose pitch is to bring back those innovations to equities (note that this is different from tokenized equities).
If you can't point to real use cases at scale, it's hard to argue it has intrinsic value even though it may have speculative value.
Uh… So the argument here is that anticipated future value == meaningful value today?
The whole cryptocurrency world requires evangelical buy-in. There is no directly created functional value other than a historic record of transactions and hypothetical decentralization; it doesn't directly create value, it's a store of it - again, assuming enough people continue to buy into the narrative so that it doesn't dramatically deflate when you need to recover your assets. States and other investors are helping make stability happen to maintain it as a value store, but you need the story to keep propagating to achieve those ends.
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).
AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?
> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.
What do you view as the potential that’s been stated?
Not OP but for starters LLMs != AI
LLMs are not an intelligence, and people who treat them as if they are infallible Oracles of wisdom are responsible for a lot of this fatigue with AI
>Not OP but for starters LLMs != AI
Please don't do this; don't just make up your own definitions.
Pretty much anything and everything that uses neural nets is AI. Just because you don't like how the definition has been since the beginning doesn't mean you get to reframe it.
In addition, since humans are not infallible oracles of wisdom, they wouldn't count as an intelligence under your definition either.
Why, then, is there an AI-powered dishwasher but no AI car?
I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
Cannot find any mention of AI there.
Also it's funny how they add (supervised) everywhere. It looks like "Full self driving (not really)"
Yes, one needs some awareness of the technology. Computer vision: unambiguously AI. Motion planning: there are classical algorithms, but I believe Tesla and Waymo both use NNs here too.
Look I don't like the advertising of FSD, or musk himself, but we without a doubt have cars using significant amounts of AI that work quite well.
It's because nobody was trying to take video game behavior scripts and declare them the future of all things technology.
Ok? I'm not going to change the definition of a 70 year old field because people are annoyed at chatgpt wrappers.
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.
In those cases the actual "new" technology (ie, not the underlying ai necessarily) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) llm.
(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.
When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM's, but obviously that bespoke code is much more interesting to me as a fellow programmer)
Shells around chatgpt are fine if they provide value.
Way better than AI jammed into every crevice for no reason.
It's not just that AI is being pushed onto employees by the tech giants - this is true - but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing is eliminating jobs in and out of the tech industry, a self-fulfilling prophecy driven by CEOs who have bet too much on AI. Everyone currently agrees that there is no return on all the money spent on AI. Some players may survive and do well in the future, but for the majority there is only the prospect of pain, and this is what all the negativity is about.
As a layoff justification and a hurry-up tool, it is pretty loathsome. People use their jobs for their housing, food, etc.
More than this, man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could easily be extended in new forms to the labor situation in an AI-based economy. FWIW, humans derive a lot of their self-evaluation as people from labor.
Marx was correct in his identification of the problem (the communist manifesto still holds up today). Marx went off the rails with his solution.
Getting everyone to even agree that this is a problem is impossible. I'm open to the universe of solutions, as long as it isn't "Anthropic and OpenAI get another $100 billion while we starve". We can probably start there.
It's a problem, it's just not the root problem.
The root problem is nepo babies.
Whether it's capitalism or communism or whatever China has currently - it's all people doing everything to give their own children every unfair advantage and lie about it.
Why did people flee to America from Europe? Because Europe was nepo baby land.
Now America is nepo baby land and very soon China will be nepo baby land.
It's all rather simple. Western 'culture' is convincing everyone the nepo babies running things are actually uber experts because they attended university. Lol.
Yeah, unfortunately Marx was right about people not realizing the problem, too. The proletariat drowns in false consciousness :(
In reality, the US is finally waking up to the fact that the "golden age" of capitalism in the US was built upon the lite socialism of the New Deal, and that all the bs economic opinions the average American has subscribed to over the past few decades were just propaganda. Anyone with half a brain cell could see from miles away that since Reaganomics we've had nothing but a system that leads to gross accumulation at the top and at the top alone, and this is a surefire way (variable maximization) in any complex system to produce instability and eventual collapse.
> humans derive a lot of their self-evaluation as people from labor.
We're conditioned to do so, in large part because this kind of work ethic makes exploitation easier. Doesn't mean that's our natural state, or a desirable one for that matter.
"AI-based economy" is too broad a brush to be painting with. From the Marxist perspective, the question you should be asking is: who owns the robots? and who owns the wealth that they generate?
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
Definitely, AI sentiment is positive among most people at the small startup I work at in the Seattle area. I do see the "AI fatigue" too; I bet the majority of it comes from AI being used as a repeated layoff rationalization. Personally, AI is a tool, one of the more useful ones (e.g. Claude and Gemini thinking models make quite helpful code reviewers once given a checklist). The hype often overshadows these benefits.
I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
I do believe that product leadership is shoehorning it into every nook and cranny of the world right now, and there are reasons to be annoyed by that, but there are also countless incredible, mind-blowing use cases that you can use it for every day.
I need to write about some absolutely life-changing scenarios, including:
- got me thousands of dollars after it drafted a legal letter quoting laws I knew nothing about
- saved me countless hours troubleshooting an RV electrical problem
- found bugs in code that I wrote that were missed by everyone around me
- my wife was impressed with my seemingly custom week-long meal plan that fit her short-term no-soy/dairy allergy diet
- helped me solve an issue with my house that a trained professional completely missed the mark on
- completely designed and wrote code for a Halloween robot decoration I had been trying to build for years
- saves my wife, an audiobook narrator, hundreds of hours by summarizing characters for her audiobooks so she doesn't have to read the entire book before she narrates the voices
I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too. Today it's quite amazing to have these tools at our disposal and as we add them in smart ways to systems that exist today, things will only get better.
Call me glass half full... but maybe it's because I don't live in Seattle
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread
Yep.
I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way.
The same way people love to think they’re cooler than the masses by hating [famous pop artist]. “But that’s not real music!” they cry.
And that’s fine. Frankly, most of my AI skeptic friends are missing out on a skill that’s helped me a fair bit in my day to day at work and casually. Their loss.
Like it or not, LLMs are here to stay. The same way social media boomed and was here to stay, the same way e-commerce boomed and was here to stay… there’s now a whole new vertical that didn’t exist before.
Of course there will be washouts over time as the hype subsides, but who cares? LLMs are still wicked cool to me.
I don’t even work in AI, I just think they’re fascinating. The same way it was fascinating to me when I made a computer say “Hello, world!” for the first time.
From the article:
> I wanted her take on Wanderfugl , the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
I grew up in Norway, and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
In Swedish the G wouldn't be silent so it wouldn't really be all that much like "wonderful"; "vanderfugel" is the closest thing I could come up with for how I'd pronounce it with some leniency.
Same in Danish FWIW.
In English, I’d pronounce it very similar to “wonderful”.
If OP dropped the g, it would be a MUCH better product name.
this would make it even closer to the dangerously similar travel planning app "wanderlog"
Solid advice. Seeing how many here would pronounce it differently, I totally agree hahah
The weird thing is that half of the uses of the name on that landing page spell it as "Wanderfull". All of the mock-up screencaps use it, and at the bottom with "Be one of the first people shaping Wanderfull" etc.
So even the creator can't decide what to call it!
AI probably generated all of that and the OP didn't even review its output.
Also, do it assuming different linguistic backgrounds. It could sound dramatically different to people who speak English as a second language, who are going to be a whole lot of your users, even if the application is in English.
If there is a g in there I will pronounce a g there. I have some standards and that is one. Pronouncing every single letter.
> Pronouncing every single letter.
Now I want to know how you pronounce words like: through, bivouac, and queue.
It's pronounced wanderfull in Norwegian
And how many of your users are going to have Nordic backgrounds?
I personally thought it was wander _fughel_ or something.
Let alone how difficult it is to remember how to spell it and look it up on Google.
The one current paying user of the app I've seen in this discussion called it "Wanderlog". FYI on the stickiness of the current name.
Just FYI, I would read it out loud in English as “wander fuggle”. I would assume most Americans would pronounce the ‘g’.
I thought ‘wanderfugl’ was a throwback to ~15 years ago when it was fashionable to use a word but leave out vowels for no reason, like Flickr/Tumblr/Scribd/Blendr.
And if you manage to say it outloud, say it to someone else and ask them to spell it. If they can’t spell it, they can’t type it into the url bar.
I think the more pressing advice here is, limit yourself to one name (https://wanderfugl.com/images/guides.png)
this must be one of the incredible AI innovations the folks in Seattle are missing out on
Maybe that's why they didn't go with the English cognate i.e. Wanderfowl, since being foul isn't great branding
What? You don't want travel tips from an itinerant swinger? Or for itinerant swingers?
Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do. But, enough of the people in tech have their future tied to AI that there are a lot of vocal boosters.
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).
Or, drive through Worth and Bridgeview in IL, where all the middle eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just, billboards obviously made with GenAI.
I think it's just not true that non-tech people are especially opposed to AI.
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what the people they are managing are actually building. Their job is to get a question from management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting to make sure it stays on track, not to understand the content. AI can do all that shit without a problem.
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
Frankly, tech deserves its bad reputation in SF (and worldwide, really).
One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.
I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.
I don't agree with any of this. I just think it's aggravating to live in a company town.
There's a long list of things that have "replaced" humans all the way back to the ox drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
it's plenty sane to be angry when the benefits of those technical innovations are not distributed equally.
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
The plough also made the rich richer, but in the long run the productivity gains it enabled drove improvements to common living standards.
Non-technical people that I know have rapidly embraced it as "better google where I don't have to do as much work to answer questions." This is in a non-work context so I don't know how much those people are using it to do their day job writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they just are like "oh, the internet got even better."
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.
At least, that's my wife's experience working on a contract with a state government at a big tech vendor.
EDIT: Removed part of my post that pissed people off for some reason. shrug
It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
The claim I was responding to implied that non-techies distinctively hate AI. You're a techie.
It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” situations. There are a lot of vocal boosters and vocal anti-boosters, but the general population is using it in a Google fashion and moving on. Not everyone is thinking about the AI apocalypse every day.
Personally, I’m in-between the opinions. I hate when I’m consuming AI-generated stuff, but can see the use for myself for work or asking bunch of not-so-important questions to get general idea of stuff.
> enough of the people in tech have their future tied to AI that there are lot of vocal boosters
That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.
Most of my FB contacts are not in tech. It is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly being pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI
I think this comment (and TFA) is really just painting with too broad of strokes. Of course there are going to be people in tech hubs that are very pro-AI, either because they are working with it directly and have had legitimately positive experiences or because they work with it and they begrudgingly see the writing on that wall for what it means for software professionals.
I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.
What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
Health and safety seems irrelevant to me. I complain about cars, I point out "obscure" facts like that they are a major cause of lung related health problems for innocent bystanders, I don't actually ride in cars on any regular basis, I use them less in fact than I use AI. There were people at the car's introduction who made all the points I would make today.
The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their life convenient, hopefully without attending the relevant funerals themselves.
> health and safety seems irrelevant to me
Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.
Do you think the industry will stop because of your concern? If for example, AI does what it says on the box but causes goiters for prompt jockeys do you think the industry will stop then or offshore the role of AI jockey?
It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to progress as measured in consumer convenience.
> Do you think the industry will stop because of your concern?
I’m not sure what this question is addressing. I didn’t say it needs to “stop” or the industry has to respond to me.
> It's lovely that you care about health,
1) you should care too, 2) drop the patronizing tone if you are actually serious about having a conversation.
From my PoV you are trolling with virtue signalling and thought-terminating memes. You don't want to discuss why every(?) technological introduction so far has ignored priorities such as your sentiments, and why any devil's advocate must be the devil.
The members of HN are actually a pretty strongly biased sample towards people who get the omelet when the eggs get broken.
I for one have no idea what you mean by health and safety with respect to AI. Do you have an OSHA concern?
I have an “enabling suicidal ideation” concern for starters.
To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque, but I’ll give you the benefit of the doubt and answer your question taken at face value: There have been plenty of high profile incidents in the news over the past year or two, as well as multiple behavioral health studies showing that we need to think critically about how these systems are deployed. If you are unable to find them I’ll locate them for you and link them, but I don’t want to get bogged down in “source wars.” So please look first (search “AI psychosis” to start) and then hit me up if you really can’t find anything.
I am not against the use of LLM’s, but like social media and other technologies before it, we need to actually think about the societal implications. We make this mistake time and time again.
All the AI companies are taking those concerns seriously, though. Every major chat service has guardrails in place that shut down sessions which appear to be violating such content restrictions.
If your concerns are things like AI psychosis, then I think it is fair to say that the tradeoffs are not yet clear enough to call this. There are benefits and bad consequences for every new technology. Some are a net positive on the balance, others are not. If we outlawed every new technology because someone, somewhere was hurt, nothing would ever be approved for general use.
Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, which of course has poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.
As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I do work with, but I keep catching mistakes in their code that they didn't use to make. Large suites of elegant-looking unit tests, but the unit tests include large amounts of code duplicating functionality of the test framework for no reason, and I've even seen unit tests that mock the actual function under test. New features that actually already exist with more sane APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed, but then their code isn't making it past code review. I worry about teams with less stringent code review cultures; modifying or improving these systems is going to be a major pain.
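For anyone who hasn't run into the mock-the-function-under-test pattern, here's a minimal sketch of what it looks like (Jest-style TypeScript; the function name and numbers are made up for illustration, not from any real codebase):

    import { describe, expect, jest, test } from "@jest/globals";

    // Hypothetical function "under test".
    function applyDiscount(price: number, pct: number): number {
      return price * (1 - pct / 100);
    }

    describe("applyDiscount", () => {
      test("applies a 10% discount", () => {
        // Anti-pattern: replace the function under test with a stub...
        const mocked = jest.fn(applyDiscount).mockReturnValue(90);

        // ...then assert against the stub. This passes even if applyDiscount
        // is completely broken, because the real implementation never runs.
        expect(mocked(100, 10)).toBe(90);
      });
    });

The test is green no matter what applyDiscount actually does, which is exactly why these suites look thorough in review but catch nothing.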
As someone on a team with a less stringent code review culture, AI generated code creates more work when used indiscriminately. Good enough to get approved but full of non-obvious errors that cause expensive rework which only gets prioritized once the shortcomings become painfully obvious (usually) months after the original work was “completed” and once the original author has forgotten the details, or worse, left the team entirely. Not to say AI generated code is not occasionally valuable, just not for anything that is intended to be correct and maintainable indefinitely by other developers. The real challenge is people using AI generated code as a mechanism to avoid fully understanding the problem that needs to be solved.
Exactly, it’s the non-obvious errors that are easy to miss—doubly so if you are just scanning the code. Those errors can create very hard-to-find bugs.
So between the debugging and the many times you need to reprompt and redo (if you bother at all, but then that adds debugging time), is any time actually saved?
I think the dust hasn’t settled yet because no one has shipped mostly AI generated code for a non-trivial application. They couldn’t have with its current state. So it’s still unknown whether building on incredibly shaky ground will actually work in real life (I personally doubt it).
> and I've even seen unit tests that mock the actual function under test.
Yup. AI is so fickle it'll do anything to accomplish the task. But AI is just a tool; it's all about what you allow it to do. Can't blame AI, really.
In fairness, I’ve seen humans make that mistake. We had a complete outage in the testing of a product once and a couple of tests were still green. Turns out they tested nothing and never had.
> In fairness I’ve seen humans make that mistake
These were (formerly) not the kinds of humans who regularly made these kinds of mistakes.
Leverage.
That slop already existed, but AI scales it by an order of magnitude.
I guess the same can be said of any technology, but AI is just a more powerful tool overall. Using languages as an example - let's say duck typing allowed a 10% productivity boost, but also introduced 5% more mistakes/problems. AI (claims to) allow a 10x productivity boost, but also ~10x the mistakes/problems.
If a tool makes it easy to shoot yourself in the foot, then it's not a good tool. See C++.
Most tools are dangerous in the hands of the inept or the careless. Don’t run with scissors.
I'm no apologist but this statement doesn't ring for me. It's easy to shock yourself with electricity, is it a bad tool?
Electricity isn't a tool, it's nature. An unenclosed electrical plug which you had to be really careful when handling would be a bad tool, yes.
A tool is something designed by humans. We don't get to design electricity, but we do get to design the systems we put in place around it.
Gravity isn't a tool, but stairs are, and there are good and bad stairs.
I've had Claude try to pull the same trick on me just yesterday. It will also try to cheat and apply a "fix" that just masks the real problem.
As another Seattle SWE, I'll go against the grain and say that I think AI is going to change the nature of the market for labor for SWE's and my guess would be for the negative. People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here. If you were to just judge by the sentiment on HN, you would think no coder worth their weight was using this in the real world—but my experience on a few teams over the last two years has been exactly the opposite—people are often embarrassed to admit it but they are using it all the time. There are many engineers at Meta that "no longer code" by hand and do literally all of their problem solving with AI.
I remember last year or even earlier this year feeling like the models had plateau'd and I was of the mindset that these tools would probably forever just augment SWEs without fully replacing them. But with Opus 4.5, gemini 3, et al., these models are incredibly powerful and more and more SWEs are leaning on them more and more—a trend that may slow down or speed up—but is never going to backslide. I think people that don't generally see this are fooling themselves.
Sure, there are problem areas—it misses stuff, there are subtle bugs, it's not good for every codebase, for every language, for every scenario. There is some sloppiness that is hard to catch. But this is true with humans too. Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better. And it doesn't need to be perfect to rapidly change the job market for SWE's—it's good enough to do enough of the tasks for enough mid-level SWEs at enough companies to reshape the market.
I'm sure I'll get downvoted to hell for this comment, but I think SWEs (and everyone else for that matter) would do best to practice some fiscal austerity amongst themselves, because I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial. I mean, they've made all of the progress up to now in essentially the last 5 years, and the models are already incredibly capable.
This has been exactly my mindset as well (another Seattle SWE/DS). The baseline capability has been improving and compounding, not getting worse. It'd actually be quite convenient if AI's capabilities stayed exactly where they are now; the real problems come if AI does work.
I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line. But I'm not sure what's left if white collar work and creative work are automated en masse for "efficiency's" sake. Most folks like feeling like they're contributing towards something, despite some people who would rather do nothing.
To me it is clear that this is going to have negative effects on SWE and DS labor, and I'm unsure if I'll have a career in 5 years despite being a senior with a great track record. So, agreed. Save what you can.
> the real problems come if AI does work.
Exactly. For example, what happens to open source projects where developers don't have access to the latest proprietary dev tools? Or, what happens to projects like Spring if AI tools can generate framework code from scratch? I've seen maven builds on Java projects that pull in hundreds or even thousands of libraries. 99% of that code is never even used.
The real changes to jobs will be driven by considerations like these. Not saying this will happen but you can't rule it out either.
edit: Added last sentence.
> It'd actually be quite convenient if AI's capabilities stayed exactly where they are now
That's what I'm crossing my fingers for: it makes our job easier but doesn't degrade our worth. It's the best possible outcome for devs.
I keep getting blown away by AI (specifically Claude Code with the latest models). What it does is literally science fiction. If you told someone 5 years ago that AI could find and fix a bug in some complex code with almost zero human intervention, nobody would believe you, but this is the reality today. It can find bugs, it can fix bugs, it can refactor code, it can write code. Yes, not perfect, but with a well-organized code base and careful prompting, it rivals humans in many tasks (and certainly outperforms them in some aspects).
As you're also saying this is the worst it will ever be. There is only one direction, the question is the acceleration/velocity.
Where I'm not sure I agree is with the perception this automatically means we're all going to be out of a job. It's possible there would be more software engineering jobs. It's not clear. Someone still has to catch the bad approaches, the big mistakes, etc. There is going to be a lot more software produced with these tools than ever.
> Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better.
This is the ultimate hypester’s motte to retreat to whenever the bailey of claimed utility of a technology falls. It’s trivially true of literally any technology, but also completely meaningless on its own.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here.
I sure hope so. But until the hallucination problem is solved, there's still going to be a lot of toxic waste generated. We have got to get AI systems which know when they don't know something and don't try to fake it.
> code generation today is the worst that it ever will be, and it's only going to improve from here.
I'm also of the mindset that even if this is not true, that is, even if the current state of LLMs is the best it will ever be, AI would still be helpful. It is already great at writing self-contained scripts, and efficiency with large codebases has already improved.
> I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial.
Yes, this is worrisome. Though it's ironic that almost every serious software engineer, at some point early in their childhood or career when programming was more for fun than work, thought of how cool it would be for a computer program to write a computer program. And now that we have the capability in front of our eyes, we're afraid of it.
But one thing humans are really good at is adaptability. We adapt to circumstances and situations, good or bad. Even if the worst happens and people lose jobs, in the short term it will be negatively impactful for their families; however, over a period of time, humans will adapt to the situation, adapt to coexist with AI, and find the next endeavour to conquer.
Rejecting AI is not the solution. Using it as any other tool, is. A tool that, if used correctly, by the right person, can indeed produce faster results.
I mean, some are good at adaptability, while others get completely left in the dust. Look at the rust belt: jobs have left, and everyone there is desperate for a handout. Trump is busy trying to engineer a recession in the US—when recessions happen, companies at the margin go belly-up and the fat is trimmed from the workforce. With the inroads that AI is making into the workforce, it could be the first restructuring where we see massive losses in jobs.
I think whether you are right or wrong it makes sense to hedge your bets. I suspect many people here are feeling some sense of fear (career, future implications, etc); I certainly do on some of these points and I think that's a rational response to be aware of the risk of the future unknown.
In general I think: if I was not personally invested in this situation (i.e. another man on the street), what would be my immediate reaction to this? Would I still become a software engineer, as an example? Even if it doesn't come to pass, given what I know now, would I take that bet with my life/career?
I think if people were honest with themselves sadly the answer for many would probably be "no". Most other professions wouldn't do this to themselves either; SWE is quite unique in this regard.
> I mean, they've made all of the progress up to now in essentially the last 5 years
I have to challenge this one: the research on natural language generation and machine learning dates back to the 50s; it just only recently came together at scale in a way that became useful. Tons of the hardest progress was made over many decades, and very little innovation happened in the last 5 years. The innovation has mostly been bigger scale, better data, minor architectural tweaks, and reinforcement learning with human feedback and other such fine-tuning.
We're definitely in the territory of splitting hairs; but I think most of what people call modern AI is the result of the transformer paper. Of course this was built off the back of decades of research.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be
I've been reading this since 2023 and yet it hasn't really improved all that much. The same things are still problems that were problems back then. And if anything the improvement is slowing down, not speeding up.
I suspect unless we have real AGI we won't have human-level coding from AIs.
It has improved drastically, as evidenced by the kinds of issues these things can resolve with minimal supervision now.
I've interfaced with some AI-generated code, and after several examples of finding subtle yet very wrong bugs, I now find that I digest code I suspect comes from AI (or an AI-loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in any kind of care for quality or due diligence from some coworkers.
I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?
Your coworkers were probably writing subtle bugs before AI too.
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Easier to skim 1000 flies from a single drum than 100 flies from 100 bowls of soup.
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
… while not having a real distinction between flies and non-fly ingredients.
No, I think it would be far easier to pick 100 flies each from a single bowl of soup than to pick all 1000 flies out of a 50 gallon drum.
You don’t get to fix bugs in code by simply pouring it through a filter.
I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.
Anecdote: In the 2 months after my org pushed copilot down to everyone the number of warnings in the codebase of our main project went from 2 to 65. I eventually cleaned those up and created a github action that rejects any PR if it emits new warnings, but it created a lot of pushback initially.
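For context, the check itself doesn't have to be fancy. Here is a simplified sketch of the kind of script such an action could run against the build output (the log path, regex, and baseline number are placeholders, not my actual setup):

    // check-warnings.ts - simplified sketch of a warning gate for CI.
    import { readFileSync } from "node:fs";

    const BASELINE = 2; // pre-existing warnings we still tolerate
    const logPath = process.argv[2] ?? "build.log";

    // Count lines in the build log that look like compiler warnings.
    const warningCount = readFileSync(logPath, "utf8")
      .split("\n")
      .filter((line) => /\bwarning\b/i.test(line)).length;

    if (warningCount > BASELINE) {
      console.error(`Build emitted ${warningCount} warnings (baseline ${BASELINE}); failing the PR check.`);
      process.exit(1);
    }
    console.log(`Build emitted ${warningCount} warnings (baseline ${BASELINE}); OK.`);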
Then, when you've taken an hour to be the first person to understand how their code works from top to bottom and to point out obvious bugs, problems and design improvements (no, I don't think this component needs 8 useEffects added to it that deal exclusively with global state only relevant 2 layers down, effectively treating React components like an event handling system for data; don't believe people who tell you LLMs are good at React: if you see a useEffect with an obvious LLM comment above it, it's likely to be buggy or unnecessary; there's a sketch of the pattern below), your questions about it are answered with an immediate flurry of commits and it's back to square one.
Who are we speeding up, exactly?
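For readers who haven't seen the pattern: a minimal sketch of the kind of unnecessary useEffect I keep flagging (component and prop names are made up for illustration):

    import { useEffect, useState } from "react";

    // The pattern: useEffect used as an event handler for data, mirroring
    // props into state instead of deriving the value during render.
    function OrderTotal({ amount, taxRate }: { amount: number; taxRate: number }) {
      const [total, setTotal] = useState(0);

      // Unnecessary: adds an extra render pass and a place for stale-state
      // bugs to hide whenever the dependency array is wrong.
      useEffect(() => {
        setTotal(amount * (1 + taxRate));
      }, [amount, taxRate]);

      return <span>{total.toFixed(2)}</span>;
    }

    // Simpler and harder to get wrong: just derive the value during render.
    function OrderTotalDerived({ amount, taxRate }: { amount: number; taxRate: number }) {
      return <span>{(amount * (1 + taxRate)).toFixed(2)}</span>;
    }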
Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.
It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.
I wonder if there’s a way to measure the cost of such code and associate it with the individuals incurring it. Unless this shows on reports, managers will continue believing LLMs are magic time saving machines writing perfect code.
Pretty much. Someone on our team put out a code review for some new feature and then bounced for a 2 week vacation. One of our junior engineers approved it. Despite the fact that it was in a section of dead code that wasn’t supposed to even be enabled yet, it managed to break our test environment. It took the senior engineers a day to figure out how that was even possible before reverting. We had another couple engineers take a look to see what needs to be done to fix the bug. All of them came away with the conclusion that it was 1,000 lines of pure AI-generated slop with no redeemable value. Trying to fix it would take more work than just re-implementing from scratch.
> One of our junior engineers approved it.
pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
so your process seems inadequate to me, agents or not.
also, was it tagged as generated? that seems like an obvious safety feature. As a junior, I might be thinking: 'my senior colleague sure knows lots of this stuff', but all it would take to dispel my illusion is an agent tag on the PR.
> pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
Yeah that’s what I think we need to enforce. To answer your question, it was not tagged as AI generated. Frankly, I think we should ban AI-generated code outright, though labeling it as such would be a good compromise.
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
If AI replaces software engineers, people outside tech don't have much chance of surviving it either.
Exactly. I think it's pretty clear that software engineering is an "intelligence complete" problem. If you can automatically solve SWE, then you can automatically solve pretty much all knowledge work.
A lot of modern corporate work is bullshit work.
I don't think it is too outrageous to believe that LLMs can do a lot of what all those armies of corporate bureaucrats do.
The difference is that unlike SWEs, the people doing all that bullshit work are much better at networking, so they will (collectively) find a reason why they shouldn't be replaced with AI and push it through.
SWEs could do so as well, if only we were unionized.
I see it like the hype around js/node and whatever module tech was glued to it when it was new, from the perspective of someone who didn't write JS. The sum of F's given is still zero.
People hate what the corporations want AI to be and people hate when AI is used the way corporations seem to think it should be used, because the executives at these companies have no taste and no vision for the future of being human. And that is what people think of when they hear “AI”.
I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
Ex-Google here; there are many people both current and past-Google that feel the same way as the composite coworker in the linked post.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)
My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this and assumed that, internally, Google would be one of the first places to adopt these tools.
I've generally found an inverse correlation between "understands AI" and "exuberance for AI".
I'm the only person at my current company who has experience at multiple AI companies (the rest have never worked on it in a production environment; one of our projects is literally something I got paid to deliver to customers at another startup), who has written professionally about the topic, and who has worked directly with some big names in the space. Unsurprisingly, I have nothing to do with any of our AI efforts.
One of the members of our leadership team, who I don't believe understands matrix multiplication, genuinely believes he's about to transcend human identity by merging with AI. He's publicly discussed how hard it is to maintain friendship with normal humans who can't keep up.
Now I absolutely think AI is useful, but these people don't want AI to be useful; they want it to be something that anyone who understands it knows it can't be.
It's getting to the point where I genuinely feel I'm witnessing some sort of mass hysteria event. I keep getting introduced to people who have almost no understanding of the fundamentals of how LLMs work and yet have the most radically fantastic ideas about what they are capable of, on a level unlike anything I've experienced in my fairly long technical career.
Personally, I don't understand how LLMs work. I know some ML math and certainly could learn, and probably will, soon.
But my opinions about what LLMs can do are based on... what LLMs can do. What I can see them doing. With my eyes.
The right answer to the question "What can LLMs do?" is... looking... at what LLMs can do.
I'm sure you're already familiar with the ELIZA effect [0], but you should be a bit skeptical of what you are seeing with your eyes, especially when it comes to language. Humans are remarkably easy to trick with language.
You should be doubly skeptical now that RLHF has become standard, since the model has literally been optimized to give you the answers you find most pleasing.
The best way to measure of course is with evaluations, and I have done professional LLM model evaluation work for about 2 years. I've seen (and written) tons of evals and they both impress me and inform my skepticism about the limitations of LLMs. I've also seen countless times where people are convinced "with their eyes" they've found a prompt trick that improves the results, only to be shown that this doesn't pan out when run on a full eval suite.
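For anyone who hasn't done eval work, the core loop is dead simple; the hard part is the discipline of running every prompt change against the whole suite. A minimal sketch (the model call and the exact-match scoring are hypothetical stand-ins):

    type EvalCase = { input: string; expected: string };

    // Score one prompt variant over the full eval set instead of eyeballing one example.
    async function runEval(
      promptVariant: string,
      cases: EvalCase[],
      callModel: (prompt: string, input: string) => Promise<string> // stand-in for your model client
    ): Promise<number> {
      let correct = 0;
      for (const c of cases) {
        const output = await callModel(promptVariant, c.input);
        if (output.trim() === c.expected.trim()) correct++; // naive exact-match scoring
      }
      return correct / cases.length; // accuracy over the whole suite, not a single anecdote
    }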
As an aside: What's fascinating is that it seems our visual system is much more skeptical, an eyeball being slightly off created by a diffusion model will immediately set off alarms where enough clever word play from an LLM will make us drop our guard.
Interesting observation about the visual system. Truth be told, we get visual feedback about the world at a much higher data rate, AND the visual signal is usually much more highly correlated with reality, whereas language is more of a byproduct of cognition and communication.
We get around this a bit when using it to write code since we have unit tests and can verify that it's making correct changes and adhering to an architecture. It has truly become much more capable in the last year. This technology is so flexible that it can be used in ways no eval will ever touch and still perform well. You can't just rely on what the labs say about it, you have to USE it.
Spot on in my experience.
I work in a space where I get to build and optimise AI tools for my own and my team's use pretty much daily. As such I focus mainly on AI'ing the crap out of boring & time-consuming stuff that doesn't interest any of us any more, and luckily enough there's a whole lot of low hanging fruit in that space where AI is a genuine time, cost and sanity saver.
However any activity that requires directed conscious thought and decision making where the end state isn't clearly definable up front tends to be really difficult for AI. So much of that work relies on a level of intuition and knowledge that is very hard to explain to a layman - let alone eidetic idiots like most AIs.
One example is trying to get AI to identify security IT incidents in real time and take proactive action. Skilled practitioners can fairly easily use AI to detect anomalous events in near real time, but getting AI to take the next step to work out which combinations of "anomalous" activities equate to "likely security incident" is much harder. A reasonably competent human can usually do that relatively quickly, but often can't explain how they do it.
Working out what action is appropriate once the "likely security incident" has been identified is another task that a reasonably competent human can do, but where AIs are hopeless. In most cases, a competent human is WAAAY better at identifying a reasonable way forward based on insufficient knowledge. In those cases, a good decision made quickly is preferable to a perfect decision made slowly, and humans understand this fairly intuitively.
> I've generally found an inverse correlation between "understands AI" and "exuberance for AI".
A few years ago I had this exact observation regarding self-driving cars. Non/semi-engineers who worked in the tech industry were very bullish about self-driving cars, believing every ETA spewed by Musk, while engineers were cautiously optimistic or pessimistic depending on their understanding of AI, LiDAR, etc.
This completely explains why so many engineers are skeptical of AI while so many managers embrace it: The engineers are the ones who understand it.
(BTW, if you're an engineer who thinks you don't understand AI or are not qualified to work on it, think again. It's just linear algebra, and linear algebra is not that hard. Once you spend a day studying it, you'll think "Is that all there is to it?" The only difficult part of AI is learning PyTorch, since all the AI papers are written in terms of Python nowadays instead of -- you know -- math.)
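As a concrete illustration of the "it's just linear algebra" claim: the mechanical core of a dense layer is a matrix-vector product plus a bias. A toy forward pass in plain TypeScript, no framework; this is only the mechanics, not the whole subject:

    // A dense layer: y_i = sum_j W[i][j] * x[j] + b[i]
    function linearForward(W: number[][], x: number[], b: number[]): number[] {
      return W.map((row, i) =>
        row.reduce((sum, w, j) => sum + w * x[j], b[i]) // accumulate the dot product, seeded with the bias
      );
    }

    // Example: a 2x3 weight matrix maps a 3-dim input to a 2-dim output.
    const y = linearForward(
      [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
      [1, 2, 3],
      [0.0, 0.1]
    );
    console.log(y); // roughly [1.4, 3.3], up to floating-point rounding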
I've been building neural net systems since the late 1980s. And yes they work and they do useful things when you have modern amounts of compute available, but they are not the second coming of $DEITY.
Linear algebra cannot be learned in a day. Maybe multiplying matrices when the dimensions allow, but there is far more to linear algebra than knowing how to multiply matrices. Knowing when and why is far more interesting. Knowing how to decompose them. Knowing what a non-singular matrix is and why it's special, and so on. Once you know what's covered in a basic lower-division linear algebra class, you can move on to linear programming and learn about cost functions and optimization, or numerical analysis. PyTorch is just a calculator. If I handed someone a TI-84 they wouldn't magically know how to bust out statistics on it…
> This completely explains why so many engineers are skeptical of AI while so many managers embrace it: The engineers are the ones who understand it.
Curiously some Feynman chap reported that several NASA engineers put the chance of the Challenger going kablooie—an untechnical term for rapid unscheduled deconstruction, which the Challenger had then just recently exhibited—at 1 in 200, or so, while the manager said, after some prevarications—"weaseled" is Feynman's term—that the chance was 1 in 100,000 with 100% confidence.
I mostly disagree with this. Lots of things correlate weakly with other things, often in confusing and overlapping ways. For instance, expertise can also correlate with resistance to change. Ego can correlate with protection of the status quo and dismissal of people who don't have the "right" credentials. Love of craft can correlate with distaste for automation of said craft (regardless of the effectiveness of the automation). Threat to personal financial stability can correlate with resistance (regardless of technical merit). Potential for personal profit can correlate with support (regardless of technical merit). Understanding neural nets can correlate both with exuberance and skepticism in slightly different populations.
Correlations are interesting but when examined only individually they are not nearly as meaningful as they might seem. Which one you latch onto as "the truth" probably says more about what tribe you value or want to be part of than anything fundamental about technology or society or people in general.
I think there is a correlation between what you can expect from something when you know its internals vs. when you don't, but it's not like the person who knows the internals is always much, much better.
Example: many people created websites without a clue of how they really work. And got millions of people on it. Or had crazy ideas to do things with them.
At the same time there are devs that know how internals work but can’t get 1 user.
pc manufacturers never were able to even imagine what random people were able to do with their pc.
This is to say that even if you know the internals, you can claim you know better, but that doesn't mean it's absolute.
Sometimes knowing the fundamentals is a limitation. It will limit your imagination.
I'm a big fan of the concept of 初心 (Japanese: shoshin, aka "beginner's mind" [0]) and largely agree with Suzuki's famous quote:
> “In the beginner’s mind there are many possibilities, but in the expert’s there are few”
Experts do tend to be limited in what they see as possible. But I don't think that allows carte blanche belief that a fancy Markov Chain will let you transcend humanity. I would argue one of the key concepts of "beginners mind" is not radical assurance in what's possible but unbounded curiosity and willingness to explore with an open mind. Right now we see this in the Stable Diffusion community: there are tons of people who also don't understand matrix multiplication that are doing incredible work through pure experimentation. There's a huge gap between "I wonder what will happen if I just mix these models together" and "we're just a few years from surrendering our will to AI". None of the people I'm concerned about have what I would consider an "open mind" about the topic of AI. They are sure of what they know and to disagree is to invite complete rejection. Hardly a principle of beginners mind.
Additionally:
> pc manufacturers never were able to even imagine what random people were able to do with their pc.
This betrays a deep ignorance of the history of personal computing. Honestly, I don't think modern computing has ever returned to the ambition of what was being dreamt up, by experts, at Xerox PARC. The demos on the Xerox Alto in the early 1970s are still ambitious in some senses. And, as much as I'm not a huge fan, Gates and Jobs absolutely had grand visions for what the PC would be.
I think this is what gets blunted by mass education and most textbooks. We need to rediscover it if we want to enjoy our profession despite all the signals flowing from social media about the great things other people are achieving. Staying stupid and hungry really helps.
I think this is more of a mechanistic-understanding vs. fundamental-insight kind of situation. The linear algebra picture is currently very mechanistic, since it only tells us what the computations are. There are research groups trying to go beyond that, but the insight from these efforts is currently very limited. However, the probabilistic view is much clearer. You can have many explorable insights, both potentially true and false, just by understanding the loss functions, what the model is sampling from, what the marginal or conditional distributions are, and so on. Generative AI models are beautiful at that level. It is truly mind-blowing that in 2025 we are able to sample from megapixel image distributions conditioned on NLP text prompts.
If that were true, then people could have predicted this AI many years ago.
If you dig through old ML/vision papers, you will see that, formulation-wise, they actually did, but they lacked the data, the compute, and the mechanistic machinery provided by the transformer architecture. The wheels of progress are slow and require many rotations to finally reach somewhere.
It's definitely interesting to look at people's mental models around AI.
I don't know shit about the math that makes it work, but my mental model is basically - "A LLM is an additional tool in my toolbox which performs summarization, classification and text transformation tasks for me imperfectly, but overall pretty well."
Probably lots of flaws in that model but I just try to think like an engineer who's attempting to get a job done and staying up to date on his tooling.
But as you say there are people who have been fooled by the "AI" angle of all this, and they think they're witnessing the birth of a machine god or something. The example that really makes me throw up my hands is r/MyBoyfriendIsAI where you have women agreeing to marry the LLM and other nonsense that is unfathomable to the mentally well.
There's always been a subset of humans who believe unimaginably stupid things, like that there's a guy in the sky who throws lightning bolts when he's angry, or whatever. The interesting (as in frightening) trend in modernity is that instead of these moron cults forming around natural phenomena we're increasingly forming them around things that are human made. Sometimes we form them around the state and human leaders, increasingly we're forming them around technologies, in line with Arthur C. Clarke's third law - that "Any sufficiently advanced technology is indistinguishable from magic."
If I sound harsh it's because I am, we don't want these moron cults to win, the outcome would be terrible, some high tech version of the Dark Ages. Yet at this moment we have business and political leaders and countless run-of-the-mill tech world grifters who are leaning into the moron cult version of AI rather than encouraging people to just see it as another tool in the box.
Google has good engineers. Generally I've noticed that the better someone is at coding, the more critical they are of AI-generated code. Which makes sense, honestly. It's easier to spot flaws the more expert you are. This doesn't mean they don't use AI-generated code, just that they are more careful about when and where.
Yes, because they're more likely to understand that the computer isn't this magical black box, and that just because we've made ELIZA marginally better, doesn't mean it's actually good. Anecdata, but the people I've seen be dazzled by AI the most are people with little to no programming experience. They're also the ones most likely to look on computer experts with disdain.
Well yeah. And because when an expert looks at the code chatgpt produces, the flaws are more obvious. It programs with the skill of the median programmer on GitHub. For beginners and people who do cookie cutter work, this can be incredible because it writes the same or better code they could write, fast and for free. But for experts, the code it produces is consistently worse than what we can do. At best my pride demands I fix all its flaws before shipping. More commonly, it’s a waste of time to ask it to help, and I need to code the solution from scratch myself anyway.
I use it for throwaway prototypes and demos. And whenever I’m thrust into a language I don’t know that well, or to help me debug weird issues outside my area of expertise. But when I go deep on a problem, it’s often worse than useless.
This is why AI is the perfect management Rorschach test.
To management (out of IC roles for long enough to lose their technical expertise), it looks perfect!
To ICs, the flaws are apparent!
So inevitably management greenlights new AI projects* and behaviors, and then everyone is in the 'This was my idea, so it can't fail' CYA scenario.
* Add in a dash of management consulting advice here, and note that management consultants' core product was already literally 'something that looks plausible enough to make execs spend money on it'
In my experience (with ChatGPT 5.1 as of late), the AI follows a problem->solution internal logic and doesn't stop to think about how to structure its code.
If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
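A hypothetical Express sketch of the difference (resource names and the data layer are made up): the duplicated shape the model tends to emit, versus the helper a dev would pull out.

    import express from "express";

    type Collection = { findById: (id: string) => Promise<unknown> };

    // Hypothetical in-memory stand-ins for a real data layer.
    const db: Record<"users" | "orders", Collection> = {
      users: { findById: async (id) => ({ id, kind: "user" }) },
      orders: { findById: async (id) => ({ id, kind: "order" }) },
    };

    const app = express();

    // What the model tends to produce: the same block repeated per resource, e.g.
    //   app.get("/users/:id", async (req, res) => {
    //     const user = await db.users.findById(req.params.id);
    //     if (!user) { res.status(404).json({ error: "not found" }); return; }
    //     res.json(user);
    //   });
    //   ...and the same thing again for /orders/:id, /products/:id, and so on.

    // What a dev usually does instead: pull the common shape into one helper.
    function getById(collection: Collection) {
      return async (req: express.Request, res: express.Response) => {
        const doc = await collection.findById(req.params.id);
        if (!doc) {
          res.status(404).json({ error: "not found" });
          return;
        }
        res.json(doc);
      };
    }

    app.get("/users/:id", getById(db.users));
    app.get("/orders/:id", getById(db.orders));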
I feel like the AI has a strong bias towards adding things, and not removing them. The most obviously wrong thing is with CSS - when I try to do some styling, it gets 90% of the way there, but there's almost always something that's not quite right.
Then I tell the AI to fix a style, since that div is getting clipped or not correctly centered etc.
It almost always keeps adding properties, and after 2-3 tries and an incredibly bloated style, I delete the thing and take a step back and think logically about how to properly lay this out with flexbox.
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
>
> A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
>
> I feel like the AI has a strong bias towards adding things, and not removing them.
I suspect this is because an LLM doesn't build a mental model of the code base like a dev does. It can decide to look at certain files, and maybe you can improve this by putting a broad architecture overview of a system in an agents.md file, I don't have much experience with that.
But for now, I'm finding it most useful to still think in terms of code architecture myself, give it small steps that are part of that architecture, and then iterate based on my own review of the AI-generated code. I don't have the confidence in it to just let some agent plan and then run for tens of minutes or even hours building out a feature. I want to be in the loop earlier to set the direction.
A good system prompt goes a long way with the latest models. Even just something as simple as "use DRY principles whenever possible." or prompting a plan-implement-evaluate cycle gets pretty good results, at least for tasks that are doing things that AI is well trained on like CRUD APIs.
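By "a good system prompt" I mean something as small as this; the exact wording is just an illustration, not a canonical prompt:

    const systemPrompt = [
      "You are a coding assistant working in an existing codebase.",
      "Use DRY principles whenever possible: pull repeated logic into shared helpers.",
      "Work in a plan-implement-evaluate cycle:",
      "1. Plan: outline the change and which files it touches.",
      "2. Implement: make the smallest change that satisfies the plan.",
      "3. Evaluate: re-read the diff and remove duplication or dead code before finishing.",
    ].join("\n");

    // It goes in as the first message of the conversation, e.g.:
    const messages = [
      { role: "system", content: systemPrompt },
      { role: "user", content: "Add the remaining CRUD endpoints for orders and products." },
    ];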
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
I don't think this is an inherent issue with the technology. Duplicate code detectors have been around for ages. Give an AI agent a tool which calls one, ask it to reduce duplication, and it will start refactoring.
Of course, there is a risk of going too far in the other direction: refactorings which technically reduce duplication but which have unacceptable costs (you can be too DRY). But some possible solutions: (a) ask it to judge whether the refactoring is worth it or not; if it judges no, just ignore the duplication and move on; (b) get a human to review the decision in (a); (c) if the AI repeatedly makes the wrong decision (according to the human), adjust with prompt engineering, or maybe even just some hardcoded heuristics.
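A sketch of what "give the agent a tool which calls one" could look like, using an OpenAI-style function-calling tool shape; the detector wrapper is a hypothetical stub for whatever copy-paste detector the project already runs (jscpd, PMD CPD, etc.):

    // Hypothetical wrapper around the project's duplicate-code detector, stubbed so the sketch is self-contained.
    async function detectDuplicates(paths: string[]): Promise<{ files: string[]; note: string }[]> {
      return [{ files: paths, note: "stub result" }];
    }

    // Tool description the agent could be given.
    const duplicateCheckTool = {
      type: "function",
      function: {
        name: "check_duplication",
        description:
          "Run the project's duplicate-code detector and return duplicated blocks with file/line ranges.",
        parameters: {
          type: "object",
          properties: {
            paths: {
              type: "array",
              items: { type: "string" },
              description: "Files or directories to scan",
            },
          },
          required: ["paths"],
        },
      },
    } as const;

    // When the model calls the tool, run the detector and feed the findings back,
    // so it can judge (per point (a) above) whether a refactor is actually worth it.
    async function handleDuplicateCheck(paths: string[]): Promise<string> {
      return JSON.stringify(await detectDuplicates(paths));
    }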
It actually is somewhat a limit of the technology. LLMs can't go back and modify their own output: later tokens always depend on earlier tokens, and they can't emit anything out of order. "Thinking" helps somewhat by allowing some iteration before they give the user actual output, but that requires them to write it the long way and THEN refactor it without being asked, which is both very expensive and something they have to recognize the user wants.
Coding agents can edit their own output, because their output is tool calls that read and write files: an agent can write a file, run some check on it, modify the file to try to make it pass, run the check again, and so on.
Sorry, but from where I sit, this only marginally closes the gap from AI to truly senior engineers.
Basically, human junior engineers start by writing code in a very procedural and literal style with duplicate logic all over the place, because that's the first step in adapting human intelligence to learning how to program. Then the programmer realizes this leads to things becoming unmaintainable, and so they start to learn abstraction techniques like functions. An LLM doesn't have to learn any of that, because it already knows all the languages and mechanical techniques in its corpus, so this beginning journey never applies.
But what the junior programmer has that the LLM doesn't is an innate, common-sense understanding of the human goals driving the creation of the code to begin with, and that serves them through their entire progression from junior to senior. As you point out, code can be "too DRY", but why? Senior engineers understand that DRYing up code is not a style issue; it's about maintainability, about understanding what is likely to change, and about what the apparent effects will be for the human stakeholders who depend on the software. Basically: do these things map to concepts that are actually the same for human users and are unlikely to diverge in the future? This is a surprisingly deep question, because perhaps every human stakeholder will swear up and down that they are the same, and yet 6 months from now a problem arises that requires them to diverge. At that point there is a cognitive overhead and dissonance in explaining that divergence to users who were heretofore perfectly satisfied with one domain concept.
Ultimately the value function for success of a specific code factoring style depends on a lot of implicit context and assumptions that are baked into the heads of various stakeholders for the specific use case and can change based on myriad outside factors that are not visible to an LLM. Senior engineers understand the map is not the territory, for LLMs there is no territory.
I’m not suggesting AIs can replace senior engineers (I don’t want to be replaced!)
But, senior engineers can supervise the AI, notice when it makes suboptimal decisions, intervene to address that somehow (by editing prompts or providing new tools)… and the idea is gradually the AI will do better.
Rather than replacing engineers with AIs, engineers can use AIs to deliver more in the same amount of time