Everyone in Seattle hates AI

jonready.com

784 points by mips_avatar 13 hours ago


p0w3n3d - 32 minutes ago

Someone on HN wrote what is (IMO) the main reason people do not accept AI:

  AI is about centralisation of power
So basically, only the few companies that hold the large models will have all the knowledge required to do things, and will rent you your own computer back while collecting monthly fees. Also see https://be-clippy.com/ for more arguments (like Adobe moving to the cloud to train its models on your work).

For me, AI is just a natural-language query model for text. If I need to find something in a text, join it with other knowledge, and so on (the things I'd do in SQL if SQL could process natural language), I do with an LLM. This enhances my work. However, other people seem to feel threatened. I know a person who quit a CS course because AI was solving the algorithmic exercises better than he could. This might cause widespread depression, as we are no longer at the "top". He went into medicine instead, where people will basically be using AI to diagnose patients and AI operators are still required (i.e. there is no threat of AI-driven layoffs in the public health service).
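A minimal sketch of the analogy above: the same "query" expressed once in SQL over structured rows, and once as a natural-language prompt for unstructured text. The `llm()` stub is hypothetical, a stand-in for whatever chat-completion client you actually use; only the SQL half runs as-is.

```python
import sqlite3

# Structured data: the SQL way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (author TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'Ship v2 by Friday')")
rows = conn.execute("SELECT body FROM notes WHERE author = 'alice'").fetchall()

# Unstructured text: the same query phrased for an LLM.
# llm() is a hypothetical placeholder, not a real API.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

documents = "alice: Ship v2 by Friday\nbob: Waiting on review"
prompt = (
    "From the notes below, return only what alice wrote, "
    "joined with any deadlines mentioned:\n" + documents
)
```

The point is not that the two are interchangeable, but that the LLM plays the role of a query engine when the data has no schema.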

So the world is changing, the power is being gathered, and it is no longer possible to "run your local cloud with OpenOffice and a mail server" to take that power back from the giants.

shepardrtc - 13 hours ago

Ok so a few thoughts as a former Seattleite:

1. You were a therapy session for her. Her negativity was about the layoffs.

2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.

3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates that the hype will be dead within a year. AI won't be dead, but throwing money at whatever Uber-for-pets-but-with-AI idea pops up won't happen.

4. I don't think people hate AI; they hate the hype.

Anyways, your app actually does sound interesting so I signed up for it.

assemblyman - 11 hours ago

I am not in Seattle. I do work in AI but have shifted more towards infrastructure.

I feel fatigued by AI. To be more precise, this fatigue has several factors. The first is that a lot of people around me get excited by events in the AI world that I find distracting: new FOSS library releases, announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things in a given interval of time. Ideally I would like to focus and go deep on those things. Often I need to learn something new, and that takes time, energy, and focus. This constant Brownian motion of ideas gives a sense of progress and of "keeping up" but, for me at least, acts as a constantly tapped brake.

Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive, and insightful when one actually thinks hard about and understands a topic before using them.

Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years, and new developments are easy to pick up. Even if they had, trying to keep up results in knowledge so shallow and broad that one can't actually use it. There are a million things going on, and I am completely at peace with not knowing most of them.

Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.

I am actually impressed with the models and use them heavily. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.

bccdee - 13 hours ago

> Engineers don't try because they think they can't.

This article assumes that AI is the centre of the universe, failing to understand that this very assumption is exactly what's causing the attitude it's pointing to.

There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.

I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.

So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

groos - 13 hours ago

It's not just that AI is being pushed onto employees by the tech giants (though that is true), but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing stems from a self-fulfilling prophecy: CEOs who have bet too much on AI eliminating jobs inside and outside the tech industry. Nearly everyone currently agrees that there is no return yet on all the money spent on AI. Some players may survive and do well in the future, but for the majority there is only the prospect of pain, and that is what all the negativity is about.

paxys - 12 hours ago

All big-corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their jobs harder. Seattle just happens to have a much larger percentage of big-tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF, this gloom is balanced by the wide-eyed optimism of employees at OpenAI, Anthropic, Nvidia, Google, etc., and the thousands of startups piggybacking off of them hoping to make it big.

r0m4n0 - 10 hours ago

I feel like there is an absurd amount of negative rhetoric in this comment thread about how AI doesn't have any real-world use cases.

I do believe that product leadership is shoehorning it into every nook and cranny of the world right now, and there are reasons to be annoyed by that, but there are also countless incredible, mind-blowing use cases that you can use it for every day.

I need to write about some absolutely life-changing scenarios, including:

- it drafted a legal letter quoting laws I knew nothing about, which got me thousands of dollars
- it saved me countless hours troubleshooting an RV electrical problem
- it found bugs in code I wrote that everyone around me had missed
- my wife was impressed with my seemingly custom week-long meal plan that fit her short-term no-soy/no-dairy allergy diet
- it helped me solve an issue with my house that a trained professional completely missed
- it completely designed and wrote the code for a Halloween robot decoration I had been trying to build for years
- it saves my wife, an audiobook narrator, hundreds of hours by summarizing characters for her audiobooks so she doesn't have to read an entire book before narrating the voices

I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too. Today it's quite amazing to have these tools at our disposal and as we add them in smart ways to systems that exist today, things will only get better.

Call me glass half full... but maybe it's because I don't live in Seattle

vunderba - 13 hours ago

From the article:

> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.

I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.

somekyle2 - 13 hours ago

Anecdotally, lots of people in SF tech hate AI too. _Most_ people outside of tech do. But enough people in tech have their futures tied to AI that there are a lot of vocal boosters.

sirreal14 - 13 hours ago

As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I work with, but I keep catching mistakes in their code that they didn't use to make. Large suites of elegant-looking unit tests, where the tests include large amounts of code duplicating functionality of the test framework for no reason, and I've even seen unit tests that mock the actual function under test. New features that duplicate ones that already exist with saner APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed, but then their code isn't making it past code review. I worry about teams with less stringent code-review cultures; modifying or improving these systems is going to be a major pain.
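To illustrate the "mock the function under test" anti-pattern mentioned above, here is a hypothetical sketch (names and values are illustrative, not from the source). The first test replaces the function with a mock, so it only verifies the mock's canned return value and passes no matter how broken the real code is; the second is what the test should look like.

```python
from unittest import mock

def apply_discount(price: float, percent: float) -> float:
    """Real implementation under test (could be arbitrarily broken)."""
    return price * (1 - percent / 100)

def test_apply_discount_mocked():
    # Anti-pattern: the function under test is swapped for a mock,
    # so the assertion checks the mock's canned value, never the code.
    fake = mock.Mock(return_value=90.0)
    assert fake(100.0, 10.0) == 90.0  # "passes" even if apply_discount is wrong

def test_apply_discount_real():
    # What the test should do: exercise the real function.
    assert apply_discount(100.0, 10.0) == 90.0
```

The mocked version provides coverage numbers and a green checkmark while testing nothing, which is exactly why it survives until someone reads it in review.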

thorum - 12 hours ago

People hate what the corporations want AI to be and people hate when AI is used the way corporations seem to think it should be used, because the executives at these companies have no taste and no vision for the future of being human. And that is what people think of when they hear “AI”.

I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.

pkasting - 13 hours ago

Ex-Googler here; many people, both current and former Googlers, feel the same way as the composite coworker in the linked post.

I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.

Accordingly, I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I were more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency.)