AI is a technology not a product
daringfireball.net
249 points by ch_sm 8 hours ago
Agreed.
The ideal implementation of AI for Apple is probably to finally make Siri work. This isn’t necessarily fancy: just let me set some calendar events without knowing the magic words, or tell it to open Overcast and play the new Gastropod episode. Better yet, for power users, let me set up reusable shortcuts using natural language.
The most important part of this is it doesn’t necessarily feel like AI. The user does not like AI for its own sake or the weirdos who ramble about putting them into a permanent underclass. The user likes messaging their friends and playing music.
Too much of this hype cycle has no user in mind.
Absolutely agreed. It feels like tech companies forgot that they are supposed to add value to users. They've been shoving random AI use cases down their users' throats with no regard for whether it works for the user's flow or not, when there's so much value to be had from AI in normal products. Claude Code is the best at this right now, probably because the engineers themselves are users.
This isn't unprecedented; it's what happened in the dotcom bubble as well. But then that tech started getting used properly too. So I think it's a matter of time before Claude Code levels of value are available to normal users.
>Absolutely agreed. It feels like tech companies forgot that they are supposed to add value to users.
They lost the plot long ago. They're firmly in extraction mode now: how much value can they get from end-users?
> When there's so much value to be had from AI in normal products.
Please elaborate
Fuzzy search
Reverse dictionary
Stack Overflow clone, except you're guaranteed to get an unreliable answer promptly instead of waiting for a human to give it
OCR, with new and exciting failure modes
Machine translation, with new and exciting failure modes
Endless possibilities for exploiting the stupid and ignorant while destroying the web in the process
Note that only the first two are unalloyed good, and they can be done with embeddings without generative AI.
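To make the fuzzy-search point concrete: approximate matching barely needs a model at all. Here is a minimal stdlib-only sketch in Python (the example titles are invented for illustration):

```python
from difflib import get_close_matches

# Toy fuzzy search: approximate string matching straight from Python's
# stdlib -- no embeddings, no generative model. The titles are made up.
titles = ["Gastropod", "Overcast", "Daring Fireball", "Hacker News"]

def fuzzy(query, n=3):
    # cutoff=0.6 discards weak matches; lower it for looser matching
    return get_close_matches(query.lower(), [t.lower() for t in titles], n=n, cutoff=0.6)

print(fuzzy("gastropd"))  # a typo still finds the right title: ['gastropod']
```

Embeddings become worth the extra machinery only when you want semantic rather than spelling-level similarity.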
Replace search, for one.
Proper search is far more powerful. I can set the weights for the results on various parameters, focusing on certain metadata in particular. It's sad to see popular search tools go stupid in recent years, but search is still very powerful, and IMO AI is no replacement for good search. An example of a powerful search engine is PubMed and the logic you can craft in your queries.
Right. It's deterministic, and determinism should be the goal. It's not metaphysical. Some users know what they want while others do not. The software we create (by any means) should give users who know what they want the tools to find it, and guide those who don't until they do. Software exists to help us create our fate. It surprises me how many people are willing to relinquish that control, or never wanted it, even within our ranks, by using AI to simplify experiences. IMHO, the optimization for most, but perhaps not all, tools is to introduce AI internally to refine, create, and expose more parameters, not fewer. Search is a perfect example of this.
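A deterministic, field-weighted ranker of the sort described above is only a few lines. This is a sketch, not any real search engine's scoring; the field names and weights are invented for illustration:

```python
# Toy deterministic search: the user, not a model, decides how much each
# metadata field counts. Field names and weights are made up.
def score(doc, query, weights):
    q = query.lower()
    return sum(w for field, w in weights.items() if q in doc.get(field, "").lower())

docs = [
    {"title": "Voice UIs reconsidered", "abstract": "assistants and friction"},
    {"title": "Search engines", "abstract": "crafting voice queries, PubMed-style"},
]
weights = {"title": 3.0, "abstract": 1.0}  # tweak these to re-rank, reproducibly
ranked = sorted(docs, key=lambda d: score(d, "voice", weights), reverse=True)
# a title hit (3.0) outranks an abstract hit (1.0), every time
```

The point is reproducibility: the same query and weights always yield the same ranking, which is exactly what a probabilistic model can't promise.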
People keep saying Apple is far behind its competitors on AI. If Apple had just waited on its Apple Intelligence announcements about Siri and other features, that would have been best. Right now Apple makes money off any AI subscriptions sold through the App Store, which is actually profitable, unlike the foundational AI companies, which are spending trillions to build a technology that everyone will have but that no one will expect to pay the cost of building.
> This isn’t necessary fancy, just let me set some calendar events without knowing the magic words or tell it to open Overcast and play the new Gastropod episode. Better yet, for power users, let me set up reusable shortcuts using natural language.
Isn’t this the proverbial “faster horse”? I.e., let me do exactly what I can do now, in a very slightly different, possibly very slightly more convenient way?
If the user asks for a faster horse and you sell them a car, you win.
If the user asks for a faster horse and you sell them a trebuchet, you lose, no matter how fast the trebuchet would technically get them to their destination.
A car is still a faster horse in the sense that you decide where to go with it. It doesn’t outsource decisions to somewhere else.
(Arguably the car affords you better control than an unruly horse. Self-driving cars are moving us closer to the horse again. ;))
The whole point of AI is that if something different happens, it's not you doing it.
Exactly. As a UX person who has watched the movie Her a few times, I feel something like it is the next UX evolution of the internet: talking and texting with AI, where any visuals that need to be seen appear on your iPhone's lock screen. Siri would, in the background, communicate with the AI agents of businesses, government organizations, and your friends and family to get things done for you, lessening the need to unlock your phone. With Apple creating AI AirPods, the iPhone's lock screen could just create and show the appropriate visuals and text.
As a UX/UI professional of 17 years, I think design is a dying field, and the above would kill digital UI design even quicker. Yet the UX would involve fewer steps and less friction to complete tasks, which is the hallmark of UX design: less is more.
On a side note, I'm now in medical school studying for a mid-level concentration. I don't foresee a long-term future in digital design and development anymore.
Now I’m picturing a dystopian movie. No one knows how to plan any events anymore; some event appears in their calendar, and they show up to find some people there matched to their profile. People get silenced from certain events and can’t get back in. It's like a personalized music playlist, but it’s your entire life. People forget how to organize and create original ideas, and any prospect of revolution becomes as likely as expecting a farmer's cattle to rise up.
Could be where things go… UX is all about less is more: fewer steps. Time will tell.
> ideal implementation of AI for Apple is probably to finally make Siri work
Wouldn't the simplest solution be to auction off Siri's back end the way Apple does Safari's search bar in iOS?
They're reportedly already doing that, AI services will be able to publish "Extensions" that Siri will use and then those services can compete amongst themselves to power it.
But this is contingent on Apple keeping those same services from replacing Siri outright and reserving its APIs for Apple's exclusive use, and they have a pretty tenuous grasp on that these days.
https://9to5mac.com/2026/05/05/ios-27-will-let-you-choose-be...
No, because what Siri does needs to be tightly integrated in ways that search does not.
Their continuing to bang on Siri hoping for a different outcome is insanity by definition. Voice is actually not a very good UI for most things: it isn't very private, it's prone to mistakenly thinking I'm talking to it, and it's bad for dense info and info organization. Siri should only be activated very, very deliberately, not with "Hey Siri," and don't make it act like Jarvis, because it will not have the smarts it needs in the near future.
Do people want that? I mean, I don’t think it can discern whether I say 15 or 50. Why would I leave it to chance that the AI properly grokked my message when, despite what I'm guessing is decades of work in the speech-to-text field, it is still pretty unreliable? Doing the task myself is trivial enough and 100% reliable.
In an ideal world in which LLMs behave as advertised, the idea is that you'd be delegating to the agent what you might otherwise be doing yourself. And just like when delegating to an intern who called you collect from a payphone, you may need to spell out "one-five" when you say 15 to make sure they don't hear 50.
The thing that kills me is that a lot of this was working back in the Newton days.
Can you expand on this? Having used two different Newton models, even squinting, I don’t understand what you’re getting at.
On my Newton MessagePad, one could write something such as "Lunch <name> Friday", interact with it using a stylus to activate "Newton Intelligence", and it would create a calendar event for the next Friday and attach the contact as a link.
I have a grander vision for an ideal Apple “AI”: anti-AI.
I’m picturing a combination of on-board facilities and online services from the Apple cloud that Apple product owners could use to flag and filter LLM slop. As a value-added prospect, iPhone users who read HN or used TikTok would see clear UI-level indications of when they’re interacting with slop, with options to kill it.
In my estimation it would provide platform benefits without losing capabilities, leverage Apple's hardware (rather than advertising) positioning, fix critical issues of spam and scams, and let them market a higher calibre of online experience. Also, they could un-eff Siri: “play album X starting at track Y”, come on, it’s 2026.
Steve already gave away the secret [1] (must watch) a long time ago:
"You have to work backwards from the customer experience."
AI was never going to be on Apple's roadmap in a significant way because it's in their DNA to differentiate technology from products.
"Working backwards" is also, famously, Amazon's philosophy. It's one of my most cherished takeaways from working there.
Having extracted the medicine from the poison, I am very glad that Amazon was my first corporate work experience. Many of the leadership principles and cultural norms there are actually very good ideas when not taken to extremes.
I remember the first meeting I went to at another company that was just a guy talking over a PowerPoint. I couldn’t believe we didn’t have the data or time to ask probing questions. We’re just supposed to take this guy at his word? Crazy.
AWS had great culture when I worked there, maybe they still do. The leadership principles are universal and I don't know of any other company that took their principles so seriously.
"You have to work backwards from the customer experience."
answer... ditch phone/screen, just have an earpod you talk to.
I mean, one of the last big things Jobs did was buy Siri and he reportedly saw it as the next big interaction model. Apple just messed it up by letting it get progressively worse ever since.
What was Siri?
A product built on government funded technology (CALO) [1].
SRI -> Siri Inc.
[1] https://www.sri.com/75-years-of-innovation/75-years-of-innov...
This is a similar argument to "Dropbox is a feature, not a product," and it definitely rings true in this instance too. I remember the litany of applications that only supported sync through Dropbox. It had no ecosystem; its saving grace was that no one else was yet operating a similar service at that scale.
All the major AI companies are trying to manufacture their own ecosystems to become less disposable. They'll get away with it for a while, but only insofar as hardware prevents advanced use. Once we get that hardware[1] there will only be two types of AI companies: hardware manufacturers, and labs. Just like sync became trivial and ancillary, so will AI inference.
And the differentiating factor on hardware will be the seamlessness of the interface, in software: the combination of voice, eye tracking, swiping, capture of intent, being able to mumble to myself at a volume only my device can hear. The hardware needs to be little more than something that gets out of the way and acts as an input device with a battery.
> Apple doesn’t have a social network business.
They don't have a social network business because they tried that and failed. [1]
AI seems to be a product if you're Anthropic (the seller) and any enterprise with a software team (the buyer).
I agree with Gruber's take, if the seller is Apple.
I totally agree - the phone as a form factor is not going away. People are always going to want to have a mobile communicator/computer, and want one with a screen and all-day battery life. The phone is not going to be replaced by smart glasses or some other wearable or screen-less pocket device.
It may well be that the user interface of your "phone", and how you use it, changes over time as we progress toward AGI, but as long as Apple keeps to the Jobs aesthetic of making well-designed products that get out of the way and just "do the thing", they should be fine. Of course Apple will eventually fall, as all companies do, but I don't think the reason for it will be that the "phone" market was rendered obsolete by AI.
Perhaps if phones become more of a "pocket assistant" than a device to run discrete apps, then they will become harder to differentiate based on software, and more of a generic item than a status/luxury one ... who knows? Anyone else have any theories of how Apple may eventually fall?
There is one potential AI risk to Apple: that they are at a disadvantage due to not having their own frontier models and datacenters to run them on. But I think there will always be someone willing to sell them API access, and they will adapt as needed. Good-enough AI is only going to get cheaper to train and serve, and Apple's not trying to compete in this area may well turn out to have been a great decision, just as Microsoft seems to be doing fine letting OpenAI take all the risk.
> the phone as a form factor is not going away
It's not going away in the next few years. Which means Apple doesn't have to rush to release an AI product for the sake of it à la Giannandrea.
That's really the point of the article. As long as the phone is the (or at least a significant) conduit for our use of AI technology, Apple is in a good spot, and it's the same spot where they have historically done very well.
I think the vision of a pocket assistant versus discrete apps is very much Apple. Remember, the original iPhone had no App Store, and the App Store is kind of a pain to deal with. If I had to bet, this starts with Apple pivoting Swift Playgrounds into a general Playground and releasing it across all devices. The programming language becomes invisible. The live canvas is the document.
The answer as always in these situations is to zoom out.
We are in the midst of a paradigm shift, and the perspective in the daring fireball post aligns exactly with this author’s perspective:
https://rebecca-powell.com/posts/return-on-intelligence-01-e...
Really enjoyed that article, thanks for the link. I agree there can be a bubble and a genuine paradigm shift at the same time. We're going through our first wave of attempts, more or less wrong, but the general direction is right, that the future will never be the same.
Agree with this article, and I almost threw up in my mouth when I read this quote from Steven Levy:
> By the end of this decade, it’s unlikely that people will swipe on their phones to tap on Uber or Lyft. They will just tell their always-on AI agent to get them home. Or that agent will have already figured out where they need to go, and the car will be waiting without the friction of a request. “There’s an app for that,” may be replaced by “Let the agent do that.”
Who TF are these people who think this kind of future is desirable? I basically think it's just people who want to broadcast that they're so important and busy that they can't take the 5 seconds it takes to hail an Uber. It's like all that "productivity optimization" porn that people spew online to show how focused they are.
I was reading an article recently that said a majority of people interviewed did not want to use AI agents simply because they didn't have much stuff in their lives worth automating. Or, more to the point, a lot of people actually enjoy making grocery lists, planning trips, picking out gifts for friends, etc. This stuff is generally considered "life", not some back-breaking drudgery like washing clothes in a stream that I'd like to automate.
These folks like Levy who view this dystopian future as some sort of nirvana (and not because they foresee a different future; they actually want all this nonsense) can go F themselves. You can also tell how incredibly sheltered these people are, because they're rarely interacting with people outside their bubble. For example, a lot of people who open the Uber app make their decision based on data in the app, like "surge pricing, never mind, I'll just walk" or "this looks expensive, let me try Lyft". You could argue an agent could learn all those rules, but again, these minutiae of life are not exactly a nuisance to most people.
I'd be more inclined to believe that an abundance of robotaxis will use predictive algorithms to preemptively show up wherever they're likely to be needed, allowing a UX where users can hail them like traditional taxis without an app. Maybe not in four years, but maybe in a decade or two.
That feels both more credible and more desirable than the magic panopticon predicted in the quote, and doesn't really depend on any major technological leaps beyond continued maturation and scaling of Waymo/alternatives.
That's just the experience any executive with a secretary or personal assistant is used to having.
If AI allows more people to have such a premium experience, that's a use of technology that makes a lot more sense than all the "AI will take over your job" scaremongering.
Are people in the habit of asking their admin to order a pizza or an Uber? There are more complex things (the floor, I think, is booking a flight that doesn’t conflict with activities I have to do), but by the time you summon your assistant, you could’ve already had the car on its way.
> [...] broadcast that they're so important and busy that they can't take the 5 seconds [...]
It takes a lot more than 5 seconds to make an informed decision these days. Apps and websites are throwing abusive fine print and dark patterns at users left and right.
I'd be absolutely thrilled to e.g. not have to interact with the Uber app and all its dark patterns if there were somebody or something I could trust to competently represent my interests.
That said, that's a big if, i.e., whether commercial LLMs or agents will be able to do that, given the overwhelming pressure to just take money from both sides of the transaction and skew the decision.
But if it does happen, I actually see this as a huge potential factor strengthening smaller suppliers directly competing with large platforms. If my agent can independently figure out if a given supplier is trustworthy, whether their terms and conditions are reasonable etc., I'd be much more willing to engage with them outside of a large platform.
> Who TF are these people who think this kind of future is desirable?
Some of this is weird techno delusion. Some of it is because the people describing it do a poor job of explaining how it might work.
If, a couple decades ago, someone had told you that you’d have an always-listening device in your pocket to answer your questions from all the world’s information, it would have sounded dumb, and, with the always-listening part, rather dystopian. But that’s what you have, assuming you have any modern smartphone.
The “agent knows where you’re going and calls a car for you” sounds dystopian as hell if done totally autonomously. But you could also imagine that an agent pops up a message on your watch “hey, you’ve been at dinner for an hour, if you’re winding down I can call you a car in 15 minutes” and suddenly it’s not that absurd.
Why are you crashing out about this? It's just an example of how AI agents can be a better user experience.
Anything is a product if you can sell it.
This is important to think through: does one have a product, tech, tool, or even just a feature? A given thing is not necessarily at the bottom of this stack, but also not always at the top.
Really depends on the company and who you're selling to. For a car company a tire is a feature, for other companies it's their product.
...in the same way that people used to just accept bulky laptops with terrible batteries, I think people today have become inured to just how annoying it is to get your phone in and out of your pocket. This is why phones get dropped and broken constantly. Phones suck, and I don't think they are the final form factor.
The final final form factor is probably a pair of glasses (or an implant), but I still think that's pretty far away. Before that can happen, we need computer chips and batteries to become almost microscopically small.
For the foreseeable future—still long term, but much closer than glasses—I think the logical form factor is a smartwatch. For photos, it would have an under-screen front-facing camera, and an outward facing camera on the wrist band. The screen would be a bit larger than today's largest Apple watches, and it would fold out like a folding phone when you need more space.
Even unfolded, the screen would have to be smaller than what we're currently used to on smartphones. However, this would be less important if most interaction was done via AI, just as limited-interaction iPods and Blackberries never commanded massive screens. People who want to watch movies, read longer books, or play games on larger screens could still carry folding tablets in their pockets on some occasions, but the watch would be the central device everyone always has.
Apple, of course, already makes smartwatches, arguably the best ones on the market. But an Apple Watch is very much not the device I'm describing, and I'm not sure if Apple will let it get there. Apple is stuck in the innovator's dilemma, where the iPhone prints so much money they can't afford to cannibalize it. For the moment, the iPhone has been so good that this hasn't caught up to them. I think—and for the sake of innovation, I hope—that this doesn't last forever.
all technologies are also products
For those who care, Gruber (the author of this blog) said the following about news of the genocide in Palestine:
Quote tweeting a NYTimes post detailing war crimes "As Israeli forces entered Gaza on Friday to fight Hamas, phone and internet service was severed for 34 hours. Most people in Gaza had no way to reach the outside world..."
Gruber wrote "F*k around and find out."
Quote-tweeting a post by the UN Human Rights account saying that Israel's flooding of tunnels with saltwater could have severe adverse human rights impacts,
Gruber wrote "One side is pumping salt water into the tunnels. The other side has put innocent civilian women and children hostages in the tunnels. Also: "salt water" has a space when used as a noun"
Quote-tweeting a post by a StopAntisemitism page about "pro-Palestinian agitators" showing up at Secretary of Defense Lloyd Austin's home,
Gruber wrote "These people are surely a lot of fun at parties"
Gruber is a big fan of collective punishment, it seems. But at least he's very specific about the use of grammar.
If capable humanoid robots are really closer than most people think, I'd be surprised if Apple isn't exploring them. That may be the counterexample to "AI is not a product": a physical AI product where hardware, sensors, UX, privacy, and integration matter as much as the model.
In that case the robot as a whole is the product and the model is just a part of the technology making it possible.
That’s the thing; the LLM itself - the chat window - can’t be the whole product for an industry. It’s a technology that you build things with.
Why is every consumer hardware company sleeping on AI? The best product is Openclaw and it is embarrassing.
Today I wanted to book a public transport ticket in Germany, but it was simply too hard to keep copy-pasting screenshots from the app to ChatGPT. This seems like a very easy problem to solve and standardise at the OS level, but no one seems to want to do it.
I agree it's not a totally different "product", but it does require some thought. Apple can't sleep on this.
Everybody wants to do it, but doing it in a way that's survivable to a company with a brand image to preserve and potential legal liability for the consequences is not nearly as easy.
It’s a lot of noise out there. That's the problem with these threads—everyone wants to sound profound, so they end up debating abstractions instead of building something that actually works. "AI is a political ideology" or "AI is a fascist artifact"—that’s just academic posturing. It’s a tool. A hammer can build a house or break a skull; the hammer doesn't have an opinion. The people using it do.

The person talking about Siri? That's the only one in that whole thread actually making sense. Everyone else is tripping over themselves to define "AI," but they're missing the point. If your device can't pull up the context for your dinner reservation, it doesn't matter if you have a thousand agents living in your pocket. It’s useless.

I’m tired of hearing about "AI products." We didn't build a "Microprocessor Product." We built a computer. The technology is the foundation, not the house.

I'm going to look at the state of the local models. If everyone is so worried about corporate bias and closed systems, the only answer is to make the tech small enough, efficient enough, and powerful enough that anyone can run it on their own hardware. Then we'll see who's still talking about politics.
I was honestly a bit intrigued to read that article, but it's written on a stack of weak arguments. For example:
>>technologies have built-in politics that stem from the political views and goals of the people building the technology.
First, it's not just technology that has built-in politics. It's everything; think of t-shirts, cups, and hats sold at political rallies. Second, how does this even hold up in the context of AI? Whom do you credit for building "AI"? Is it just the bunch of founders listed in the article? What about Geoffrey Hinton? What about Turing, or Shannon, or Leibniz?
Yea, in itself AI is just AI.
The practical implementation is what leads to the autocratic and/or fascist-like tendencies. LLMs in their current state take massive amounts of money, compute, and energy to make. Those resources, in large amounts, are typically managed by corporations or governments. Corporations are not democracies. Corporations also have liability considerations they have to work around. And they have to do all this without pissing off the government they operate under too much. So yes, this is almost always going to lead to a situation that is not individual-friendly. The implementation ends up opinionated because it must. There are only a small number of implementations, and the company has much less freedom in what it outputs than the average 'open all the freedom gates' idiot thinks.
Really the only solution here, if it's possible, is hoping that we can train LLMs/AI with far fewer resources in the future. If so, this could lead to a proliferation of different models optimized for different purposes. But we must remember that all models are biased, and this includes human brains. At the end of the day, both AI and brains are a map and not the territory. We are defined by what we filter out.
These kinds of posts mean nothing; it's just agitprop to signal ideological belonging. No epistemic value whatsoever.
another "ai is inherently evil" take coming from the "ai is inherently evil" blog.
I agree that specific implementations of a technology (Claude, Gemini, Qwen) are never neutral, but any tech itself (LLMs as a concept) is neutral; you can implement it any way you want. You could make an LLM trained on diverse data, tuned for anti-fascist opinions, using solar power and recycled hardware to be carbon-neutral. The reason nobody is really doing it is just good old wealth inequality. As long as only big corporations can afford to use and develop LLMs, or any other tech, it will be biased to benefit them. That's why it's so important to democratize it.
And for the open source part: the fact that it started as a libertarian movement doesn't mean it can't also be socialist. It goes against the capitalist norms of exclusive property rights (including IP) and profit at all costs. Sharing the product of your labor with everyone for free is one of the biggest things you can do to help; it's like the online equivalent of putting food in the community fridge.
Open LLMs let you fine-tune them to add the missing, under-represented perspectives. You can run them locally with zero climate impact, and analyze them in depth to reveal biases the devs never noticed or don't want you to see. None of that is possible with closed source. The right thing to do is not to avoid using AI at all costs, but to do everything you can to make it good. Your skills and hardware access are a privilege. Use them.
AI harbours evil because unskilled people tend to trust it blindly. People have already been evicted, arrested, and harassed by police simply because someone chose to trust technology that flags or "recognizes" them, with no proof that it can be trusted. This happens automatically. Thus, AI should be treated as potentially malicious, especially when it is sold as neutral.
Do computers harbor evil? The thing AI runs on? The thing that has facilitated all the bad things you mentioned with normal, boring algorithms?
Does electricity harbor evil?
GPT-3.5 is nearly 4 years old. What’s a non-coding use case enabled by LLMs that materially improves the average person’s life? For the sake of conversation, let’s say the average person is some random person in middle America.
To me there are cool things, but nothing so great that if LLMs were deleted I’d cry about it. By contrast, mRNA vaccines, gene therapy, and CRISPR seem more impactful in reality, just to mention things from 2020.
Access to a rational, imperfect, yet functional expert in lots of everyday subjects: personal finance, making decisions and plans, relationships, taboo questions, the first steps of a medical or legal opinion, general problem solving and breakdown...
Even considering that it’s sometimes wrong or hallucinating, it’s doing an important job by beginning to eliminate gate keeping, be it centered on cost or access.
I'm unconvinced. How do you trade this against the misinformation and scams that will be coming on an unprecedented scale? In any case, isn't the value there really human expertise and search? At least with GPT-5, using it without search will almost certainly give you wrong information on a variety of topics, so the value seems to be in search, which is old tech.
100%. I would be happier to have a small model that can run locally, capable of searching the web, than a standalone frontier model.
Can you not just do this with most local models nowadays? The Qwen series is quite capable.
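A sketch of what that might look like, assuming a local model served by Ollama on its default port. The endpoint is Ollama's `/api/generate`; the model name and the idea of stitching search snippets into the prompt are illustrative assumptions, not a prescribed setup:

```python
import json
import urllib.request

# Hypothetical local "search + small model" loop. Assumes an Ollama server
# on localhost:11434; "qwen2.5:3b" is a placeholder model name. The snippets
# would come from whatever web-search step you bolt on in front.
def build_prompt(question, snippets):
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Answer using only these search results:\n{context}\n\nQuestion: {question}"

def ask(question, snippets, model="qwen2.5:3b"):
    body = json.dumps({
        "model": model,
        "prompt": build_prompt(question, snippets),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The model stays small because the search step supplies the facts; the model only has to read and summarize them.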
Apple's problem might be they were right too early which is sometimes worse than being wrong. The original vision of Siri was substantively correct in how AI would supercharge our phones but huge parts of the vision got forgotten when Siri was acquired by Apple and the original founders left. The original technical choices around Siri constrained it from evolving into something useful.
A funny story that happened the other day: A friend knew he had to be at dinner at a place across town but he forgot why he had to be at that dinner. While we were waiting for his rideshare to come, he was flipping through every kind of app trying to reconstruct the original context for his appointment.
In theory, this is where AI should shine. He should have been able to say "Hey Siri, pull up all of the info that references tonight's dinner appointment" and AI should be the unified interface into a bunch of app-specific data pools.
But of course he's never in 1 million years would have thought about using Siri to do that because of how bad Siri is.
Translation. If said random person is interested in any media from non-English-speaking countries: anime, manhwa, cultivation web novels.
But you specified America, so I guess no.
Translation existed before LLMs, though, in hundreds of languages. Google Translate came out in 2006.
> What’s a non coding use case that’s enabled with LLMs that materially improves the average person’s life?
Coding adjacent, but my small town's small businesses have all dramatically improved their websites with LLMs. Folks who didn't have them before can now build them. Folks who had to rely on a web designer no longer have to.
Was it really that difficult to build a generic website with a template before? Using an LLM instead of a template seems like ridiculous overkill, IMHO, but thanks for the anecdote.
> Was it really that difficult to build a generic website with a template before?
Yes. Code looks intimidating if you aren't used to it (and don't have an IDE). And there are lots of steps between having a file of code and having a hosted website.
I don’t see how an LLM solves this. It’s not like an LLM hosts the website. Sites like Squarespace and WordPress let you modify your site without ever seeing code; they have graphical editors that you can stay in if you wish. I agree LLMs help, though, if you use a product.
I know how to set up a static HTML site in about 15 minutes. Building a website to host there usually takes me the better part of a weekend, and usually ends up looking absolutely terrible.
I think this really gets at it: people are so terrified of not knowing what to do, of not knowing whether their solution is "good," that they'll pay a monthly fee for a machine to tell them it's ok, ironically bypassing human judgment in the end. Drudgery or judgery, those are the two task contexts in which AI products* excel.
* It's lovely to have the opportunity to disagree with both Gruber and the "the whole thing smacks of politics" HN commentariat, pulled daily between "it's just a tool, like a hammer, which also kills people, stay with me here" and "AI puts an expert in your pocket; soon, the expert will live in your eyes"
You can't easily articulate the way in which mRNA vaccines were made possible by the internet. But the internet definitely played an important part.
The internet:
- made the communication possible; all the information diffusion happened only because of the internet
- enabled all sorts of small interactions and serendipitous communication through social media
- made possible the computation and simulation required
Sometimes things make other things possible in subtle but real ways that are overdetermined. You can't articulate how AI will help a person materially in first-order effects. But it will.