Thousands of CEOs just admitted AI had no impact on employment or productivity

fortune.com

326 points by virgildotcodes 5 hours ago


crazygringo - 4 hours ago

Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].

The paradox is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show a clear benefit to the economy overall.

The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.

And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and right now the gains for the people it helps are outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days: productivity will rise and costs will come down as we learn the best practices for integrating it.

[1] https://en.wikipedia.org/wiki/Productivity_paradox

tabs_or_spaces - 18 minutes ago

My experience has been

* If I don't know how to do something, LLMs can get me started really fast. Basically they compress the time it takes to research something down to a fraction.

* If I know something well, I find myself trying to guide the LLM to make the best decisions. I haven't reached the state of completely letting go and trusting the LLM yet, because it doesn't make good long-term decisions.

* When working alone, I see the biggest productivity boost from AI; that's where I can really get things done.

* When working in a team, LLMs are not useful at all and can sometimes be a bottleneck. Not everyone uses LLMs the same way; sharing context as a team is way harder than it should be. People don't want to collaborate. People can't communicate properly.

* So for me, solo engineers or really small teams benefit the most from LLMs. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.

Herring - 4 hours ago

My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast; we just waste time in meetings or waiting for approvals, a lot of tasks can't be parallelized, etc. Before upgrading, you need to know whether you're I/O bound or CPU bound.
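To make the analogy concrete, here is a toy Amdahl's-law calculation (every number below is invented for illustration): if most of the org's wall-clock time goes to coordination rather than individual work, making the individuals 10x faster barely moves overall throughput.

    # Amdahl's law: overall speedup when only part of the work
    # benefits from faster "CPUs" (the individual contributors).
    def amdahl_speedup(parallel_fraction: float, worker_speedup: float) -> float:
        serial = 1.0 - parallel_fraction  # meetings, approvals, handoffs
        return 1.0 / (serial + parallel_fraction / worker_speedup)

    print(amdahl_speedup(0.3, 10.0))  # ~1.37x: coordination-bound org, 10x faster workers
    print(amdahl_speedup(0.9, 10.0))  # ~5.26x: mostly parallelizable work, same 10x workers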

lukaslalinsky - 20 minutes ago

There was a recent post where someone said AI allows them to start and finish projects, and I find that exactly right. AI agents are helpful for starting proofs of concept, and for doing the finishing fixes on an established codebase. For a lot of the work in the middle they can still be useful, but the developer is more important there.

sebmellen - 4 hours ago

The thing with a lot of white-collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger share of the overall time from thought to completion.

Other white-collar business/bullshit-job (à la Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.

Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.

What’s different about coding is no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.

But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.

chrismarlow9 - 3 hours ago

The slow part as a senior engineer has never been actually writing the code. It has been:

- reviews for code

- asking stakeholders opinions

- SDLC latency (things taking forever to test)

- tickets

- documentation/diagrams

- presentations

Many of these require review, and the review hell doesn't magically stop at open-source projects. These things happen internally too.

bubblewand - 4 hours ago

My company’s behind the curve, just got nudged today that I should make sure my AI use numbers aren’t low enough to stand out or I may have a bad time. Reckon we’re minimum six months from “oh whoops that was a waste of money”, maybe even a year. (Unless the AI market very publicly crashes first)

n_u - 4 hours ago

Original paper https://www.nber.org/system/files/working_papers/w34836/w348...

Figure A6 on page 45: Current and expected AI adoption by industry

Figure A11 on page 51: Realised and expected impacts of AI on employment by industry

Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry

These seem to roughly line up with my expectation that the more customer-facing or physical your industry's product is, the lower the usage and impact of AI (e.g. construction, retail).

A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.

DaedalusII - 5 hours ago

If you include Microsoft Copilot trials at Fortune 500s, absolutely. A lot of major listed companies are still oblivious to the functionality of AI; their senior management doesn't even use it, out of laziness.

J_Shelby_J - 3 hours ago

It’s simple calculus for business leaders: admit they’re laying off workers because the fundamentals are bad and spook investors, admit they’re laying off workers because the economy is bad and anger the administration, or just say it’s AI making roles unnecessary and hope for the best.

ahepp - 3 hours ago

I read an article in the FT just a couple of days ago claiming that increased productivity is becoming visible in economic data:

> My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.

good for 3 clicks: https://giftarticle.ft.com/giftarticle/actions/redeem/97861f...

virgildotcodes - 5 hours ago

https://archive.is/L70Ha

trappist - 11 minutes ago

"Admitted" as the verb in a statement like this is blatant editorialization. Did they just finally "admit" what they had been reluctant to reveal? No doubt with their heads hung in shame?

Maybe this bothers me more than it should.

matt3210 - an hour ago

I’m not sure about this. I’ve been 100% AI since Jan 1 and I’m way more productive at producing code.

The non-code parts (about 90% of the work) are taking the same amount of time, though.

transcriptase - 41 minutes ago

Mentioning AI in an earnings call means fuck all when what they’re actually referring to is toggling on the permissions for borderline-useless Copilot features across their enterprise 365 deployments, or being convinced to buy some tool that’s actually just a wrapper around API calls to a cheap, outdated OpenAI model with a hidden system prompt.

Yeah, if your Fortune 500 workplace is claiming to be leveraging AI because it has a few dozen relatively tech-illiterate employees using it to write their em-dash-and-emoji-riddled emails about wellness sessions and Teams invites for trivia events… there’s not going to be a noticeable uptick in productivity.

The real productivity comes from tooling that no sufficiently risk-averse pubco IS department is going to let their employees use, because when all of their incentives point to saying no to installing anything, ever, the idea of granting the permissions required for agentic AI to do anything useful is a non-starter.

jurschreuder - 2 hours ago

Workers may see the LLM as a productivity boost because they can basically cheat at their homework.

As a CEO I see it as a massive clog of content that somebody will need to check. A DDoS of any text-based system.

The other day I got a 155-page document on WhatsApp. Thanks. Same with pull requests. Who will check all this?

beloch - 4 hours ago

The article suggests that AI-related productivity gains could follow a J-curve: an initial decline, as happened with IT, followed by an exponential surge. The authors admit this depends heavily on the real value AI provides.

However, there's another factor. The J-curve for IT happened in a different era. No matter when you jumped on the bandwagon, things just kept getting faster, easier, and cheaper; Moore's law was relentless. The exponential growth phase of the J-curve for AI, if there is one, is going to be heavily damped by the enshittification phase of the winning AI companies. They are currently incurring massive debt to gain an edge on their competition, and whatever companies are left standing in a couple of years are going to have to raise the funds to service and pay back that debt. The investment required to compete in AI is so massive that cheaper competition may not arise, and a small number of winners (or a single one) could put anyone dependent on AI into a financial bind. Will growth really be exponential if this happens and the benefits aren't clearly worth it?

The best possible outcome may be for the bubble to pop, the current batch of AI companies to go bankrupt, and for AI capability to be built back better and cheaper as computation becomes cheaper.

1broseidon - 3 hours ago

I think the 'AI productivity gap' is mostly a state management problem. Even with great models, you burn so much time just manually syncing context between different agents or chat sessions.

Until the handoff tax is lower than the cost of just doing it yourself, the ROI isn't going to be there for most engineering workflows.
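A back-of-the-envelope sketch of that break-even (the worth_delegating helper and every number below are invented for illustration): delegating only pays off once prompting, context syncing, and review together cost less than just doing the task yourself.

    # Hypothetical handoff-tax model: is delegating a task to an agent
    # cheaper than doing it yourself? All durations are in minutes.
    def worth_delegating(diy: float, prompting: float, syncing: float, review: float) -> bool:
        handoff_tax = prompting + syncing + review
        return handoff_tax < diy

    print(worth_delegating(diy=30, prompting=5, syncing=15, review=20))  # False: the handoff tax dominates
    print(worth_delegating(diy=30, prompting=5, syncing=2, review=10))   # True: cheap syncing flips the ROI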

lqstuart - an hour ago

I was in the “AI is grossly overhyped” camp because I work on large distributed deep-learning training jobs, and AI is indeed worthless for those, and will likely always be: the APIs change constantly, and the iteration loop of resubmitting broken jobs to a training cluster is too cumbersome.

Then I started working on some basic grpc/fullstack crap that I absolutely do not care about, at all, but needs to be done and uses internal frameworks that are not well documented, and now Claude is my best friend at work.

The best part is everyone else’s AI code still sucks, because they ask it to do stupid crap and don’t apply any critical thinking skills to it, so I just tell AI to re-do it but don’t fuck up the error handling and use constants instead of hardcoding strings like a middle schooler, and now I’m a 100x developer fearlessly leading the charge to usher in the AI era as I play the new No Man’s Sky update on my other PC and wait for whatever agent to finish crap.

yalogin - 2 hours ago

Every technology, whether it improved existing systems and productivity or not, created new wealth by creating new services and experiences. So that is what needs to happen with this wave as well.

carefree-bob - 4 hours ago

It's not just technology; it's very hard to detect the effect of inventions in general on productivity. There was a paper pointing out that the invention of the steam engine was basically invisible in the productivity statistics:

https://www.frbsf.org/wp-content/uploads/crafts.pdf

ruddsky - 2 hours ago

If it’s helpful to anyone, I just wrote a short blog post on this exact topic. It goes into the “j-curve” which was the proposed solution to the productivity paradox, and discusses some of the empirical research around it.

https://lightsight.ai/blog/j-curve

maininformer - 4 hours ago

Thousands of companies to be replaced by leaner counterparts that learned to use AI toward greater employment and productivity.

nowittyusername - 3 hours ago

As we approach the singularity, things will get noisier and make less and less sense, because rapid change can look like chaos from inside the system. I recommend folks just take a deep breath and look around. Regardless of your stance on whether the singularity is real, or whether AI will revolutionize everything, forget all that noise. Just look around you and ask yourself whether things seem more or less chaotic. Are you able to predict what is going to happen better or worse than before? How far out can your predictions land now versus, let's say, 10 or 20 years ago? Conflicting signals are exactly how all of this looks: one account says it's the end of the world, another says nothing ever changes and everything is the same as it always was...

an-allen - 2 hours ago

Yep, just a risk amplifier. We are having a global-warming-level event in computing and blindly walking into it.

rr808 - 4 hours ago

BTW, the study ran from September 2024 to 2025, so it's covering the very earliest of adopters.

cmiles8 - 4 hours ago

I like AI and use it daily, but this bubble can’t pop soon enough so we can all return to regularly scheduled programming.

CEOs are now on the downside of the hype curve.

They went from “Get me some of that AI!” after first hearing about it, to “Why are we not seeing any savings? Shut this boondoggle down!” now that we’re a few years into the bubble, the business math isn’t working, and all they see is burning piles of cash.

pram - 4 hours ago

It’s funny because at work we have paid Codex and Claude but I rarely find a use for them, yet I pay for the $200 Max plan for personal stuff and will use it for hours!

So I’m not even in the “it’s useless” camp, but it’s frankly only situationally useful outside of new greenfield stuff. Maybe that is the problem?

AngryData - 3 hours ago

I think the biggest problem is calling it AI in the first place. It gives people a hugely misleading picture of what it is actually capable of. It is an impressive tool with many uses, but it is not AGI.

acjohnson55 - 3 hours ago

It's weird being on here and seeing so much naysaying, because I see a radical change already happening in software development. The future is here, it's just not equally distributed.

In the past 6 months, I've gone from Copilot to Cursor to Conductor. It's really the shift to Conductor that convinced me that I crossed into a new reality of software work. It is now possible to code at a scale dramatically higher than before.

This has not yet translated into shipping at a far higher rate. There are still big friction points and bottlenecks. Some will need to be resolved with technology; others will need organizational solutions.

But this is crystal clear to me: there is a clear path to companies getting software value to the end customer much more rapidly.

I would compare the ongoing revolution to the advent of the Web for software delivery. When features didn't have to be scheduled for release in physical shipments, it unlocked radically different approaches to product development, most clearly illustrated in The Agile Manifesto. You could also do real-time experiments to optimize product outcomes.

I'm not here to say that this is all going to be OK. It won't be for a lot of people. Some companies are going to make tremendous mistakes and generate tremendous waste. Many of the concerns around GenAI are deadly serious.

But I also have zero doubt that the companies that most effectively embrace the new possibilities are going to run circles around their competition.

It's a weird feeling when people argue against me on this, because I've seen too much. It's like arguing with flat-earthers: I've never personally circumnavigated Antarctica, but my being wrong about it would invalidate so many facts that my frame of reality depends on.

To me, the question isn't about the capabilities of the technology. It's whether we actually want the future it unlocks. That's the discussion I wish we were having. Even if it's hard for me to see what choice there is. Capitalism and geopolitical competition are incredible forces to reckon with, and AI is being driven hard by both.

deadbabe - 4 hours ago

The people who will be most productive with AI will be the entreprompteurs who whip up entire products and go to market faster than ever before, iterating at dangerous speeds. Lean Startup methodology on pure steroids basically.

Unfortunately I think most of the stuff they make will be shit, but they will build it very productively.

istillcantcode - 4 hours ago

Anyone read The Goal lately?

SilverElfin - 4 hours ago

These surveys don’t make sense. Ask the forward-thinking companies and they’ll say the opposite. The flood of anti-AI-productivity articles almost feels like it’s meant to lull the population into not seeing what’s about to happen to employment.

pengaru - 4 hours ago

At $dayjob GenAI has been shoved into every workflow and it's a constant source of noise and irritation, slop galore. I'm so close to walking away from the industry to resume being a mechanic, what a complete shit show.

tehjoker - 5 hours ago

There is probably a threshold effect above which the technology begins to be very useful for production (other than faking school assignments, one-off-scripts, spam, language translation, and political propaganda), but I guess we're not there yet. I'm not counting out the possibility of researchers finding a way to add long term memory or stronger reasoning abilities, which would change the game in a very disorienting way, but that would likely mean a change of architecture or a very capable hybrid tool.

casey2 - 44 minutes ago

Of course AI is bullshit. If you couldn't just use it yourself and figure that out, then ask yourself why people like Bezos or Altman are perfectly happy "investing" other people's money but not their own. If they actually believed their own bullshit, they would personally be investing all of their money AND taking on personal debt. Instead Bezos, a guy worth ~200B, sells 5B worth of stock to invest in the "AI-adjacent" power-generation industry, while making Amazon invest 200B in data centers. Talk about a conflict of interest! WTF!

johnnienaked - an hour ago

The issue with framing this as a resurrection of the productivity paradox is that AI has never even theoretically increased productivity.

I think in retrospect it's going to look very silly.
