"Token anxiety", a slot machine by any other name

jkap.io

75 points by presbyterian 4 hours ago


ctoth - 2 hours ago

The gambling analogy completely falls apart on inspection. Slot machines have variable reward schedules by design — every element is optimized to maximize time on device. Social media optimizes for engagement, and compulsive behavior is the predictable output. The optimization target produces the addiction.

What's Anthropic's optimization target? Getting you the right answer as fast as possible. The variability in agent output is working against that goal, not serving it. If they could make it right 100% of the time, they would, and the "slot machine" nonsense would disappear entirely. On capped plans, both you and Anthropic are incentivized to minimize interactions, not maximize them. That's the opposite of a casino. It's alignment, of a sort.

An unreliable tool that the manufacturer is actively trying to make more reliable is not a slot machine. It's a tool that isn't finished yet.

I've been building a space simulator for longer than some of the people diagnosing me have been programming. I built things obsessively before LLMs. I'll build things obsessively after.

The pathologizing of "person who likes making things chooses making things over Netflix" requires you to treat passive consumption as the healthy baseline, which is a claim nobody in this conversation is bothering to defend.

BoxFour - 2 hours ago

I wish the author had stuck to the salient point about work/life balance instead of drifting into the gambling tangent, because the core message is actually more unsettling. With the tech job market being rough and AI tools making it so frictionless to produce real output, the line between work time and personal time is basically disappearing.

To the Bluesky poster's point: pulling out a laptop at a party feels awkward for most people; pulling out your phone to respond to Claude barely registers. That's what makes it dangerous: it's so easy to feel some sense of progress now. Even when you're tired and burned out, you can still make progress by sending off a quick message. The quality will, of course, slip over time, but far less than it used to.

Add in a weak labor market and people feel pressure to stay working all the time. Partly because everyone else is (and nobody wants to be at the bottom of the stack ranking), and partly because it’s easier than ever to avoid hitting a wall by just "one more message". Steve Yegge's point about AI vampires rings true to me: A lot of coworkers I’ve talked to feel burned out after just a few months of going hard with AI tools. Those same people are the ones working nights and weekends because "I can just have a back-and-forth with Claude while I'm watching a show now".

The likely result is the usual pattern for increases in labor productivity: people who can't keep up get pushed out, people who can keep up stay stuck grinding, and companies get to claim the productivity gains while cutting expenses. Steve's suggestion of shorter workdays sounds nice in theory, but I would bet significant amounts of money that the 40-hour work week remains the standard for a long time to come.

simonw - 2 hours ago

I know it's popular to compare coding agents to slot machines right now, but the comparison doesn't entirely hold for me.

It's more like being hooked on a slot machine which pays out 95% of the time because you know how to trick it.

(I saw "no actual evidence pointing to these improvements" with a footnote and didn't even need to click that footnote to know it was the METR thing. I wish AI holdouts would find a few more studies.)

Steve Yegge of all people published something the other day that has similar conclusions to this piece - that the productivity boost for coding agents can lead to burnout, especially if companies use it to drive their employees to work in unsustainable ways: https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163

symfrog - 2 hours ago

If you are trying to build something well represented in the training data, you could get a usable prototype.

If you are unfamiliar with the various ways that naive code would fail in production, you could be fooled into thinking generated code is all you need.

If you try to hold the coding agent's hand to bring code to a point where it is production-ready, be prepared for a frustrating cycle of the model responding "Fixed it!" while having only introduced further issues.

dcre - 2 hours ago

How are we still citing the (excellent) METR study in support of conclusions about productivity that its authors rightly insist[0] it does not support?

My paraphrase of their caveats:

- experts on their own open source projects are not representative of most software development

- measuring time alone undervalues the option to trade time for reduced effort

- tools are noticeably better than they were a year ago when the study was conducted

- it really does take months of use to get the hang of it (or did then, less so now)

Before you respond to these points, please look at the full study’s treatment of the caveats! It’s fantastic, and it’s clear almost no one citing the study actually read it.

[0]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

Shank - 2 hours ago

I think that in a world where code has zero marginal cost (or close to it, for the right companies), we need to be incredibly cognizant that more code is not more profit, nor is it better code. Simpler is still better, and products with taste omit features that detract from the vision. You can scaffold thousands of lines of code very easily, but that makes your codebase hard to reason about, maintain, and work in. It's like unleashing a horde of mid-level engineers with spec documents and coming back in a week to find everything refactored wrong. Sure, you have some new buttons, but does anyone (or can any AI agent, for that matter) understand how it works?

And to another point: work-life balance is a huge challenge. Burnout happens in all departments, not just engineering. Managers can burn out just as easily, and if you manage AI agents, you'll burn out from that too.

Aurornis - 2 hours ago

After actually using LLM coding tools for a while, some of these anti-LLM thinkpieces feel very contrived. I don't see the comparison to gambling addiction at all. I understand how someone might believe it if they only view LLMs through secondhand Twitter hot takes and think it's a process of typing a prompt and hoping for the best.

Some people do work that way, but the really effective coders work with the LLM and drive the coding, writing some or much of the code themselves. The social media version of vibe coding, where you just prompt continuously and hope for the best, is not going to work in any serious endeavor where details matter. We see claims of it in some high-profile examples like OpenClaw, but even OpenClaw has maintainers and contributors who look at the code and make decisions. It's also riddled with security problems as a result of the YOLO coding style.

shaokind - 2 hours ago

One of my recent thoughts is that Claude Code has become the most successful agent partially because it is more of a black box than previous implementations of the agent pattern: the actual code changes aren't shoved in your face the way Cursor's used to be; they're hidden away. You focus more on the result than on the code building up to that result, and so you get into the "just one more feature" mindset a lot more, because you're never confronted with how sloppy the code you're building might be.

htfu - 2 hours ago

Probably the best we can hope for at the moment is a reduction in the back-and-forth and an increase in the ability to one-shot things with a really good spec. The regular human work then becomes building that spec, in regular human (albeit AI-assisted) ways.

scuff3d - 39 minutes ago

Simple fix for this: when the work day is done, close the laptop and walk away. Don't link notifications to personal devices. Whatever slop it produced will be waiting for you at 8am the next morning.

wormpilled - 20 minutes ago

Pathetic.

coldtea - an hour ago

Ironically, the linked text by this Kellogg guy is 100% AI slop itself.

jauntywundrkind - 2 hours ago

Funemployed right now joyously spending way way more time than 996, pulling the slot machine arm to get tokens, having a ball.

But that's for personal pleasure. This post is receding from the concerns about "token anxiety," about the addiction to tokens. This post is about work culture and late-capitalism anxiety, about possible pressures and systems society might impose.

I reflect a lot on the idea that AI doesn't reduce the work, it intensifies it: https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies... The spirit of this really nails something core for me. We coders especially get help with so much of the menial now, which means we spend a lot more time on intense analysis and critique, doing much more of the hard thought work of "is what we have here as good as it can be," and finding new references or patterns to feed back into the AI to steer already-working implementations toward better outcomes.

And my heart tells me that corporations and work life as we know it are almost universally just really awful at supporting reflective, contemplative work like this. Work wants output. It doesn't want you to sit in a hammock and think about it. But increasingly, I tell you, the key to good, successful software is Hammock Driven Development. It's time to use our brains more, in quiet reflection. https://github.com/matthiasn/talk-transcripts/blob/master/Hi...

996 sounds like garbage on its own, as a system of toil. But I also very much respect the idea of continuous work that intersperses rest throughout the day. Doing some chores, going to the supermarket, or playing with the kid can be an incredibly good way to let your preconscious sift through the big gnarly problems at hand. The response to the intensity of what we have speaks, to me, of a need to spread out the work, to de-concentrate it, to build in more hammock time. I was on the fence about whether the traditional workday deserved to survive before AI hit, and my sense that it's a gross mismatch has massively intensified since.

As I said at the start of my post, I personally have a much more positive experience with what, yes, feels like a token addiction. But it doesn't feel like an anxiety. It feels like the greatest, most exciting adventure, far beyond what I had ever hoped for in life. This is wildly fun, going far, far further out than I had ever hoped to get to see. I'm not "anxiously" pulling the lever on the token machine; I'm just thrilled to get to do it. To have time to reflect and decide, I keep 3-8 things going at once (and probably double that back-burnered but open, on Niri rows!) so I can make slower decisions and analyze, while keeping the things that can safely move forward moving forward.

That also seems like something worker-exploitative late capitalism is mostly hot garbage at too! Companies really try to reduce in-flight activities; sprint planning is about crafting deliberate work. But our freedom and agency here far outstrip these dusty old practices. It is anxiety-inducing to be so powerful, so capable, and to have a bureaucracy that constrains and confines, that wants only narrow windows of our use.

Also, shame on Tim Kellogg for not goddamn linking the actual post he was citing. Garbage-fire move. https://writing.nikunjk.com/p/token-anxiety https://news.ycombinator.com/item?id=47021136