I miss thinking hard

jernesto.com

815 points by jernestomg 10 hours ago


gyomu - 9 hours ago

This March 2025 post from Aral Balkan stuck with me:

https://mastodon.ar.al/@aral/114160190826192080

"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."

keyle - 9 hours ago

I don't get it.

I think just as hard, I type less. I specify precisely and I review.

If anything, all that's changed is that we're working at a higher level. The product is the same.

But these people just keep mixing things up like "wow I got a ferrari now, watch it fly off the road!"

Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!

We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.

Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or ... or just programmers, hey. You know why? I write in C, I get machine code, I didn't write the machine code! It was all an abstraction!

Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.

Just chill, it's programming. The tools just got even better.

You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.

We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.

thegrim000 - 4 hours ago

You know, I thought I knew what the post would say and was prepared to dunk on it and just tell them to stop using AI, but the builder/thinker division they presented got me thinking. The way AI/vibe coding fulfills the builder, not the thinker, made me realize that I'm basically 100% thinker, 0% builder, and that's why I don't really care at all about AI for coding.

I'll spend years working on a from scratch OS kernel or a vulkan graphics engine or whatever other ridiculous project, which never sees the light of day, because I just enjoy the thinking / hard work. Solving hard problems is my entertainment and my hobby. It's cool to eventually see results in those projects, but that's not really the point. The point is to solve hard problems. I've spent decades on personal projects that nobody else will ever see.

So I guess that explains why I see all the AI coding stuff and pretty much just ignore it. I'll use AI now as an advanced form of Google, and also as a last-ditch effort to get some direction on bugs I truly can't figure out, but otherwise I just completely ignore it. But I guess there are other people, the builders, for whom AI is a miraculous thing, and they're going to crazy lengths to adopt it in every workflow and have it do as much as possible. Those 'builder' types of people are just completely different from me.

electsaudit0q - an hour ago

This resonates with me a lot. I've been noticing this pattern where I reach for Claude or ChatGPT for things I used to just... think through? Like debugging something weird - before, I would stare at the code, trace through it mentally, maybe draw some diagrams. Now I just paste it and ask "what's wrong here?"

The thing is, those 20 minutes of frustration were when the actual learning happened. When you finally figure out that it's a race condition or whatever, that knowledge sticks because you earned it. When the AI just tells you, it's like reading a spoiler - you know the answer but you didn't really understand the journey.

Not saying AI tools are bad, I use them constantly. But I've started forcing myself to struggle with hard problems for at least 30 minutes before reaching for help. Sometimes I solve it myself and it feels great. Sometimes I don't, and the AI helps. But that initial struggle matters, I think.

Anyone else doing something similar? Curious how others are balancing the convenience vs the learning aspect.

monch1962 - 7 hours ago

As someone who's been coding for several decades now (i.e. I'm old), I find the current generation of AI tools very ... freeing.

As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, trying out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.

You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.

Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.

Also, it makes playing around with one-person projects a lot more practical. Like most people with a partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed, waiting to be served.

I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.

davidmurdoch - an hour ago

I'm wondering if everyone here saying they think harder with LLM agents has never reached "flow state" while programming. I just can't imagine using 100% of my mental focus for hours with an agent. Sure, I think differently when my coding is primarily via agent, but I've never been totally enveloped by my thoughts while doing so.

For those who have found a "flow state" with LLM agents, what's that like?

melodyogonna - 6 minutes ago

I use Aider because it allows me to retain both personalities and still benefit from AI. It truly is the best assistant I've used.

wendgeabos - 5 minutes ago

I love thinking hard, it's genuinely my favorite thing, but ... we get paid to ship.

m0rc - 5 hours ago

I think the article has a point. There seem to be two reactions among senior engineers around me these days.

On one side, there are people who have become a bit more productive. They are certainly not "10x," but they definitely deliver more code. However, I do not observe a substantial difference in the end-to-end delivery of production-ready software. This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).

On the other hand, sometimes I observe a genuine degradation of thinking among some senior engineers (there aren’t many juniors around, by the way). Meetings, requirements, documents, or technology choices seem to be directly copy/pasted from an LLM, without a grain of original thinking, many times without insight.

The AI tools are great, though. They give you an answer to the question. But many times, asking the right question, and knowing when the answer is not correct, is the main issue.

I wonder if the productivity boost that senior engineers actually need is to profit from the accumulated knowledge found in books. I know it is an old technology and it is not fashionable, but I believe it is mostly unexploited if you consider the whole population of engineers :D

urutom - 7 hours ago

One thing this discussion made me realize is that "thinking hard" might not be a single mode of thinking.

In grad school, I had what I'd call the classic version. I stayed up all night mentally working on a topology question about turning a 2-torus inside out. I already knew you can't flip a torus inside out in ordinary R^3 without self-intersection. So I kept moving and stretching the torus and the surrounding space in my head, trying to understand where the obstruction actually lived.

Sometime around sunrise, it clicked that if you allow the move to go through infinity (so effectively S^3), the inside/outside distinction I was relying on just collapses, and the obstruction I was visualizing dissolves. Birds were chirping, I hadn't slept, and nothing useful came out of it, but my internal model of space felt permanently upgraded. That's clearly "thinking hard" in that sense.
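The insight described here can be stated compactly using standard low-dimensional topology (a sketch of the well-known fact, not the commenter's own notation):

```latex
% The standard torus T \subset S^3 = \mathbb{R}^3 \cup \{\infty\} is a
% genus-1 Heegaard surface: it splits S^3 into two solid tori,
S^3 = V_1 \cup_T V_2, \qquad V_1 \cong V_2 \cong S^1 \times D^2 .
% In \mathbb{R}^3 the outside of T is NOT a solid torus, which is
% where the inside/outside asymmetry (the obstruction) lives.
```

Once the point at infinity is included, "inside" and "outside" become symmetric: a homeomorphism of S^3 exchanging V_1 and V_2 (swapping meridian and longitude) exists, and since every orientation-preserving homeomorphism of S^3 is isotopic to the identity, the eversion that is blocked in R^3 goes through in S^3.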

But there's another mode I've experienced that feels related but different. With a tough Code Golf problem, I might carry it around for a week. I'm not actively grinding on it the whole time, but the problem stays loaded in the background. Then suddenly, in the shower or on a walk, a compression trick or a different representation just clicks.

That doesn't feel "hard" moment to moment. It's more like keeping a problem resident in memory long enough for the right structure to surface.

One is concentrated and exhausting, the other is diffuse and slow-burning. They're different phenomenologically, but both feel like forms of deep engagement that are easy to crowd out.

abcde666777 - 16 minutes ago

Well, for programming work which is essentially repetition (e.g. making another website not unlike thousands of others), it's no surprise that AI programming can work wonders - you're essentially using a sophisticated form of copy paste.

But there's still a lot of programming out there which requires originality.

Speaking personally, I never was nor ever will be too interested in the former variety.

Fire-Dragon-DoL - 9 hours ago

I haven't reduced my thinking! Today I asked AI to debug an issue. It came up with a solution that was clearly correct, but it didn't explain why the code was in that state. I kept steering the AI (which just wanted to fix things) toward figuring out the why, and at some point it dug through git and GitHub issues in a very cool way. Finally it pulled out something that made sense: it was defensive programming introduced to fix an issue somewhere else, which had itself since been fixed, so it was useless.

At that point an idea popped into my mind, and I decided to look for similar patterns in the codebase related to the change. I found three: one was a non-bug, two were latent bugs.

Shipped a fix plus 2 fixes for bugs yet to be discovered.

alexpotato - 36 minutes ago

I'm a DevOps/SRE and I've spent the past couple weeks trying to vibecode as much of what I do as possible.

In some ways, it's magical. E.g. I whipped up a web-based tool for analyzing performance statistics of a blockchain. Claude was able to do everything from building the GUI, to optimizing the queries, to adding new indices to the database. I broke it down into small prompts so that I kept it on track and it didn't veer off course. 90% of this I could have done myself, but Claude took hours where it would have taken me days or even weeks.

Then yesterday I wanted to do a quick audit of our infra using Ansible. I first thought: let's try Claude again. I gave it lots of hints on where our inventory is, which ports matter, etc., but it was still grinding away after several minutes. I eventually Ctrl-C'ed it and used a couple of one-liners that I wrote myself in a few minutes. In other words, I was faster than the machine in this case.

After the above, it makes sense to me that people may have conflicting feelings about productivity. e.g. sometimes it's amazing, sometimes it does the wrong thing.

topspin - 9 hours ago

I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.

Sammi - 2 hours ago

I'm thinking much more than ever, now that the coding agent is building for me.

I strongly experience that coding agents are helping me think about stuff I wasn't able to think through before.

I very much have both of these builder and thinker personas inside me, and I just am not getting this experience of "lack of thinking" that I'm seeing so many other people write about. I have it exactly the other way around, even if I'm a similar archetype of person. I'm spending less time building and more time thinking than ever.

nunez - 7 hours ago

I will never not be upset at my fellow engineers for selling out the ONE thing that made us valuable and respected in the marketplace and trying to destroy software engineering as a career because "Claude Code go brrrrrr" basically.

It's like we had the means of production and more or less collectively decided "You know what? Actually, the bourgeoisie can have it, sure."

sebastianmestre - an hour ago

I did competitive programming seriously between '17 and '24, then kept on coaching people

As a beginner I often thought about a problem for days before finding a solution, but this happened less and less as I improved

I got better at exploiting the things I knew, to the point where I could be pretty confident that if I couldn't solve a problem in a few hours it was because I was missing some important piece of theory

I think spending days "sitting with" a problem just points at your own weakness in solving some class of problems.

If you are making no articulable progress whatsoever, there is a pathology in your process.

Even when working on my thesis, where I would often "get stuck" because the problem was far beyond what I could solve in one sitting, I was still making progress in some direction every time.

postit - 16 minutes ago

I usually think hard. I correlate subjects with different areas to find similarities.

What I miss is having other people who like to think and aren't always pushing for shallow results.

DaanDL - 23 minutes ago

"I still encounter those occasionally, but the number of problems requiring deep creative solutions feels like it is diminishing rapidly."

Just let it try and solve an issue with your advanced SQLAlchemy query and see it burn. xD

qwertox - 28 minutes ago

AI is way less of a problem for thinking than digital media consumption is.

I used to think about my projects when in bed; now I listen to podcasts or watch YouTube videos before sleeping.

I think it has a much bigger impact than using our "programming calculator" as an assistant.

levitatorius - 2 hours ago

The post resonates deeply with me. I am a health professional in diagnostics, and over the years I have observed different extremes in approaches to solving diagnostic challenges: one extreme is to rely on "knowing", the other on "thinking/reasoning". The former is usually very fast, but not easily explainable, just like pattern recognition. The latter is slow, but can give a solution from "first principles", possibly one not described before. Of course it's a spectrum, and the thinking part requires and includes a deep enough "knowing" part. One usually uses both approaches in daily work, but I have seen some people who relied much more on knowing than on thinking/reasoning, sometimes to the extreme (as in refusing to diagnose a condition on their own because they "have not seen this before").

mastermedo - an hour ago

I relate to the post, but I'm not sure it's hitting the nail on the head _for me_.

I like being useful, and I'm not yet sure how much of what I'm creating with AI is _me_, and how much it is _it_. It's hard to derive as much purpose/meaning from it compared to the previous reality where it was _all me_.

If I compare it to a real-world problem: e.g. when I unplug the charging cable from my laptop at my home desk, the charging cable slides off the table. I could order a solution online that fixes the problem and be done with it, but I could also think about how _I_ can solve the problem with what I already have in my spare-parts box. Trying out different solutions makes me think, and I'm way more happy with the end result. Every time I unplug the cable now and it stays in place, it reminds me of _my_ labour and creativity (and also of the cable not sliding down the table -- but that's beside the point).

r-johnv - 10 hours ago

I've found that it's often useful to spend the time thinking about the way I would architect the code (down to a fair level of minutia) before letting the agent have a go.

That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?

Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.

Insanity - an hour ago

Advent of Code (which, given my schedule, runs into January) was the last time I thought hard about a coding problem. I don’t remember exactly whether it was day 10 or 11 that had me scratching my head for a while.

I intentionally do not use AI though.

But I sympathize with the author. I enjoy thinking deeply about problems which is why I studied compsci and later philosophy, and ended up in the engineering field. I’m an EM now so AI is less of an “immediate threat” to my thinking habits than the role change was.

That said, I did recently pick up more philosophy reading again, just for the purpose of challenging my brain.

chairmansteve - 2 hours ago

I find I think harder with AI programming. It generates the code, but I have to approve the overall design and then approve every single line. I will edit and rearrange the code until it is correct.

But since the AI is generating a lot of code, it is challenging me. It also allows me to tackle problems in unfamiliar areas. I need to properly understand the solutions, which again is challenging. I know that if I don't understand exactly what the code is doing and have confidence in the design and reliability, it will come back and bite me when I release it into the wild. A lesson learnt the hard way during many decades of programming.

ontouchstart - an hour ago

After leaving my previous day job, I have some downtime to get back to thinking, and I'm realizing how much I love reading and thinking.

Contemplating the old RTFM, I started a new personal project called WTFM, and I spend time writing instead of coding. There is no agenda and no product goals.

https://wtfm-rs.github.io/

There are so many interesting things in human generated computer code and documentation. Well crafted thoughts are precious.

jonahrd - 42 minutes ago

Dear author, I suggest trying out a job in a niche part of the field like firmware/embedded. Bonus if it's a company with a bunch of legacy devices to maintain. AI just hasn't quite grokked it there yet and thinking still reigns supreme :)

7777332215 - 5 hours ago

Seems like a lot of people use AI to code in their private commercial IP and products. How are people not concerned with the fact that all these AI companies have the source code to everything? You're just helping them destroy your job. Code is not worthless; you cannot easily duplicate any complex project with equal features, quality, and stability.

ninadwrites - an hour ago

You're extremely on point. I don't remember the last time I was able to sit around to think because at the back of my mind, I knew AI could help me generate the initial draft of ideas, logic, or structure that would otherwise require hours of my time.

But to be honest, those hours spent structuring thoughts are so important to making things work. Or you might as well get out of the way and let AI do everything, why even pretend to work when all we're going to do is just copy and paste things from AI outputs?

Nevermark - 38 minutes ago

I have often pondered whether the (sometimes facetiously, sometimes seriously) postulated AI-utopia scenario of humans who don’t need to work but can devote their time to art and recreational pursuits might be a hellscape for many industrious people.

This essay captures that.

Even the pure artist, for whom utility may not seem to matter, manufactures meaning not just from creative exploration directly, but also from the difficulty (which can take many forms) involved in doing something genuinely new, and what they learn from that.

What happens to that when we even have “new” on tap?

InfiniteRand - an hour ago

This isn’t the point of the piece, but I have found that the thinker often gets in the way of the builder, because there’s always a better way to build, there’s always some imperfect subsystem you just want to tear out and rewrite and then you realize you were all wrong about this and that, etc.

More to the piece itself, I know some crusty old embedded engineers who feel the same way about compilers as this guy does about AI, it doesn’t invalidate his point but it’s food for thought

novoreorx - 8 hours ago

To be honest, I do not quite understand the author's point. If he believes that agentic coding or AI has a negative impact on being a thinker, or prevents him from thinking critically, he can simply stop using it.

Why blame these tools if you can stop using them, and they won't have any effect on you?

In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now I can act immediately, rather than just recording an idea and potentially getting to it "someday later".

To me, that's evolutionary. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking more.

ccortes - 8 hours ago

People here seem to be conflating thinking hard and thinking a lot.

Most examples of “thinking hard” mentioned in the comments sound like thinking about a lot of stuff superficially, instead of thinking about one particular problem deeply, which is what OP is referring to.

Underqualified - 4 hours ago

This resonates with me, but I quit programming about a decade ago when we were moving from doing low level coding to frameworks. It became no longer about figuring out the actual problem, but figuring out how to get the framework to solve it and that just didn't work for me.

I do miss hard thinking, I haven't really found a good alternative in the meantime. I notice I get joy out of helping my kids with their, rather basic, math homework, so the part of me that likes to think and solve problems creatively is still there. But it's hard to nourish in today's world I guess, at least when you're also a 'builder' and care about efficiency and effectiveness.

smy20011 - 6 hours ago

I miss entering flow state when coding. When vibe coding, you are constantly interrupted and only think very shallowly. I've never seen anyone enter flow state while vibe coding.

ChaitanyaSai - 9 hours ago

I miss the thrill of running through the semi-parched grasslands, and the heady mix of terror, triumph, and trepidation as we close in on our meal for the week.

hoppp - 28 minutes ago

All the time. Been working on my own projects, they all require hard thinking.

phamilton - 8 hours ago

I think harder because of AI.

I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.

I've thought harder about problems this last year than I have in a long time.

keiferski - 2 hours ago

This title and the first half of the post (before the AI discussion) just makes me miss the intellectual environment of college.

So I’m tempted to say that this is just a part of the economic system in general, and isn’t specifically linked to AI. Unless you’re lucky enough to grab a job that requires deep intellectual work, your day job will probably not challenge your mental abilities as much as a difficult college course does.

Sad but true, but unfortunately I don’t think any companies are paying people to think deeply about metaphysics (my personal favorite “thinking hard subject” from college.)

nate - 5 minutes ago

author obviously isn't wrong. it's easy to fall into this trap. and it does take willpower to get out of it. and the AI (christ i'm going to sound like they paid me) can actually be a tool to get there.

i was working for months on an entity resolution system at work. i inherited the basic algo of it: Locality Sensitive Hashing. Basically breaking up a word into little chunks and comparing the chunk fingerprints to see which strings matched (ish). But it was slow, blew up memory constraints, and was full of false negatives (didn't find matches).
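The chunk-and-compare idea described here can be sketched in a few lines (a toy illustration with names of my own choosing, not the commenter's actual system; real LSH additionally bands and buckets the fingerprints so you never compare all pairs):

```python
import hashlib

def shingles(s: str, n: int = 3) -> set[str]:
    """Break a string into overlapping character n-grams (the 'little chunks')."""
    s = s.lower()
    return {s[i:i + n] for i in range(max(1, len(s) - n + 1))}

def jaccard(a: str, b: str, n: int = 3) -> float:
    """Exact set-overlap similarity between two strings' chunk sets."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb)

def minhash(s: str, n: int = 3, num_hashes: int = 32) -> list[int]:
    """A cheap fingerprint: for each seeded hash function, keep the minimum
    hash value over the string's shingles. The fraction of positions where
    two fingerprints agree estimates the Jaccard similarity."""
    grams = shingles(s, n)
    return [
        min(int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16) for g in grams)
        for seed in range(num_hashes)
    ]

# Similar names share most chunks; unrelated names share none.
print(jaccard("jonathan smith", "jonathon smith"))  # → 0.6
print(jaccard("alice", "bob"))                      # → 0.0
```

The MinHash signature is what makes this scale: comparing two fixed-length fingerprints is O(num_hashes) regardless of string length, at the cost of the false negatives the commenter mentions when the estimate undershoots.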

of course i had claude seek through this looking to help me and it would find things. and would have solutions super fast to things that I couldn't immediately comprehend how it got there in its diff.

but here's a few things that helped me get on top of lazy mode. Basically, use Claude in slow mode. Not lazy mode:

1. everyone wants one-shot solutions. but instead do the opposite: just focus on fixing one small step at a time, so you have time to grok what the frig just happened.

2. instead of asking claude for code immediately, ask for more architectural thoughts. not claude "plans". but choices. "claude, this sql model is slow and grows out of our memory box. what options are on the table to fix this?" and now go back and forth getting the pros and cons of the fixes. don't just ask "make this faster". of course this is the slower way to work with Claude. but it will get you to a solution you more deeply understand, and it avoids the hallucinations where it decides "oh just add where 1!=1 to your sql and it will be super fast".

3. sign yourself up to explain what you just built. not just get through a code review; now you are going to hold a lunch-and-learn to teach others how these algorithms or code you just wrote work. you better believe you are going to force yourself to internalize the stuff claude came up with easily. i gave multiple presentations all over our company, and to our acquirers, about how this complicated thing worked. I HAD TO UNDERSTAND. there's no way I could show up and be like "i have no idea why we wrote that algorithm that way".

4. get claude to teach it to you over and over and over again. if you spot a thing you don't really know yet, like what the hell this algorithm is doing, make it show you in agonizingly slow detail how the concept works. didn't sink in? do it again. and again. ask it for the 5-year-old explanation. yes, we have a super smart, overconfident, and naive engineer here, but we also have a teacher we can berate with questions, one who never tires of trying to teach us something, no matter how stupid we can be or sound.

Were there some lazy moments where I felt like I wasn't thinking? Yes. But using Claude in slow mode, I've learned the space of entity resolution faster and more thoroughly than I could have without it, and I feel like I've actually, personally, invented within it.

joshpicky - 10 hours ago

I generally feel the same. But in addition, I also enjoy the pure act of coding. At least for me that’s another big part why I feel left behind with all this Agent stuff.

BoostandEthanol - 5 hours ago

I’d been feeling this until quite literally yesterday, when I sort of just forced myself not to touch an AI and grappled with the problem for hours. Got myself all mixed up with trig and angles until I got a headache and decided to back off a lot of the complexity. I doubt I got everything right; I’m sure I could’ve had a solution with near-identical outputs using an AI in a fraction of the time.

But I feel better for not taking the efficient way. Having to be the one to make a decision at every step of the way, choosing the constraints and where to cut my losses on accuracy, has, I think, taught me more about the subject than even reading the literature directly would have.

andyferris - 7 hours ago

My solution has been to lean into harder problems - even as side projects, if they aren't available at work.

I too am an ex-physicist used to spending days thinking about things, but programming is a gold mine as it is adjacent to computer science. You can design a programming language (or improve an existing one), try to build a better database (or improve an existing one), or do many other things that are quite hard.

The LLM is a good rubber duck for exploring the boundaries of human knowledge (or at least knowledge common enough to be in its training set). It can't really "research" on its own, and whenever you suggest something novel and plausible it gets sycophantic, but it can help you prototype ideas and implementation strategies quite fast, and it can help you explore how existing software works and tackles similar problems (or help you start working on an existing project).

lccerina - 5 hours ago

"Oh no, I am using a thing that no one is forcing me to use, and now I am sad".

Just don't use AI. The idea that you have to ship, ship, ship, 10X, ship is an illusion and a fraud. We don't really need more software.

oa335 - 9 hours ago

I feel like AI has given me the opportunity to think MORE, not less. I’m doing so much less mindless work, spending most of my efforts critically analyzing the code and making larger scale architectural decisions.

The author says: “Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.”

The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.

freshbreath - 4 hours ago

"I don't want to have to write this for the umpteenth time" -- Don't let it even reach a -teenth. Automate it on the 2nd iteration. Or even the 1st if you know you'll need it again. LLMs can help with this.

Software engineers are lazy. The good ones are, anyway.

LLMs are extremely dangerous for us because it can easily become a "be lazy button". Press it whenever you want and get that dopamine hit -- you don't even have to dive into the weeds and get dirty!

There's a fine line between "smart autocomplete" and "be lazy button". Use it to generate a boilerplate class, sure. But save some tokens and fill that class in yourself. Especially if you don't want to (at your own discretion; deadlines are a thing). But get back in those weeds, get dirty, remember the pain.

We need to constantly remind ourselves of what we are doing and why we are doing it. Failing that, we forget the how, and eventually even the why. We become the reverse centaur.

And I don't think LLMs are the next layer of abstraction -- if anything, they're preventing it. But I think LLMs can help build that next layer... it just won't look anything like the weekly "here's the greatest `.claude/.skills/AGENTS.md` setup".

If you have to write a ton of boilerplate code, then abstract away the boilerplate in code (nondeterminism is so 2025). And then reuse that abstraction. Make it robust and thoroughly tested. Put it on github. Let others join in on the fun. Iterate on it. Improve it. Maybe it'll become part of the layers of abstraction for the next generation.

chuliomartinez - 2 hours ago

I guess it depends on what you build. I feel the most complex part of the deal, the one that makes me think the hardest, is figuring out what to build. E.g. understanding the client, and creating a solution that fits between their needs and abilities. The rest is often a technical detail; yes, sometimes you need to deep dive to optimize. Anyway, if you miss debugging, try debugging people ;)

lukewarmdaisies - an hour ago

Then think hard? Have a level of self discipline and don’t consistently turn to AI to solve your problems. Go to a library if you have to! People act like victims to the machine when it comes to building their thinking muscles and AI and it confuses me.

practal - 5 hours ago

I see the current generation of AI very much as a thing in between. Opus 4.5 can think and code quite well, but it cannot do these "jumps of insight" yet. It also struggles with straightforward, but technically intricate things, where you have to max out your understanding of the problem.

Just a few days ago, I let it do something that I thought was straightforward, but it kept inserting bugs, and after a few hours of interaction it said itself it was running in circles. It took me a day to figure out what the problem was: an invariant I had given it was actually too strong, and needed to be weakened for a special case. If I had done all of it myself, I would have been faster, and discovered this quicker.

For a different task in the same project I used it to achieve a working version of something in a few days that would have taken me at least a week or two to achieve on my own. The result is not efficient enough for the long term, but for now it is good enough to proceed with other things. On the other hand, with just one (painful) week more, I would have coded a proper solution myself.

What I am looking forward to is being able to converse with the AI in terms of a hard logic. That will take care of the straightforward but technically intricate stuff that it cannot do yet properly, and it will also allow the AI to surface much quicker where a "jump of insight" is needed.

I am not sure what all of this means for us needing to think hard. Certainly thinking hard will be necessary for quite a while. I guess it comes down to when the AIs will be able to do these "jumps of insight" themselves, and for how long we can jump higher than they can.

- 2 hours ago
[deleted]
cbdevidal - 3 hours ago

It’s possible to be both.

The last time I had to be a Thinker was because I was in Builder mode. I've been trying to build an IoT product, but I've been wayyyy in over my head because I knew maybe 5% of what I needed to be successful. So I would get stuck many, many times, for days or weeks at a time.

I will say though that AI has made the difference in the last few times I got stuck. But I do get more enjoyment out of Building than Thinking, so I embrace it.

rammy1234 - 8 hours ago

Great article. The moment I finished reading it, I thought of my time solving a UI menu problem with a lot of items in it, and the algorithm I came up with to handle different screen sizes. It took a solid 2 hrs of walking and thinking. I still remember how excited I was when I had the feeling of cracking the problem. Deep thinking is something everyone has within; how fast you can think varies, but with the right environment and time, we've all got it in us. But that's a long time ago. Now I always offload some thinking to AI. It comes up with options and you just have to steer it; over time it is getting better. Just ask it, you know. But I feel like those were the good old days, thinking deeply by yourself. Now I have a partner in AI to think along with me. Great article.

getnormality - 2 hours ago

If you miss challenge, the world has plenty more. Maybe it's not all your comfort zone, but if you try being a little ambitious and maybe use AI to understand a field you're not already deeply familiar with, you can continue to grow.

jsattler - 6 hours ago

I had similar thoughts recently. I wouldn't consider myself "the thinker", but I simply missed learning by failure. You almost don't fail anymore using AI. If something fails, it feels like it's not your fault but the AI messed up. Sometimes I even get angry at the AI for failing, not at myself. I don't have a solution either, but I came up with a guideline on when and how to use AI that has helped me to still enjoy learning. I'm not trying to advertise my blog and you don't need to read it; the important part is the diagram at the end of "Learning & Failure": https://sattlerjoshua.com/writing/2026-02-01-thoughts-on-ai-.... In summary, when something is important and long-term, I invest heavily in understanding and use an approach that maximizes understanding over speed. Not sure if it translates 100% to your situation, but maybe it helps to have some kind of guideline for when to spend more time thinking instead of going directly to AI for the solution.

bariswheel - 9 hours ago

Good highlight of the struggle between Builder and Thinker, I enjoyed the writing. So why not work on PQC? Surely you've thought about other avenues here as well.

If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.

ahyangyi - 2 hours ago

In most research areas, if a few days thinking is good enough to reach a worthwhile conclusion, it's not "thinking hard". It's "low-hanging fruit".

cladopa - 5 hours ago

I believe the article is wrong in so many ways.

If you think too much you get into dead ends and you start having circular thoughts, like when you are lost in the desert and realise you are in the same place again after two hours because you have walked in a great circle (because one of your legs is dominant over the other).

The thinker needs feedback from the real world. They need constant testing of hypotheses against reality, or else you are dealing with ideology, not critical thinking. They need other people and confrontation of ideas so the ideas stay fresh and strong and do not stagnate in isolation and personal biases.

That was the most frustrating thing before AI: a thinker could think very fast, but was limited in testing by the ability to build. Usually she had to delegate it to people who were better builders, or else be a builder herself, doing what she hates all the time.

Thanemate - an hour ago

The crowd that counterpoints with "just don't use it then" misses the point: the general population lacks the ability to judge when they should use it and when they shouldn't. The average person will always lean towards the less effortful option, without awareness of the long-term consequences.

On top of that, business settings/environments will always lean towards the option that provides the largest productivity gains, without awareness of the long-term consequences for the worker. In that environment, not using it is not an option, unless you want to be unemployed.

Where does that leave us? Are we supposed to find the figurative "gym for problem solving" the same way office workers workout after work? Because that's the only solution I can think of: Trading off my output for problem solving outside of work settings, so that I can improve my output with the tool at work.

foxmoss - 9 hours ago

Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.

dchftcs - an hour ago

You can still think hard, but offload some parts to the LLM when you're stuck. That leaves space for more hard-won inspiration. When you're faced with a high-stakes decision, evaluating all sorts of possibilities, it's really easy to maximize the utilization of your brain, so in those cases you have plenty of chances to think hard.

lxgr - 5 hours ago

I've had the completely opposite experience as somebody who also likes to think more than to build: LLMs take much of the legwork of actually implementing a design, fixing trivial errors, etc. off my hands and let me validate theories much more quickly than I could by myself.

More importantly, thinking and building are two very different modes of operating, and it can be hard to switch at a moment's notice. I've definitely noticed myself getting stuck in "non-thinking building/fixing mode" at times, only realizing an hour or two in that I've been making steady progress in the wrong direction.

This happens way less with LLMs, as they provide natural time to think while they churn away at doing.

Even when thinking, they can help: They're infinitely patient rubber ducks, and they often press all the right buttons of "somebody being wrong on the Internet" too, which can help engineers that thrive in these kinds of verbal pro/contra discussions.

6mirrors - 7 hours ago

The sampling rate we use to take in information is fixed. And we always find a way to work with the sampled information, no matter whether the input information density is high or low.

We can play a peaceful game and an intense one.

Now, when we think, we can always find the right level of abstraction to think at. Decades ago a programmer thought in machine code; now we think in high-level concepts, maybe trending towards philosophy.

A good outcome always requires hard thinking. We can, and WILL, think hard at an appropriate level.

pyreal - 2 hours ago

The author clearly loves coding more than the output from coding. I'm thinking harder than ever and so grateful I can finally think hard about the output I really want rather than how to resolve bugs or figure out how to install some new dependency.

danavar - 8 hours ago

Many people here might be in a similar situation to me, but I took an online masters program that allowed for continuing education following completion of the degree. This has become one of my hobbies; I can take classes at my own expense, not worry about my grades, and just enjoy learning. I can push myself as much as I want and since the classes are hard, just completing 1 assignment is enough to force me to "think". Just sharing my experience for people who might be looking for ways to challenge themselves intellectually.

fattybob - 2 hours ago

Thinking hard and fast with positive results is like a drug. Ah, those were good and rewarding days in my past; I would jump back into that work framework any time. (That was running geological operations in an unusually agile oil exploration programme.)

enthus1ast_ - 3 hours ago

When I wrote nimja's template inheritance, I thought about it for multiple days until, during a train commute, it clicked and I had to get out my notebook and write it, right there on the train. Then some months later I found out I had the same bug that jinja2 had fixed years ago. So I felt kind of like brothers in hard thinking :)

rc-1140 - 9 hours ago

I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.

While this may be an unfair generalization, and apologies to those who don't feel this way, but I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" where he received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.

Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.

I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.

I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.

petterroea - 4 hours ago

I've missed the same even since before AI because I've done far too much work that's simple but time intensive. It's frustrating, and I miss problems that keep me up all night.

Reverse engineering is imo the best way of getting the experience of pushing your thinking in a controlled way, at least if you have the kind of personality where you are stubborn in wanting to solve the problem.

Go crack an old game or something!

jillesvangurp - 4 hours ago

You can't change the world, you can change yourself. Many people don't like change. So, people get frustrated when the world inevitably changes and they fail to adapt. It's called getting older. Happens to us all.

I'm not immune to that and I catch myself sometimes being more reluctant to adapt. I'm well aware and I actively try to force myself to adapt. Because the alternative is becoming stuck in my ways and increasingly less relevant. There are a lot of much younger people around me that still have most of their careers ahead of them. They can try to whine about AI all they want for the next four decades or so but I don't think it will help them. Or they can try to deal with the fact that these tools are here now and that they need to learn to adapt to them whether they like it or not. And we are probably going to see quite some progress on the tool front. It's only been 3 years since ChatGPT had its public launch.

To address the core issue here. You can use AI or let AI use you. The difference here is about who is in control and who is setting the goals. The traditional software development team is essentially managers prompting programmers to do stuff. And now we have programmers prompting AIs to do that stuff. If you are just a middle man relaying prompts from managers to the AI, you are not adding a lot of value. That's frustrating. It should be because it means apparently you are very replaceable.

But you can turn that around. What makes that manager the best person to be prompting you? What's stopping them from skipping that entirely? Because that's your added value. Whatever you are good at and they are not is what you should be doing most of your time. The AI tools are just a means to an end to free up more time for whatever that is. Adapting means figuring that out for yourself and figuring out things that you enjoy doing that are still valuable to do.

There's plenty of work to be done. And AI tools won't lift a finger to do it until somebody starts telling them what needs doing. I see a lot of work around me that isn't getting done. A lot of people are blind to those opportunities. Hint: most of that stuff still looks like hard work. If some jerk can one shot prompt it, it isn't all that valuable and not worth your time.

Hard work usually involves thinking hard, skilling up, and figuring things out. The type of stuff the author is complaining he misses doing.

erelong - 8 hours ago

You were walking to your destination which was three miles away

You now have a bicycle which gets you there in a third of the time

You need to find destinations that are 3x as far away than before

theworstname - 8 hours ago

If it's this easy to convince you to stop being creative, to stop putting in effort to think critically, then you don't deserve the fulfilment that creativity and critical thinking can give you. These vibe coding self pity articles are so bizarre.

charcircuit - 5 hours ago

If you are thinking hard, I think you are doing software engineering wrong, even before AI. As an industry, all the different ways of doing things have already played out. Even big refactors or performance optimizations often can't be 100% predicted in their effectiveness. You want to just go ahead and implement these things rather than spend more time thinking. And as AI gets stronger, the just-try-a-bunch-of-approaches method will beat the think-hard approach by an even bigger margin.

noodleweb - 4 hours ago

I miss this too, I have had those moments of reward where something works and I want to celebrate. It's missing too for me.

With AI the pros outweigh the cons, at least at the moment, with what we have collectively figured out so far. But every day I wonder whether it's now possible to be more ambitious than ever and take on much bigger problems with the pretend-smart assistant.

armchairhacker - 8 hours ago

Personally: technical problems I usually think about for a couple days at most before I need to start implementing to make progress. But I have background things like future plans, politics, philosophy, and stories, so I always have something to think about. Close-up technical thinking is great, but sometimes it helps to step back and look at the bigger picture.

I don't think AI has affected my thinking much, but that's probably because I don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand, if not change, most of it: either because I don't trust the AI, or I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), or the code has a leaky abstraction, the specification was wrong, the code has a bug, the code looks like it has a bug (but the problem ends up somewhere else), I'm looking for a bug, etc. Although the AI more and more often saves time and thinking vs. writing the implementation myself, all of the above means I still have to think about the code rather than treat it like a black box.

harrisonjackson - 8 hours ago

I believe it is a type of burnout. AI might have accelerated both the work and that feeling.

I found that doing more physical projects helped me. Large woodworking and home improvement projects: built-in bookshelves, a huge butcher block bar top (with 24+ hours of mindlessly sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...

nubinetwork - 3 hours ago

> the number of times I truly ponder a problem for more than a couple of hours has decreased tremendously

Isn't that a good thing? If you're stuck on the same problem forever, then you're not going to get past it and never move on to the next thing... /shrug

tomquirk - 5 hours ago

The answer to this is to shift left into product/design.

Sure, I'm doing less technical thinking these days. But all the hard thinking is happening on feature design.

Good feature design is hard for AI. There's a lot of hidden context: customer conversations, unwritten roadmaps, understanding your users and their behaviour, and even an understanding of your existing feature set and how this new one fits in.

It's a different style of thinking, but it is hard, and a new challenge we gotta embrace imo.

tolerance - 7 hours ago

I’d love to be able to see statistics that show LLM use and reception according to certain socioeconomic factors.

phromo - 8 hours ago

I am thinking harder than ever due to vibe coding. How will markets shift? What will be in demand? How will the consumer side adapt? How do we position? Predicting the future is a hard problem... The thinker in me is working relentlessly since December. At least for me the thinker loves an existential crisis like no other.

est - 3 hours ago

I wrote a blog about this as well

Hard Things in Computer Science, and AI Aren't Fixing Them

https://blog.est.im/2026/stderr-04

felipelalli - 2 hours ago

Me: I put your text into AI and asked it to summarize. We really do have a critical problem of mental laziness.

rcvassallo83 - 8 hours ago

Thinking harder than I have in a long time with AI assisted coding.

As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.

I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.

The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself have really paid off. The codebase is a joy to work in, easy to maintain and extend.

The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.

frgturpwd - 3 hours ago

It seems like what you miss is actually a stable cognitive regime built around long uninterrupted internal simulation of a single problem. This is why people play strategy video games.

mightymosquito - 6 hours ago

While I see where you are coming from, I think what has really gone for a toss is the utility of thinking hard.

Thinking hard has never been easier.

I think AI is a boon for an autodidact. Now I suddenly have a teacher who is always accessible and will teach me whatever I want, for as long as I want, exactly the way I want, and I don't have to worry about my social anxiety kicking in.

Learn advanced cryptography? AI. Figure out formal verification? AI. Etc.

johanvts - 6 hours ago

I don't think LLMs really took away much thinking; for me they replaced searching Stack Exchange to find incantations. Now I can get them instantly, customized to my situation. I miss thinking hard too, but I don't blame that on AI. It's more that as a dev you are paid to think the absolute minimum needed to solve an issue or implement a feature. I don't regret leaving academia, but being paid to think is something I will always miss.

msephton - 5 hours ago

I'm not sure I agree. Actually, I don't agree. You only stop thinking hard if you decide to stop thinking hard. Nobody, no tool, is forcing you to stop thinking, pushing, reaching. If the thinking ceiling has changed, which I think it has, then it's entirely up to you to either move with it or stay still.

userbinator - 7 hours ago

In my experience you will need to think even harder with AI if you want a decent result, although the problems you'll be thinking about will be more along the lines of "what the hell did it just write?"

The current major problem with the software industry isn't quantity, it's quality; and AI just increases the former while decreasing the latter. Instead of e.g. finding ways to reduce boilerplate, people are just using AI to generate more of it.

porcoda - 9 hours ago

> At the end of the day, I am a Builder. I like building things. The faster I build, the better.

This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but is something maintainable, extensible, etc), it’s hard to not employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.

Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.

- 5 hours ago
[deleted]
jbrooks84 - 2 hours ago

You are doing something wrong. AI has not taken away thinking hard.

AdieuToLogic - 9 hours ago

Cognitive skills are just like any other - use them and they will grow, do not and they will decline. Oddly enough, the more one increases their software engineering cognition, the less the distance between "The Builder" and "The Thinker" becomes.

zkmon - 7 hours ago

When people missed working hard, they turned to fake physical work (gyms). So people now need some fake thinking work.

Except for eating and sleeping, all other human activities are fake now.

ertucetin - 5 hours ago

It’s the journey, not the destination, but with AI it’s only the destination, and it takes all the joy.

tbs1980 - 5 hours ago

I read something similar here https://open.substack.com/pub/strangeloopcanon/p/on-thinkers...

tevli - 4 hours ago

Exactly what I've been thinking. Outsourcing tasks and thinking about problems to AI just seems easier these days, and you still get to feel in charge because you're the one giving instructions.

rozumem - 6 hours ago

I can relate to this. Coding satisfies my urge to build and ship and have an impact on the world. But it doesn't make me think hard. Two things which I've recently gravitated to outside of coding which make me think: blogging and playing chess.

Maybe I subconsciously picked these up because my Thinker side was starved for attention. Nice post.

ccppurcell - 5 hours ago

In my experience, the so-called 1% are mostly just thinkers and researchers who have dedicated a lot more time from an earlier age to thinking and/or researching. There are a few geniuses out there but it's 1 in millions not in hundreds.

ggm - 8 hours ago

A lot of productive thinking happens when asleep, in the shower, in flow walking or cycling or rowing.

It's hard to rationalise this as billable time, but they pay for outcome even if they act like they pay for 9-5 and so if I'm thinking why I like a particular abstraction, or see analogies to another problem, or begin to construct dialogues with mysel(ves|f) about this, and it happens I'm scrubbing my back (or worse) I kind of "go with the flow" so to speak.

Definitely thinking about the problem can be a lot better than actually having to produce it.

moorebob - 2 hours ago

My mindset last year: I am now a mentor to a junior developer

My mindset this year: I am an engineering manager to a team of developers

If the pace of AI improvement continues, my mindset next year will need to be: I am CEO and CTO.

I never enjoyed the IC -> EM transition in the workplace because of all the tedious political issues, people management issues and repetitive admin. I actually went back to being an IC because of this.

However, with a team of AI agents, there's less BS, and less holding me back. So I'm seeing the positives - I can achieve vastly more, and I can set the engineering standards, improve quality (by training and tuning the AI) and get plenty of satisfaction from "The Builder" role, as defined in the article.

Likewise I'm sure I would hate the CEO/CTO role in real life. However, I am adapting my mindset to the 2030s reality, and imagining being a CEO/CTO to an infinitely scalable team of Agentic EMs who can deliver the work of hundreds of real people, in any direction I choose.

How much space is there in the marketplace if all HN readers become CEOs and try to launch their own products and services? Who knows... but I do know that this is the option available to me, and it's probably wise to get ahead of it.

martin1975 - 4 hours ago

I've been writing C/C++/Java for 25 years and am trying to learn disciplined, risk-managed forex trading. It's a whole new level of hard work/thinking.

Dr_Birdbrain - 9 hours ago

I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results. “Multi-day deep thinking” sounds like an outrageous luxury, at least in my current job.

zepesm - 4 hours ago

That's why i'm still pushing bytes on C64 demoscene (and recommend such a niche as a hobby to anyone). It's great for the sanity in modern ai-driven dev-world ;)

raincole - 9 hours ago

I really don't believe AI allows you to think less hard. If it did, it would be amazing, but the current AI hasn't got to that capability. It forces you to think about different things at best.

saturatedfat - 7 hours ago

I think for days at a time still.

I don’t think you can get the same satisfaction out of these tools if what you want to do is not novel.

If you are exploring the space of possibilities for which there are no clear solutions, then you have to think hard. Take on wildly more ambitious projects. Try to do something you don’t think you can do. And work with them to get there.

koakuma-chan - 9 hours ago

What a bizarre claim. If you can solve anything by thinking, why don't you become a scientist? Think of a theory that unites quantum physics and general relativity.

muyuu - 4 hours ago

this also used to happen to me when I was in a position that involved a lot of research early on; then, after the product was a reality and it worked, it tapered off to small improvements and maintenance

I can imagine many positions work out this way in startups

it's important to think hard sometimes, even if it means taking time off to do the thinking - you can do it without the socioeconomic pressure of a work environment

sbinnee - 7 hours ago

What OP wants to say is that they miss the process of thinking hard for days and weeks until, one day, a brilliant idea pops up in bed before sleep. I lost my "thinking hard" process again today at work, to my pragmatism, or more precisely my job.

scionni - 6 hours ago

I have a very similar background and a very similar feeling when i think of programming nowadays.

Personally, I am going deeper into Quantum Computing, hoping that this field will require thinkers for a long time.

Meneth - 3 hours ago

I knew this sort of thing would happen before it was popular. Accordingly:

Never have I ever used an LLM.

tbmtbmtbmtbmtbm - 9 hours ago

Make sure you start every day with the type of confidence that would allow you to refer to yourself as an intellectual one-percenter

fatfox - 6 hours ago

Just sit down and think hard. If it doesn’t work, think harder.

- 5 hours ago
[deleted]
voidUpdate - 4 hours ago

If you miss the experience of not using LLMs, then just... don't? Is someone forcing you to code with LLM help?

- 7 hours ago
[deleted]
emsign - 2 hours ago

I miss hard thinking people.

- 7 hours ago
[deleted]
dhananjayadr - 7 hours ago

The author's point is: if you use AI to solve the problem, then after the chat gives you the solution you say "oh yes, OK, I understand it, I can do it" (and no, you can't).

Animats - 9 hours ago

"Sometimes you have to keep thinking past the point where it starts to hurt." - Fermi

z3t4 - 7 hours ago

I always search the web, ask others, or read books in order to find a solution. When I do not find an answer from someone else, that's where I have to think hard.

hpone91 - 5 hours ago

Just give Umineko a play/readthrough to get your deep thinking gray cells working again.

zatkin - 9 hours ago

I feel that AI doesn't necessarily replace my thinking, but actually helps to explore deeper - on my behalf - alternative considerations in the approach to solving a problem, which in turn better informs my thinking.

yehoshuapw - 6 hours ago

have a look at https://projecteuler.net/

for "Thinker" brain food. (it still has the issue of not being a pragmatic use of time, but there are plenty interesting enough questions which it at least helps)

thorum - 5 hours ago

> but the number of problems requiring deep creative solutions feels like it is diminishing rapidly.

If anything, we have more intractable problems needing deep creative solutions than ever before. People are dying as I write this. We’ve got mass displacement, poverty, polarization in politics. The education and healthcare systems are broken. Climate change marches on. Not to mention the social consequences of new technologies like AI (including the ones discussed in this post) that frankly no one knows what to do about.

The solution is indeed to work on bigger problems. If you can’t find any, look harder.

keithnz - 9 hours ago

I feel like I'm doing much nicer thinking now: more systems thinking. Not only that, I'm iterating on system design a lot more, because it is a lot easier to change with AI.

sfink - 9 hours ago

I definitely relate to this. Except that while I was in the 1% in university who thought hard, I don't think my success rate was that high. My confidence at the time was quite high, though, and I still remember the notable successes.

And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:

The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.

I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.

Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
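To give a flavor of the quizzes (this is my reconstruction from memory, not a verbatim question from the chat), they were in the spirit of: which of these lines compile, and why?

```rust
// Ownership quiz: which lines are errors, and why?
fn main() {
    let s = String::from("hello");
    let t = s;                 // move: ownership transfers to `t`
    // println!("{}", s);      // error[E0382]: borrow of moved value: `s`
    println!("{}", t);         // fine: `t` owns the String now

    let u = t.clone();         // deep copy: `t` and `u` are both valid
    println!("{} {}", t, u);

    let n: i32 = 5;            // i32 implements Copy, so no move occurs
    let m = n;
    println!("{} {}", n, m);   // both still usable
}
```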

Bengalilol - 4 hours ago

Cognitive debt lies ahead for all of us.

tietjens - 6 hours ago

I wish the author would give some examples of what he wants to think hard about.

macmac_mac - 4 hours ago

Reading this made me realize I used to actually think hard about bugs and design tradeoffs, because I had no choice.

spacecadet - 2 hours ago

Every day man... Thinking hard on something is a conscious choice.

Besibeta - 10 hours ago

The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be the case that you needed to plan 10 steps ahead because refactoring was expensive; now people just focus on the next problem ahead, but the compounding AI slop will blow up eventually.

woah - 8 hours ago

Just work on more ambitious projects?

marcus_holmes - 6 hours ago

I think it's just another abstraction layer, and moves the thinking process from "how do I solve this problem in code?" to "how do I solve this problem in orchestration?".

I recently used the analogy of when compilers were invented. Old-school coders wrote machine code, and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer and started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient, programs. And for years they were right.

The hard problems now are "how can I get a set of non-deterministic, fault-prone, LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There's a few generic solutions, a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.

gethly - 2 hours ago

Skill issue

capl - 5 hours ago

That’s funny cause I feel the opposite: LLMs can automate, in a sloppy fashion, building the first trivial draft. But what remains is still thinking hard about the non trivial parts.

jurgenaut23 - 3 hours ago

Man, this resonates SO MUCH with me. I have always loved being confronted with a truly difficult problem. And I always had that (obviously misguided, but utterly motivating) feeling that, with enough effort, no problem could ever resist me. That it was just a matter of grinding a bit further, a bit longer.

This is why I am so deeply opposed to using AI for problem solving I suppose: it just doesn’t play nice with this process.

makerdiety - 2 hours ago

If you don't have to think, then what you're building isn't really news worthy.

So, we have an inflation of worthless stuff being done.

conception - 7 hours ago

“We now buy our bread… it comes sliced… and sure you can just go and make your sandwich and it won’t be a rustic, sourdough that you spent months cultivating. Your tomatoes will be store bought not grown heirlooms. In the end… you have lost the art of baking bread. And your sandwich making skills are lost to time… will humanity ever bake again with these mass factories of bread? What have we lost! Woe is me. Woe is me.”

mw888 - 8 hours ago

Give the AI less responsibility but more work. Inline completion is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, my type signatures, etc., it can reduce my second-by-second work significantly while taking away little of my cognitive agency.

These are also tasks the AI can succeed at rather trivially.

Better completions are not as sexy, but amid all the pretending that agents are great engineers, it's an amazing feature that often gets glossed over.

Another example is automatic test generation or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively relative to the AI of the day. Warnings can just be flags in the editors spotting obvious mistakes. Off-by-one errors for example, which might go unnoticed for a while, would be an achievable and valuable notice.

Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.

...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.

These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.

kypro - 2 hours ago

Maybe this is just me, but it's not the thinking I miss so much. I personally quite like knowing how to do things and being able to work productively.

For me it's always been the effort that's fun, and I increasingly miss that. Today it feels like I'm playing the same video game I used to enjoy with all the cheats on, or going back to an early level after maxing out my character. In some ways the game play is the same, same enemies, same map, etc, but the action itself misses the depth that comes from the effort of playing without cheats or with a weaker character and completing the stage.

What I miss personally is coming up with something in my head and having to build it with my own fingers with effort. There's something rewarding about that which you don't get from just typing "I want x".

I think this craving for effort is a very human thing, to be honest... It's why we bake bread at home instead of just buying it from a local bakery that realistically will make bread twice as good. The enjoyment comes from the effort. I personally like building furniture, and although my furniture sucks compared to what you might be able to buy at a store, it's so damn rewarding to spend days working on something and then have a real physical thing you can use that you built by hand.

I've never thought of myself as someone who likes the challenge of coding. I just like building things. And I think I like building things because building things is hard. Or at least it was.

larodi - 8 hours ago

Well, thinking hard is still there if you work on hard abstract problems. I keep thinking very hard, even though 4 CCs pump out code while I do. Besides, being a Garry Kasparov, playing on several boards at once, takes thinking.

sergiotapia - 9 hours ago

With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.

I'm more spent than before where I would spend 2 hours wrestling with tailwind classes, or testing API endpoints manually by typing json shapes myself.

saulpw - 8 hours ago

The ziphead era of coding is over. I'll miss it too.

anonymous344 - 9 hours ago

Yes, but you solved problems already solved by someone else. How about something that hasn't been solved, or hasn't even been noticed yet? That gives the greatest satisfaction.

pixelmelt - 8 hours ago

Would like to follow your blog, is there an rss feed?

dudeinjapan - 6 hours ago

If you feel this way, you aren't using AI right.

For me, Claude, Suno, Gemini and AI tools are pure bliss for creation, because they eliminate the boring grunt work. Who cares how to implement OAuth login flow, or anything that has been done 1000 times?

I do not miss doing grunt work!

vasco - 7 hours ago

You don't have to miss it, buy a differential equation book and do one per day. Play chess on hard mode. I mean there's so many ways to make yourself think hard daily, this makes no sense.

It's like saying I miss running. Get out and run then.

sublinear - 8 hours ago

> I have tried to get that feeling of mental growth outside of coding

A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.

I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.

All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.

My point is, you might want to reconsider how much you blame AI.

rustystump - 9 hours ago

At the day job there was a problem with performance loading data in an app.

Seven months later, after waffling on it on and off, with and without AI, I finally cracked it.

The author is not wrong though: since AI, I don't hit this as often. I do miss the feeling though.

bowsamic - 5 hours ago

I specifically spend my evenings reading Hegel and other hard philosophy as well as writing essays just to force myself to think hard

hahahahhaah - 9 hours ago

I think AI didn't do this. Open source, libraries, cloud, frameworks and agile conspired to do this.

Why solve a problem when you can import a library / scale up / use managed Kubernetes / etc.?

The menu is great, and problems needing deep thought seem rare.

There might be deep thought problems on the requirements side of things but less often on the technical side.

foxes - 5 hours ago

I think I miss my thinking..

bigstrat2003 - 9 hours ago

Dude, I know you touched on this but seriously. Just don't use AI then. It's not hard, it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work well! This is a totally self inflicted problem that you can undo any time you want.

d--b - 6 hours ago

Why not think hard about what to build instead of how to build it?

the_af - an hour ago

The article is interesting. I don't know how I feel about it, though I'm both a user of AI (no choice anymore in the current job environment) and vaguely alarmed by it; I'm in the camp of those who fear for the future of our profession, and I know the counterarguments but I'm not convinced.

A couple of thoughts.

First, I think the hardness of the problems most of us solve is overrated. There is a lot of friction, tuning things, configuring things right, reading logs, etc. But are the problems most of us are solving really that hard? I don't think so, except for those few doing groundbreaking work or sending rockets to space.

Second, even thinking about easier problems is good training for the mind. There's that analogy that the brain is a "muscle", and I think it's accurate. If we always take the easy way out for the easier problems, we don't exercise our brains, and then when harder problems come up what will we do?

(And please, no replies of the kind "when portable calculators were invented...").

cranberryturkey - 7 hours ago

There's an irony here -- the same tools that make it easy to skim and summarize can also be used to force deeper thinking. The problem isn't the tools, it's the defaults.

I've found that the best way to actually think hard about something is to write about it, or to test yourself on it. Not re-read it. Not highlight it. Generate questions from the material and try to answer them from memory.

The research on active recall vs passive review is pretty clear: retrieval practice produces dramatically better long-term retention than re-reading. Karpicke & Blunt (2011) showed that practice testing outperformed even elaborative concept mapping.

So the question isn't whether AI summarizers are good or bad -- it's whether you use them as a crutch to avoid thinking, or as a tool to compress the boring parts so you can spend more time on the genuinely hard thinking.

kovkol - 4 hours ago

I mean, I spent most of my career being pressured to move from type 3 to either of the other 2, so I don't blame AI for this (it doesn't help, though, especially if you delegate too much to it).

ares623 - 7 hours ago

Rich Hickey and the Clojure folks coined the term Hammock Driven Development. It was tongue in cheek but IMO it is an ideal to strive towards.

rvz - 7 hours ago

Great, so does that mean it's time to vibe code our own alternatives to everything, such as the Linux kernel, because the AI is surely 'smarter' than all of us?

Seen a lot of DIY vibe coded solutions on this site and they are just waiting for a security disaster. Moltbook being a notable example.

That was just the beginning.

kamaal - 8 hours ago

To me, thinking hard involves the following steps:

1. Take a pen and paper.

2. Write down what we know.

3. Write down where we want to go.

4. Write down our methods of moving forward.

5. Make changes to 2, using 4, and see if we are getting closer to 3. And course correct based on that.

I still do it a lot. LLM's act as assist. Not as a wholesale replacement.

drawnwren - 8 hours ago

"Before you read this post, ask yourself a question: When was the last time you truly thought hard? ... a) All the time. b) Never. c) Somewhere in between."

What?

yieldcrv - 4 hours ago

man, setting up worktrees for parallelized agentic coding is hard; setting up containerized worktrees, so you can run with dangerous permissions on without nuking your host system, is hard

deciding whether to use that to work on multiple features on the same code base, or the same feature in multiple variations is hard

deciding whether to work on a separate project entirely while all of this is happening is hard and mentally taxing

planning all of this for a few hours and then watching it run all at once, autonomously, is satisfying!

tehjoker - 8 hours ago

Why not find a subfield that is more difficult and requires some specialization then?

ars - 8 hours ago

I think hard all the time, AI can only solve problems for me that don't require thinking hard. Give it anything more complex and it's useless.

I use AI for the easy stuff.

Der_Einzige - 9 hours ago

Instant upvote for a Philipp Mainländer quote at the end. He's the OG "God is dead" guy, and Nietzsche was reacting (very poorly) to Mainländer and other pessimists like Schopenhauer when he followed up with his own, shittier version of "God is dead".

Please read up on his life. Mainländer is the most extreme/radical philosophical pessimist of them all. He wrote a whole book about how you should rationally kill yourself, and then he killed himself shortly after.

https://en.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder

https://dokumen.pub/the-philosophy-of-redemption-die-philoso...

Max Stirner and Mainländer would have been friends; they are kindred spirits philosophically.

https://en.wikipedia.org/wiki/Bibliography_of_philosophical_...

themafia - 9 hours ago

> Yes, I blame AI for this.

Just don't use it. That's always an option. Perhaps your builder doesn't actually benefit from an unlimited runway detached from the cost of effort.

tayo42 - 9 hours ago

> I tried getting back in touch with physics, reading old textbooks. But that wasn’t successful either. It is hard to justify spending time and mental effort solving physics problems that aren’t relevant or state-of-the-art

I tried this with physics and philosophy. I think I want a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.

- 9 hours ago
[deleted]
defraudbah - 5 hours ago

another AI blame/praise/adapt.. you definitely didn't think hard about this one, did you

okokwhatever - 2 hours ago

I get it, and I somehow also agree with the division (thinker/builder), but I feel this is only the representation of a new society where fewer humans are necessary to think deeply. No offense here, it's just my own unsatisfied brain trying to adapt to a whole new era.

LoganDark - 7 hours ago

Every time I try to use LLMs for coding, I completely lose touch with what it's doing, it does everything wrong and it can't seem to correct itself no matter how many times I explain. It's so frustrating just trying to get it to do the right thing.

I've resigned myself to mostly using it for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms, where almost nothing is documented except for random WWDC video tutorials that lack associated text articles.

I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principle. It's all just slop.

I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.

My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.

I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.

Thanemate - 3 hours ago

I am one of those junior software developers who always struggled with starting their own projects. Long story short, I realized that my struggle stems from my lack of training in open-ended problems, where there are many ways to go about solving something, and while some ways are better than others, there's no clear cut answer because the tradeoffs may not be relevant with the end goal.

I realized that when a friend of mine gave me Factorio as a gift last Christmas, and I found myself facing the exact same resistance I face when thinking about working on my personal projects. To be more specific, it's a fear, an urge to close the game and leave it "for later" the moment I discover that I've either done something wrong or that new requirements have been added that will force me to change the way my factories connect with each other (or even their placement). Example: Tutorial 4 introduces players to research and labs, and this feeling appears when I realize how much spaghetti I have to introduce just to create the materials needed for green science!

So I did what any AI user would do and opted to use ChatGPT to push through the parts that are overwhelming, uncertain, too open-ended, or everything in between. The result works, because the LLM has been trained on Factorio guides, and it goes as far as suggesting layouts to save me some headache!

Awesome, no? Except all I've done is outsource the decision of how to go about "the thing" to someone else. And while it's true I could've done this even before LLMs by simply watching a YouTube video guide, the LLM's help doesn't stop there: it can alleviate my indecisiveness and frustration with open-ended problems in personal projects, recommend a project structure, and generate bullet-pointed lists so I can pretend I work for a company where someone else writes the spec and I just follow it step by step, like a good junior software engineer would.

And yet all I did just postponed the inevitable exercise of a very useful mental habit: To navigate uncertainty, pause and reflect, plan, evaluate a trade-off or 2 here and there. And while there are other places and situations where I can exercise that behavior, the fact remains that my specific use of LLM removed that weight off my shoulders. I became objectively someone who builds his project ideas and makes progress in his Factorio playthrough, but the trade-off is I remain the same person who will duck and run the moment resistance happens, and succumb to the urge of either pushing "the thing" for tomorrow or ask chatGPT for help.

I cannot imagine how someone would claim that removing an exercise from my daily gym visit will not result in weaker muscles. There are so many hidden assumptions in such statements, and an excessive focus of results in "the new era where you should start now or be left behind" where nobody's thinking how this affects the person and how they ultimately function in their daily lives across multiple contexts. It's all about output, output, output.

How far are we from the day when people will say "well, you certainly don't need to plan a project, a factory layout, or even decide; just have ChatGPT summarize the trade-offs, read the bullet points, and choose"? We're off-loading a portion of the research AND a portion of the execution, thinking we'll surely still be activating the synapses in our brains that retain habits, just like someone who lifts 50% lighter weights at the gym expects to maintain muscle mass or burn fat.

IhateAI - 7 hours ago

I refer to it as "Think for me SaaS", and it should be avoided like the plague. Literally, it will give your brain a disease we haven't even named yet.

It's as if I woke up in a world where half the restaurants worldwide started changing their name to McDonald's and gaslighting all their customers into thinking McDonald's is better than their from-scratch menu.

Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.

It's weird, I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).

I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being tricked by promises of being able to join that class, but that's not what's going to happen.

You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated with the quantity and quality of tokens you can afford (or be given, or loaned!?)

Be wary!!!

everyone - 7 hours ago

Guy complains about his own vibe coding... stop doing it then!! Do you really think it's practical? Your job must be really easy if it is.

ychompinator - 6 hours ago

[dead]

hicsuntcp - 3 hours ago

[dead]

utopiah - 8 hours ago

Pre-processed food consumer complains about not cooking anymore. /s

... OK, I guess. I mean, sorry, but if it's a revelation to you that using a skill less means honing it less, you were clearly NOT thinking hard BEFORE you started using AI. It sure didn't help, but the problem didn't start there.

eggsandbeer - 8 hours ago

[dead]

wetpaws - 8 hours ago

[dead]

bubbi - 5 hours ago

[dead]

whywhywhywhy - 3 hours ago

I get tired working with AI much faster than I did when I used to code. Dunno if it's just that I don't really need to think much at all, other than keeping the broad plan in mind and keeping an eye out for red flags of a wrong direction in the transcript. I don't even bother reading the code anymore; since Opus 4.5 I haven't felt the need to.

Manually coding engaged my brain much more and was somehow less exhausting. Kinda feels like getting out of bed and doing something vs lazing around and ending up feeling more tired despite having done less.