Outsourcing thinking

erikjohannes.no

269 points by todsacerdoti 3 days ago


3371 - 3 days ago

Ever since Google started experimenting with LLMs in Gmail, it has bothered me a lot. I firmly believe that every word, and the way you put them together, portrays who you are. Using an LLM for direct communication is harmful to human connections.

sumul - 2 days ago

This part really caught my attention (along with the rest of the preceding paragraph):

> Our inability to see opportunities and fulfillment in life as it is, leads to the inevitable conclusion that life is never enough, and we would always rather be doing something else.

I agree with the article completely, as it effectively names an uneasy feeling of hesitation I’ve had all along with how I use LLMs. I have found them tremendously valuable as sounding boards when I’m going in circles in my own well-worn cognitive (and sometimes even emotional) ruts. I have also found them valuable as research assistants, and I feel grateful that they arrived right around the time that search engines began to feel all but useless. I haven’t yet found them valuable in writing on my behalf, whether it’s prose or code.

During my formal education, I was very much a math and science person. I enjoyed those subjects. They came easily to me, which I also enjoyed. I did two years of liberal arts in undergrad, and they kicked my butt academically in a way that I didn’t realize was possible. I did not enjoy having to learn how to think and articulate those thoughts in seminars and essays. I did not enjoy the vulnerability of sharing myself that way, or of receiving feedback. If LLMs had existed, I’m certain I would have leaned hard on them to get some relief from the constant feeling of struggle and inadequacy. But then I wouldn’t have learned how to think or how to articulate myself, and my life and career would have been significantly less meaningful, interesting, and satisfying.

b00ty4breakfast - 3 days ago

What I am worried about (and it's something about regular internet search that has worried me for the past ~10 years or so) is that, after they've trained a generation of folks to rely on this tech, they're going to start inserting things into the training data (or whatever the method would be) to bias it towards favoring certain agendas wrt the information it presents to the users in response to their queries.

jonmagic - 2 days ago

I really liked this piece, and I share the concern, but I think “outsourcing thinking” is slightly the wrong frame.

In my own work, I found the real failure mode wasn’t using AI, it was automating the wrong parts. When I let AI generate summaries or reflections for me, I lost the value of the task. Not because thinking disappeared, but because the meaning-making did.

The distinction that’s helped me is:

- If a task’s value comes from doing the thinking (reflection, synthesis, judgment), design the AI as a collaborator: asking questions, prompting, pushing back.

- If the task is execution or recall, automate it aggressively.

So the problem isn’t that we outsource thinking, it’s that we sometimes bypass the cognitive loops that actually matter. The design choice is whether AI replaces those loops or helps surface them.
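
For what it’s worth, here’s a minimal sketch of that split in Python, assuming a hypothetical chat() helper rather than any particular LLM client:

    # Rough sketch of the collaborator-vs-automator distinction above.
    # chat() is a hypothetical stand-in for whatever LLM client you use.
    def chat(prompt: str) -> str:
        raise NotImplementedError  # swap in your own client here

    def run_task(task: str, value_is_in_the_thinking: bool) -> str:
        if value_is_in_the_thinking:
            # Collaborator mode: the model questions and pushes back,
            # but the synthesis and final wording stay with the human.
            return chat(
                "Act as a critical collaborator. Do not write my answer. "
                "Ask me three pointed questions and challenge one assumption "
                "about this task:\n" + task
            )
        # Automator mode: execution or recall work, delegated outright.
        return chat("Complete this task directly and concisely:\n" + task)

    # run_task("Write my weekly project reflection", value_is_in_the_thinking=True)
    # run_task("Convert these dates to ISO 8601", value_is_in_the_thinking=False)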

I wrote more about that here if useful: https://jonmagic.com/posts/designing-collaborations-not-just...

camgunz - 3 days ago

This list of things not to use AI for is so quaint. There's a story on the front page right now from The Atlantic: "Film students who can no longer sit through films". But why? Aren't they using social media, YouTube, Netflix, etc. responsibly? Surely they know the risks, and surely people will be just as responsible with AI, even given the enormous economic and professional pressures to be irresponsible.

gemmarate - 3 days ago

The interesting axis here isn’t how much cognition we outsource, it’s how reversible the outsourcing is. Using an LLM as a scratchpad (like a smarter calculator or search engine) is very different from letting it quietly shape your writing, decisions, and taste over years. That’s the layer where tacit knowledge and identity live, and it’s hard to get back once the habit forms.

We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.

preston-kwei - 3 days ago

The “lump of cognition” framing misses something important: it’s not about how much thinking we do, but which thinking we stop doing. A lot of judgment, ownership, and intuition comes from boring or repetitive work, and outsourcing that isn’t free. Lowering the cost of producing words clearly isn’t the same as increasing the amount of actual thought.

OsamaJaber - 3 days ago

This is something I noticed myself. I let AI handle some of my project and later realized I didn't even understand my own project well enough to make decisions about it :)

nsainsbury - 3 days ago

I actually wrote up quite a few thoughts related to this a few days ago but my take is far more pessimistic: https://www.neilwithdata.com/outsourced-thinking

My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.

pveierland - 3 days ago

One bothersome aspect of generative assistance for personal and public communication not mentioned is that it introduces a lazy hedge, where a person can always claim that "Oh, but that was not really what I meant" or "Oh, but I would not express myself in that way" - and use it as a tool to later modify or undo their positions - effectively reducing honesty instead of increasing it.

jsattler - 2 days ago

Very interesting, thanks for sharing this. Karpathy's recent tweet about "A few random notes from claude coding quite [...]" got me thinking a lot about offloading thinking, and more specifically about failure. Failure is important for learning. When I use AI and it makes mistakes, I often tend to blame the AI and offload the failure. I think this post explores similar thoughts, without talking much about failure. It will be interesting to see the long-term effects.

reducesuffering - 3 days ago

See Scott Alexander’s The Whispering Earring (2012):

https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...

throwaway2037 - 2 days ago

For those unaware, this phrase: "The lump of cognition fallacy" is a derivative of the classic economic fallacy: Lump of Labor Fallacy (or Lump of Jobs)

Google AI describes it as:

    This is the most common form, often used in debates about technology, immigration, or retirement. 
    Definition: The belief that there is a set, finite amount of work to be done in an economy.
    The Fallacy: Assuming that if one person works more, or if a machine does a job, there is less work left for others.
    Reality: An increase in labor or technology (like AI or automation) can increase productivity, lower costs, and boost economic activity, which actually creates more demand for labor.
    Examples:
    "If immigrants come to this country, they will take all our jobs" (ignoring that immigrants also consume goods and create demand for more jobs).
    "AI will destroy all employment" (ignoring that technology typically shifts the nature of work rather than eliminating it).
jemiluv8 - 3 days ago

Outsourcing thinking is exactly what I do with our developers: they are hired to do the kind of thinking I’d rather not do.

Animats - 2 days ago

The author says it's too long. So let's tighten it up.

A criticism of the use of large language models (LLMs) is that it can deprive us of cognitive skills. Are some kinds of use better than others? Andy Masley's blog says "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us — we will be able to think about other things.

My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".

Masley writes that it's "bad to outsource your cognition when it:"

- Builds tacit knowledge you'll need in future.

- Is an expression of care for someone else.

- Is a valuable experience on its own.

- Is deceptive to fake.

- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.

How we choose to use chatbots is about how we want our lives and society to be.

That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.

andsoitis - 3 days ago

Some of humanity’s most significant inventions are language (symbolic communication), writing, the scientific method, electricity, the computer.

Notice something subtle.

Early inventions extend coordination. Middle inventions extend memory. Later inventions extend reasoning. The latest inventions extend agency.

This suggests that human history is less about tools and more about outsourcing parts of the mind into the world.

oktcho - 2 days ago

We are going to be able to think plenty about other things than what we are doing, yes. That is called anxiety.

beaker52 - 3 days ago

I still read the LLMs’ output quite critically and I cringe whenever I do. LLMs are just plain wrong a lot of the time. They’re just not very intelligent. They’re great at pretending to be intelligent. They imitate intelligence. That is all they do. And I can see it every single time I interact with them. And it terrifies me that others aren’t quite as objective.

js8 - 2 days ago

I think we can make an analogy with our own brains, which have evolutionarily older parts (the limbic system) and evolutionarily younger parts (the neocortex). Now AI, I think, will be our new neocortex, another layer to our brain. And you can see the limbic system didn't "outsource" thinking to the neocortex - it's still doing it; but it can take (mostly good) advice from it.

Applying this analogy to human relationships - the neocortex allowed us to be more social. Social communication with the limbic system was mostly "you smell like a member of our species and I want to have sex with you". So having a neocortex expanded our social skills to having friends etc.

I think AI will have a similar effect. It will allow us to individually communicate with a large number of other people (millions). But it will be a different relationship than what we today call "personal communication", face to face, driven by our neocortex. It will be as incomprehensible to our neocortex as our language is incomprehensible to the limbic system.

wut-wut - 3 days ago

Interesting read.

To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) feeling more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention.

0xbadcafebee - 3 days ago

How many of you know how to do home improvement? Fix your own clothes? Grow your own food? Cook your own food? How about making a fire or shelter? People used to know all of those things. Now they don't, but we seem to be getting along in life fine anyway. Sure, we're all frightened by the media about the dangers lurking from not knowing more, but actually our lives are fine.

The things that are actually dangerous in our lives? Not informing ourselves enough about science, politics, economics, history, and letting angry people lead us astray. Nobody writes about that. Instead they write about spooky things that can't be predicted and shudder. It's easier to wonder about future uncertainty than deal with current certainty.

keepamovin - 3 days ago

One perspective I’m circling right now about this topic is that maybe we’re coming to realize as a society that what we considered intelligence (or symbolic intelligence, whatever you want to call that thing we measure with traditional IQ tests, verbal fluency, etc.) is actually a far less essential cognitive aspect of us as humans than we had previously assumed, and is in fact far more mechanical in nature than we had formerly believed.

This ties with how I sometimes describe current generation AI as a form of mechanized intelligence: like Babbage’s calculating machine, but scaled up to be able to represent all kinds of classes of things.

And where I’m currently coming down on this perspective is that maybe the effect of this realization will be something like the dichotomy outlined in the Dune series: namely, that between mechanized intelligence, embodied by the Mentats, and the more intuitive and prescient aspects of cognition, embodied by the Bene Gesserit and Paul’s lineage.

A simple but direct way to describe this transition in perspective may be that we come to see what we formerly thought of as intelligence in the Western/reductive tradition as a form of mechanized calculation that it’s possible to outsource to automatic, non-biological processes, and we start to lean more deeply into the more intuitive and prescient aspects of cognition.

One thing I’m reminded of is how Indian yogic texts describe various aspects of mind.

I’m not sure if it’s a one-to-one mapping because I’m not across that material but merely the idea of distinguishing between different aspects of mind is something with precedent; and central to that is the idea of removing association between self identity and the aspects of mind.

And so maybe one of the effects for us as a society will be something akin to that.

simianwords - 2 days ago

Not to nitpick, but I find his point about automating vacation planning with AI so silly.

Apparently he thinks of planning a vacation as some artistic expression.

techblueberry - 3 days ago

A lot of this stuff depends on how a person chooses to engage, but my contrarian take is that actually, throughout history, whenever anyone said X technology will lead to the downfall of humanity for Y reasons, that take was usually correct.

The article he references gives this example:

“Is it lazy to watch a movie instead of making up a story in your head?”

Yes, yes it is, this was a worry when we transitioned from oral culture to written culture, and I think it was probably prescient.

For many if not most people cultural or technological expectations around what skills you _have_ to learn probably have an impact on total capability. We probably lost something when Google Maps came out and the average person didn’t have to learn to read a map.

When we transitioned from paper and evening news to 24 hour partisan cable news, I think more people outsourced their political opinions to those channels.

kaffekaka - 2 days ago

Great blog post, and I fully agree. The human touch in communication and reflection cannot be emphasized enough.

slfreference - 3 days ago

Distributed verification. 8 billion of us can divide up the topics and subjects and pool together our opinions and best conclusions.

Isamu - 2 days ago

Surely we can do better than reading TFA and manually commenting on it.

jfengel - 3 days ago

Social media has given me a rather dim view of the quality of people's thinking, long before AI. Outsourcing it could well be an improvement.

acedTrex - 2 days ago

> The category of writing that I like to call "functional text", which are things like computer code and pure conveyance of information (e.g., recipes, information signs, documentation), is not exposed to the same issues.

I hate this take; computer code is just as rich in personality as writing. I can tell a tremendous amount about what kind of person someone is solely based on their code. Code is an incredibly personal expression of one's mental state, even if you might not realize it. LLMs have dehumanized this, and the functional outcomes become FAR more unpredictable.


nine_k - 3 days ago

Thinking developed naturally as a tool that helps our species to stay dominant on the planet, at least on land. (Not by biomass but by the ability to control.)

If outsourcing thought is beneficial, those who practice it will thrive; if not, they will eventually cease to practice it, one way or another.

Thought, as any other tool, is useful when it solves more problems than it creates. For instance, an ability to move very fast may be beneficial if it gets you where you want to be, and detrimental, if it misses the destination often enough, and badly enough. Similarly, if outsourced intellectual activities miss the mark often enough, and badly enough, the increased speed is not very helpful.

I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.