Nobody knows how the whole system works

surfingcomplexity.blog

161 points by azhenley 12 hours ago


rorylaitila - 3 hours ago

There are many layers to this. But there is one style of programming that concerns me: where you neither understand the layer above you (why the product exists and what the goal of the system is) nor the layer below (how to actually implement the behavior). In the past, many developers barely understood the business case, but at least they understood how to translate it into code, and could put backpressure on the business. Now, however, it's apparently not even necessary to know how the code works!

The argument seems to be, we should float on a thin lubricant of "that's someone else's concern" (either the AI or the PMs) gliding blissfully from one ticket to another. Neither grasping our goal nor our outcome. If the tests are green and the buttons submit, mission accomplished!

Using Claude, I can feel my situational awareness slipping from my grasp. It's increasingly clear that this style of development pushes you to stop looking at the code at all. My English instructions leave no residual growth in me. I learn nothing to send back up the chain, and I know nothing of what's below. Why should I exist?

planb - 2 hours ago

This article is about people using abstractions without knowing how they work. This is fine. This is how progress is made.

But someone designed the abstraction (e.g. the Wifi driver, the processor, the transistor), and they made sure it works and provides an interface to the layers above.

Now you could say a piece of software completely written by a coding agent is just another abstraction, but the article does not really make that point, so I don't see what message it tries to convey. "I don't understand my wifi driver, so I don't need to understand my code" does not sound like a valid argument.

matheus-rr - 2 hours ago

The dependency tree is where this bites hardest in practice. A typical Node.js project pulls in 800+ transitive dependencies, each with their own release cadence and breaking change policies. Nobody on your team understands how most of them work internally, and that's fine - until one of them ships a breaking change, deprecates an API, or hits end-of-life.

The anon291 comment about interface stability is exactly right. The reason you don't need to understand CPU microarchitecture is that x86 instructions from 1990 still work. Your React component library from 2023 might not survive the next major version. The "nobody knows how the whole system works" problem is manageable when the interfaces are stable and well-documented. It becomes genuinely dangerous when the interfaces themselves are churning.

What I've noticed is that teams don't even track which of their dependencies are approaching EOL or have known vulnerabilities at the version they're pinned to. The knowledge gap isn't just "how does this work" - it's "is this thing I depend on still actively maintained, and what changed in the last 3 releases that I skipped?" That's the operational version of this problem that bites people every week.
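
The kind of check I mean is mechanical enough to script. A minimal sketch in Python against the public npm registry - assuming the registry's usual JSON shape (dist-tags, per-version "deprecated" fields) and ignoring details like scoped-package URL escaping:

    # Sketch: flag pinned npm dependencies that are deprecated or behind latest.
    import json
    import urllib.request

    def check_dependency(name, pinned):
        with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
            meta = json.load(resp)
        latest = meta["dist-tags"]["latest"]
        deprecated = meta.get("versions", {}).get(pinned, {}).get("deprecated")
        if deprecated:
            print(f"{name}@{pinned}: DEPRECATED ({deprecated})")
        elif pinned != latest:
            print(f"{name}@{pinned}: behind latest ({latest})")

    with open("package.json") as f:
        pkg = json.load(f)
    for name, spec in pkg.get("dependencies", {}).items():
        check_dependency(name, spec.lstrip("^~"))  # crude semver-range handling

None of this tells you how a dependency works internally, but it at least tells you which black boxes are rotting under you.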

sgarland - an hour ago

> “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? [Paraphrasing]: interrupts, 802.11ax modulation scheme, QAM, memory models, garbage collection, field effect transistors...

To a reasonable degree, yes, I can. I am also probably an outlier, and the product of various careers, with a small dose of autism sprinkled in. My first career was as a Submarine Nuclear Electronics Technician / Reactor Operator in the U.S. Navy. As part of that training curriculum, I was taught electronics theory, troubleshooting, and repair, which begins with "these are electrons" and ends with "you can now troubleshoot a VMEbus [0] Motorola 68000-based system down to the component level." I also later went back to teach at that school, and rewrote the 68000 training curriculum to use the Intel 386 (progress, eh?).

Additionally, all submariners are required to undergo an oral board before being qualified, and analogous questions like that are extremely common, e.g. "I am a drop of seawater. How do I turn the light on in your rack?" To answer that question, you end up drawing (from memory) an enormous amount of systems and connecting them together, replete with the correct valve numbers and electrical buses, as well as explaining how all of them work, and going down various rabbit holes as the board members see fit, like the throttling characteristics of a gate valve (sub-optimal). If it's written down somewhere, or can be derived, it's fair game. And like TFA's discussion about Brendan Gregg's practice of finding someone's knowledge limit, the board members will not stop until they find something you don't know - at which point you are required to find it out, and get back to them.

When I got into tech, I applied this same mindset. If I don't know something, I find out. I read docs, I read man pages, I test assumptions, I tinker, I experiment. This has served me well over the years, with seemingly random knowledge surfacing during an incident, or when troubleshooting. I usually don't remember all of it, but I remember enough to find the source docs again and refresh my memory.

0: https://en.wikipedia.org/wiki/VMEbus

virgilp - 10 hours ago

That's not how things work in practice.

I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry & physics involved in cooking. BUT, when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop or from delivery - that's a whole different level of ignorance, one that's much more dangerous.

Yes, it may be fine & completely non-concerning if agricultural corporations produce your wheat and your meat; but if the corporation starts producing standardized cooked food for everyone, is it really the same - is it a good evolution, or not? That's the debate here.

bjt - 9 hours ago

The claimed connections here fall apart for me pretty quickly.

CPU instructions, caches, memory access, etc. are debated, tested, hardened, and documented to a degree that's orders of magnitude greater than the LLM-generated code we're deploying these days. Those fundamental computing abstractions aren't nearly as leaky or nearly as in need of refactoring tomorrow.

latexr - 2 hours ago

> This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.

That doesn’t make it OK. This is like being stuck in a room whose pillars are starting to deteriorate, then someone comes along with a sledgehammer and starts hitting them and your reaction is to shrug and say “ah, well, the situation is bad and will only get worse, but the roof hasn’t fallen on our heads yet so let’s do nothing”.

If the situation is untenable, the right course of action is to try to correct it, not shrug it off.

analog31 - 2 hours ago

Granted, I'm not a software developer, so the things I work on tend to be simpler. But the people I know who are recognized for "knowing how the whole thing works" are likely to have earned that distinction not necessarily by actually knowing how it works, but through:

1. The ability and interest to investigate things and find out how they work, when needed or desired. They are interested in how things work. They are probably competent in things that are "glue" in their disciplines, such as math and physics in my case.

2. The ability to improvise an answer when needed, by interpolating across gaps in knowledge, well enough to get past whatever problem is being solved. And to decide when something doesn't need to be understood.

erelong - 7 minutes ago

Reminds me of the short essay "I, Pencil"

The problem is education, and maybe ironically AI can assist in improving that

I've read a lot about programming and it all feels pretty disorganized; the post about programmers being ignorant about how compilers work doesn't sound surprising (go to a bunch of educational programming resources and see if they cover any of that)

It sounds like we need more comprehensive and detailed lists

For example, with objections to "vibe coding", couldn't we just make a list of people's concerns and then work at improving AI's outputs to reflect those concerns? (Things like security, designs to minimize tech debt, outputting for readability if someone does need to manually review the code in the future, etc.)

Incidentally, this also reminds me of political or religious stances against technology, like the Amish take for example, as the kind of ignorance of, and dependence on, processes outside our control discussed here seems to be an inherent quality of technological systems as they grow and become more complex.

cbdevidal - 3 hours ago

This also applies to other things. No one person knows how to make a pencil.

Three minute video by Milton Friedman: https://youtu.be/67tHtpac5ws?si=nFOLok7o87b8UXxY

mojuba - 8 hours ago

> AI will make this situation worse.

Being an AI skeptic more than not, I don't think the article's conclusion is true.

What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask the AI how the telephone works, or what happens when you enter a URL in the browser, it can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, it's already better than a human who has no clue about how the telephone works, or where to even begin if said human wanted to understand it.

Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have a vast scope of knowledge that makes them somewhat superior, albeit at the price of, well, being literally quite expensive and power hungry. But those are technical details.

LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.

LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.

mamp - 10 hours ago

Strange article. The problem isn’t that everyone doesn’t know how everything works, it’s that AI coding could mean there is no one who knows how a system works.

markbao - 19 minutes ago

There’s a difference between abstracting away the network layer and not understanding the business logic. What we are talking about with AI slop is not understanding the business logic. That gets really close to just throwing stuff at the wall and seeing what works instead of a systematic, reliable way to develop things that have predictable results.

It’s like if you are building a production line. You need to use a certain type of steel because it has certain heat properties. You don’t need to know exactly how they make that type of steel. But you need to know to use that steel. AI slop is basically just using whatever steel.

At every layer of abstraction in complexity, the experts at that layer need to have a deep understanding of their layer of complexity. The whole point is that you can rely on certain contracts made by lower layers to build yours.

So no, just slopping your way through the application layer isn't just on theme with "we have never known how the whole system works". It's ignoring that you still have a responsibility to understand the layer you're currently at, which is the business logic layer. If you don't understand that, you can't build reliable software, because you aren't using the system we have in place to predictably and deterministically specify outputs. Which is code.

css_apologist - 21 minutes ago

Yes, but the person who understands a lot of the system is invaluable

PandaStyle - 9 hours ago

Perhaps a dose of pragmatism is needed here?

I am no CS major, nor do I fully understand the inner workings of a computer beyond "we tricked a rock into thinking by shocking it."

I'd love to better understand it, and I hope that through my journey of working with computers, I'll learn more about these underlying concepts: registers, buses, memory, assembly, etc.

Practically, however, I write scripts that solve real-world problems, be that automating the coffee machine or managing infrastructure at scale.

I'm not waiting to pick up a book on x86 assembly before I write some Python, however. (I wish it were that easy.)

To the greybeards that do have a grasp of these concepts though? It's your responsibility to share that wealth of knowledge. It's a bitter ask, I know.

I'll hold up my end of the bargain by doing the same when I get to your position and everywhere in between.

vineethy - 2 hours ago

There are plenty of people who know the fundamentals of the system. It's a mistake to think that understanding specific technical details about an implementation is necessary to understand the system. It would make more sense to ask whether someone could conceivably build the system from scratch if they had to. There are plenty of people who have worked in academic fabs that have also written Verilog and operating systems and messed with radios.

tjchear - 9 hours ago

I take a fairly optimistic view to the adoption of AI assistants in our line of work. We begin to work and reason at a higher level and let the agents worry about the lower level details. Know where else this happens? Any human organization that existed, exists, and will exist. Hierarchies form because no one person can do everything and hold all the details in their mind, especially as the complexity of what they intend to accomplish goes up.

One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.

camgunz - 6 hours ago

Get enough people in the room and they can describe "the system". Everything OP lists (QAM, QPSK, WPA whatever) can be read about and learned. Literally no one understands generative models, and there isn't a way for us to learn about their workings. These things are entirely new beasts.

wtetzner - 2 hours ago

I think a lot of people have a fear of AI coding because they're worried that we will move from a world where nobody understands how the whole system works, to a world where nobody knows how any of it works.

youarentrightjr - 10 hours ago

> Nobody knows how the whole system works

True.

But in all systems up to now, for each part of the system, somebody knew how it worked.

That paradigm is slowly eroding. Maybe that's ok, maybe not, hard to say.

gmuslera - 6 hours ago

It is not about having infinite width and depth of knowledge. It is about abstracting at the right level, so that the relevant components are in view and you can assume correctness outside the focus of what you are solving.

Systems include people, who make their own decisions that affect how those systems work, and we don't go down to biology and chemistry to understand how they make choices. But that doesn't mean that people's decisions should be fully ignored in our analysis, just that there is a right abstraction level for them.

And sometimes a side or abstracted component deserves to be seen or understood in more detail, because some of its subcomponents, or its fine behavior, make a difference for what we are solving. Can we do that?

_kuno - an hour ago

"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead

whytaka - 10 hours ago

But people are expected to understand the part of the system they are responsible for at the level of abstraction they are being paid to operate.

This new arrangement would be perfectly fine if they weren't responsible when/if it breaks.

CrzyLngPwd - 2 hours ago

Oh, so many times over the decades, having to explain to a dev why iterating over many things and performing a heavy task like a DB query on each one will result in bad things happening... all because they don't really comprehend how things work.
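
For anyone who hasn't been burned yet, this is the classic N+1 pattern. A toy sketch in Python with sqlite (hypothetical tables, but the shape is universal):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    """)

    # The N+1 anti-pattern: one query per user. 10,000 users = 10,001 round trips.
    for user_id, name in conn.execute("SELECT id, name FROM users"):
        total = conn.execute("SELECT SUM(total) FROM orders WHERE user_id = ?",
                             (user_id,)).fetchone()[0]

    # The fix: one joined query, one round trip.
    totals = conn.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """).fetchall()

In-memory sqlite hides the cost; against a real database over a network, the first version is the one that pages you at 3am.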

tosti - 9 hours ago

Not just tech.

Does anyone on the planet actually know all of the subtleties and idiosyncrasies of the entire tax code? Perhaps the one inhabitant of Sealand, or the Sentinelese, but no one in any Western society.

mrkeen - 9 hours ago

  Adam Jacob
  It’s not slop. It’s not forgetting first principles. It’s a shift in how the craft work, and it’s already happened.

This post just doubled down without presenting any kind of argument.

  Bruce Perens
  Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware.

Now take the median dev, compress his lack of knowledge into a lossy model, and rent that out as everyone's new source of truth.

esafak - 3 hours ago

It's called specialization. Not knowing everything is how we got this far.

dizhn - 8 hours ago

Let me make it worse. Much worse. :)

https://youtu.be/36myc8wQhLo (USENIX ATC '21/OSDI '21 Joint Keynote Address-It's Time for Operating Systems to Rediscover Hardware)

shevy-java - 8 hours ago

Adam Jacob's quote is this:

"It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."

It actually really is slop. He may wish to ignore it but that does not change anything. AI comes with slop - that is undeniable. You only need to look at the content generated via AI.

He may wish to focus merely on "AI for use in software engineering", but even there he is wrong, since AI makes mistakes too and not everything it creates is great. People often have no clue how the AI reaches any decision, so they also lose the ability to reason about the code or code changes. I think people have a hard time trying to sell AI as "only good things, the craft will become better". It seems everyone is on the AI hype train - eventually it'll either crash or slow down massively.

snyp - 2 hours ago

Script kiddies have always existed and always will.

psychoslave - 9 hours ago

To be fair, I don't know how a living human individual works, let alone how they actually work in society. I suspect I'm not alone in this.

So, nothing new under the sun: often the practice comes first, and only then can some theory emerge, from which point it can be leveraged to go further than present practice, and so on. Sometimes practice and theory are more entangled, created together on the go, obviously.

knorker - 40 minutes ago

I would say that I understand all the levels down to (but not including) what it means for an electron to repel another particle of negative charge.

But what is not possible is to understand all these levels at the same time. And that has many implications.

We humans have limits on working memory, and if I need to swap in L1 cache logic, then I can't think about TCP congestion windows, CWDM, multiple inheritance, and QoS at the same time. But I wonder what superpowers AI can bring, not because it's necessarily smarter, but because it can increase the working memory across abstraction layers.

spenrose - an hour ago

“Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.”

mhog_hn - 9 hours ago

It is the same with the global financial system

mychael - an hour ago

It's strange to believe that Twitter/X has fallen. Virtually every major character in software, AI and tech is active on X. The people who are actually building the tools that we discuss everyday post on X.

LinkedIn is weeks/months behind topics that originate from X. It suggests you might be living in a bubble if you believe X has fallen.

landpt - 2 hours ago

The pre-2023 abstractions that power the Internet and have made many people rich are the sweet spot.

You have to understand some of the system, and saying that if no one understands the whole system anyway we can give up all understanding is a fallacy.

Even for a programming language criticized for a permissive spec, like C, you can write a formally verified compiler: CompCert. Good luck doing that for your agentic workflow with natural language input.

Citing a few manic posts from influencers does not change that.

zhisme - 6 hours ago

What a well-written article. That's actually a problem. Time will come and hit us the same way it did the aqueducts: lost technology that no one knows how it worked in detail. Maybe that is just how engineering evolution works?

paulddraper - an hour ago

Understand one layer above (“why”) and one layer below (“how”).

Then you know “what” to build.

amelius - 8 hours ago

Wikipedia knows how it all works, and that's good enough in case we need to reboot civilization.

ForHackernews - 2 hours ago

I think there's a difference between "No one understands all levels of the system all the way down, at some point we all draw a line and treat it as a black-box abstraction" vs. "At the level of abstraction I'm working with, I choose not to engage with this AI-generated complexity."

Consider the distinction between "I don't know how the automatic transmission in my car works" and "I never bothered to learn the meanings of the street signs in my jurisdiction."

Atlas667 - 2 hours ago

This is a non-discussion.

You have to know enough about underlying and higher level systems to do YOUR job well. And AI cannot fully replace human review.

kartoshechka - 8 hours ago

Engineers pay for abstractions with more powerful hardware, but can optimize at will (hopefully). Will AI be able to afford more human hours to churn through piles of unfamiliar code?

fedeb95 - 8 hours ago

Why does the author imply not knowing everything is a bad thing? If you have clear protocols and interfaces, not knowing everything enables you to make bigger innovations. If everything is a complex mess, then no.

sciencejerk - 8 hours ago

We keep trading knowledge of the natural, physical world for temporary, rapidly-changing knowledge of abstractions and software tools, which we do not control (now LLM cloud tools).

The lack of comprehensive, practical, multi-disciplinary knowledge creates a DEEP DEPENDENCY on the few multinational companies and countries that UNDERSTAND things and can BUILD things. If you don't understand it, if you can't build it, they OWN you.

foxes - 3 hours ago

Isn't ceding all power to AIs run by tech companies kinda the opposite - if we have to have AI everywhere? Now no one knows how anything works (instead of everyone knowing a tiny bit and all working together), and also everyone is just dependent on the people with all the compute.

cess11 - 7 hours ago

Yeah, it's not a problem that a particular person does not know it all, but if no one knows any of it except as a black box kind of thing, that is a rather large risk unless the system is a toy.

Edit: In a sense, "AI" software development is postmodern: it is a move away from reasoned software development, in which known axioms and rules are applied, toward software that is arbitrary and 'given'.

The future 'code ninja' might be a deconstructionist, a spectre of Derrida.

anthk - 3 hours ago

9front's manuals will teach you the basics, the actual basics of CS (the Plan 9 intro too, if you know how to adapt yourself). These are at /sys/doc. Begin with rc(1) and keep going up the levels. You can try 9front in a virtual machine safely. There are instructions to download and set it up at https://9front.org .

Write servers/clients with rc(1) and the tools at /bin/aux, such as aux/listen. There are already IRC clients and some other tools. Then, do 9front's C book from Nemo.

On floats, try them at a 'low level', with Forth. Get Muxleq (https://github.com/howerj/mux). Compile it:

          cc -O2 -ffast-math -o muxleq muxleq.c
          
Edit muxleq.fth, set the constants in the file like this:

      1 constant opt.multi      ( Add in large "pause" primitive )
      1 constant opt.editor     ( Add in Text Editor )
      1 constant opt.info       ( Add info printing function )
      0 constant opt.generate-c ( Generate C code )
      1 constant opt.better-see ( Replace 'see' with better version )
      1 constant opt.control    ( Add in more control structures )
      0 constant opt.allocate   ( Add in "allocate"/"free" )
      1 constant opt.float      ( Add in floating point code )
      0 constant opt.glossary   ( Add in "glossary" word )
      1 constant opt.optimize   ( Enable extra optimization )
      1 constant opt.divmod     ( Use "opDivMod" primitive )
      0 constant opt.self       ( self-interpreter [NOT WORKING] )
Recompile your image:

       ./muxleq muxleq.dec < muxleq.fth > new.dec
new.dec will be your main Forth image. Run it:

       ./muxleq new.dec
Get the book from the author and look at how the floating-point code is implemented in software. Learn Forth with the Starting Forth book (adapted for ANS Forth), and Thinking Forth after doing Starting Forth. Finally, back to 9front: there's Hoare's 'cspbook.pdf' too, on concurrent programming and threads. That will be incredibly useful in the near future. If you are a Go programmer, well, you are at home with CSP.

Also, compare CSP to the concurrent Forth's task switching. It's great to compare and debug code in a tiny Forth on Subleq/Muxleq: if your code gets relatively fast there, it will fly under GForth, and the constraints will force you to become a much better programmer.

CPUs? Caches? RAM latency? Muxleq/Subleq behaves nearly the same everywhere, depending on your simulation speed. It's there in order to learn. On real-world systems, glibc, the Go runtime, etc., will take care of that, producing a similar outcome everywhere. Beyond that, most people out there will be aware of the stuff from SSE2 up to NEON on ARM.

Hint: there are already code transpilers from dedicated Intel instructions to ARM ones and vice versa.

>How garbage collection works inside of the JVM?

No, but I can figure it out a little, given the Zenlisp one as a rough approximation. Or... you know, Forth, by hand. And Go, which seems easier and doesn't need a dog-slow VM trying to replicate what Inferno did in the 90s with far fewer resources.

anon291 - 9 hours ago

I don't like this thing where we dislike 'magic'

The issue with frameworks is not the magic. We feel like it's magic because the interfaces are not stable. If the interfaces were stable, we'd consider them just another real component of building whatever we're building.

You don't need to know anything about hardware to properly use a CPU ISA.

The difference is that the CPU ISA is documented, well tested, and stable. We can, as an industry, build systems that offer stability and are formally verified. We just choose not to.

bsder - 9 hours ago

Sure, we have complex systems where we don't know how everything works (car, computer, cellphone, etc.). However, we do expect those systems to behave deterministically in their interface to us. And when they don't, we consider them broken.

For example, why is the HP-12C still the dominant business calculator? Because other calculators were non-deterministically wrong for certain financial calculations. The HP-12C may not have even been strictly "correct", but it was deterministic in the ways it wasn't.

Financial people didn't know or care about guard digits or numerical instability. They very much did care that their financial calculations were consistent and predictable.
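
A toy Python illustration of the failure mode they cared about (not the HP-12C's actual algorithms, just the flavor of the problem): binary floating point isn't associative, so two algebraically identical calculations can quietly disagree.

    # The same three numbers, grouped differently, give two different totals.
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                  # 0.6000000000000001
    print(a + (b + c))                  # 0.6
    print((a + b) + c == a + (b + c))   # False

The HP-12C's value was that it picked one answer and gave it every single time.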

The question is: Who will build the HP-12C of AI?
