Why senior developers fail to communicate their expertise
(nair.sh)
587 points by nilirl 19 hours ago
Because the most important parts of their expertise come from their internal "world model" and are inseparable from it.
An average unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the speaker meant; the only difficulty could come from not knowing the words or from ambiguity. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred well via words; that's why there is always at least partial success in communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot be. AI can blow you out of the water at knowing facts, but it doesn't yet use them in a way that, surprisingly often, yields surprisingly correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
A non-trivial part of the big difference between the juniors who seem talented and "get it" and those who don't is precisely their ability to form accurate-enough world models quickly. You can tell who is grasping the "physics" of software and applying it, and who is just writing down recipes without trying to understand the nature of any of the steps.
It's especially noticeable when teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
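To make the translation concrete, here's a minimal sketch in TypeScript (my own toy example, with made-up names, not anything from the article): the same computation written first with a mutable var, then as a fold. The bones of the computation don't change, only how the pieces are put together.

    // Imperative: a mutable accumulator, updated step by step.
    function totalImperative(prices: number[]): number {
      let total = 0;
      for (const p of prices) {
        total += p; // state changes on every iteration
      }
      return total;
    }

    // Functional: the same accumulation made explicit as a fold;
    // nothing is ever reassigned.
    function totalFunctional(prices: number[]): number {
      return prices.reduce((sum, p) => sum + p, 0);
    }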
Even as a junior I was the kind who tried to understand the nature of the steps. I failed many times, but I learned from it every time. I remember my mutable public static variables and my terrible small JavaScript apps. But every time I did something like that, I tried to understand it. I knew that I had failed. Sometimes it took me a year or more (like when I first encountered React about a decade ago and immediately understood why some of my earlier apps had failed architecturally).
However, I've seen developers who have been in this field for decades and still just follow recipes without understanding them.
So I'm not entirely sure that the distinction is this clear. Of course, it depends on how we define "senior". "Senior" could mean developers who try to understand the underlying reasons and have coded for a while. But companies seem to disagree.
Btw, regarding functional programming: when I first coded in Haskell, I remember that I coded in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing is for sure: when I refactor something, my first step is to make the data flow as "functional" as possible, and only then do the real refactoring. It helps a lot in preventing bugs.
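Something like this (a hypothetical before/after with invented names, just to illustrate that first step):

    interface Order { id: string; amount: number; cancelled: boolean }

    // Before: filtering, formatting, and mutation tangled in one loop.
    function reportBefore(orders: Order[]): string[] {
      const lines: string[] = [];
      for (const o of orders) {
        if (!o.cancelled) lines.push(`${o.id}: ${o.amount.toFixed(2)}`);
      }
      return lines;
    }

    // After: each step is a pure transformation, so the data flow is
    // visible at a glance and the "real" refactoring becomes safer.
    function reportAfter(orders: Order[]): string[] {
      return orders
        .filter(o => !o.cancelled)
        .map(o => `${o.id}: ${o.amount.toFixed(2)}`);
    }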
What really broke my mind was Prolog. It took me a long time to be able to do anything beyond simple Hello World level things, at least compared to Haskell, for example.
I wouldn't really try to equate arbitrary job titles awarded based on tenure with actual expertise; titles aren't applied consistently across the industry, and they are often awarded on conditions other than actual merit.
There are a lot of very young developers who have fewer years of experience than me but tons more expertise.
The problem is, as is evident by this article and thread, it's difficult to measure (and thus communicate) expertise, but it's really easy to measure years of experience.
I've always been excellent at building mental models of abstractions and got the "physics" of a subject rather quickly, be it economics, biology, certain mathematical subjects, and more.
Then I met software and computer science abstractions, and they all seemed so arbitrary to me that I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I have not developed a "physics"-level understanding of software or computer science.
It feels really strange and messes with your sense of intelligence. Wondering if anyone here has a similar experience and was able to resolve it.
I have the opposite experience. Goes to show the difference between people.
I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.
Computation and maths on the other hand just click with me. Philosophy as well btw.
I guess I deal better with completely abstract information and processes, and when they clash with the real world I have a harder time reconciling the two.
This resonates. Tips on how to build this skill?
Put yourself in a position where it is your problem/responsibility, where you cannot depend on another to do it for you. You'll be learning every day.
Fail, and try to understand why. Don't be quick with the answer. Sometimes it takes years. But it's crucial to want to improve, and recognize when the answer is in front of you.
Read about why programming languages have the structures they have. Challenge them. They are full of mistakes. One infamous example is the "final" keyword in Java; another is Python's list comprehension. There are better solutions to these. Be annoyed by them, and search for those solutions. Read also about why these mistakes were made. Figure out your own version that doesn't have any of the known mistakes and problems.
The same goes for "principles" or rules of thumb. Read about the reasons behind them, and break them when the reasons don't apply.
And use a ton of programming languages and frameworks. Not just at Hello World level; really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will be able to reach those limits quicker and quicker.
One very good language for this, I think, is TypeScript. Compared to most other languages, its type inference is magic. Ask why. The nice thing is that its documentation explains why other languages cannot do the same. Its inference routinely breaks on edge cases, and those are well documented.
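A quick sketch of my own (not taken from their docs verbatim) of the kind of thing worth poking at:

    // Inference does the heavy lifting: no annotations needed here.
    const doubled = [1, 2, 3].map(n => n * 2); // inferred as number[]

    // A well-known edge: object literals widen by default...
    const point = { x: 0, y: 0 };           // { x: number; y: number }
    // ...unless you ask for the narrow literal types explicitly.
    const frozen = { x: 0, y: 0 } as const; // { readonly x: 0; readonly y: 0 }

    // Conditional types can even infer "inside" other types:
    type ElementOf<T> = T extends (infer U)[] ? U : never;
    type N = ElementOf<number[]>; // N is number

Chasing down where (and why) tricks like `infer` stop working teaches you a lot about what type systems can and cannot do.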
Also, Effective C++ and Effective Modern C++ were eye-openers for me more than a decade ago. I can recommend them for these purposes; they definitely helped me lose my "junior" flavor. They explain the reasons quite well, as far as I remember.
Not who you replied to, but: practice. Deliberate practice; not just writing the same apps over and over, but challenging yourself with new projects. Build things from scratch, from documentation or standards alone. Force yourself to understand all the little details of one specific problem.
By complete coincidence, yesterday I came across this link to an article Peter Naur wrote in 1985, "Programming as Theory Building" (https://pages.cs.wisc.edu/~remzi/Naur.pdf), which I haven't been able to stop thinking about.
I've been doing this for coming up on thirty years now, mostly at one large company, and I spend a significant number of hours every week fielding questions from people who are newer at it and having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.
The challenge then is to find a way to render your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader similar to your own. In other words, you want to install your theory into the mind of another person.
A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons why communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it's writing documents, holding classes, etc.
This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think Naur's conception of what is actually going on here makes much clearer the role that senior engineers can play in the long-term functioning of software companies, if it's something they enjoy doing.
Isn't that interesting? The job of exploring a theory or model to such an extent that it can be expressed in computer code always seems to fall on the shoulders of a software developer. Other people can write specifications and requirements all day long, but until a software developer has tackled the problem, the theory probably hasn't been explored well enough yet to express clearly in computer code. It feels like software developers are scientists who study their customers' knowledge domains.
> It feels like software developers are scientists who study their customers' knowledge domains.
I agree so much with this. It's why I feel so stifled when an e.g. product manager tries to insulate and isolate me from the people who I'm trying to serve -- you (or a collective of yous) need to have access to both expertise in the domain you're serving, and expertise in the method of service, in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
> so stifled when an e.g. product manager
Another facet of this is my annoyance at other developers when they are persistently incurious about the domain. (Thankfully, this has not been too common.)
I don't just mean when there are tight deadlines, or there's a customer-from-heck who insists they always know best, but as their default mode of operation. I imagine it's like a gardener who cares only about the catalogue of tools, and just wants the bare-minimum knowledge to deal with any particular set of green thingies in the dirt.
This is why at my current place we are not supposed to do any dev without an SME on the call. We do the development and share the screen and get immediate feedback as we are working in real time! It's great.
Agree 100%.
Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
Sorry this is just the interior trapped nonsense that engineers find themselves in. Please touch grass
Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.
Why do engineers constantly have these laughably mind-blowing moments where they think they are the center of the universe?
I agree so much with the both of you, to the point it's difficult to avoid cognitive dissonance one way or the other.
Software people do what they do better than anyone else. I mean obviously! Just listening to a non-software person discuss software is embarrassing. As it should be.
There's something close to mathematics that SWEs do, and yet it's so much more useful and economically relevant than mathematics, and I believe that's the bulk of how the "center of the universe" mindset develops. But they don't care that they're outclassed by mathematicians in matters of abstract reasoning, because they're doers and builders, and they don't care that they're outclassed by people in effective but less intellectual careers, because they're decoding the fundamental invariants of the universe.
I don't know. I guess I care so much because I can feel myself infected by the same arrogance when I finally succeed in getting my silicon golems to carry out my whims. It's exhilarating.
You seem to be assuming a certain org structure with very clear, specialized roles. Many teams do not have this, and engineers are already Product Engineers. It sometimes even makes sense (whenever engineers dogfood their product, startups, or if it is a product targeting other engineers) and is not just a budget/capacity issue.
Similarly, by siloing the world model in one or two heads, you prevent team dynamics from contributing to a better solution: e.g. a product manager/designer might think the right solution for a privacy need is an "offline mode" without communicating the need itself; the engineers might decide to build it with an eventual consistency model — sync-when-reconnected — because that's easier in the incumbent architecture, and the whole privacy angle goes out the window. As with everything, assuming non-perfection from everyone leads to better outcomes.
Finally, many software engineers are the creative type who like solving customer problems in innovative ways, and taking that away in a very specialized org actually demotivates them. Many have worked in environments where this was not just accepted but appreciated, and I've seen it lead to better products built _faster_.
We keep seeing things like cryptic error messages shown to end users simply because of the disconnect between the programmer and the end user.
If the programmer got to intimately understand the user's experience, software would be easier to use. That's why I support the idea of engineers taking support calls on rotation to understand the user.
Both can be true at the same time, a product manager who retains the big picture of the business and product, and engineers who understand tiny but important details of how the product is being used.
If there were indeed perfect product managers, there would be no need for product support.
It's interesting that, the way you describe it, the world model itself is _not_ just a collection of words in our minds. I have a small theory of my own that "thoughts" in our brains aren't actually words at all (otherwise animals, which don't talk, wouldn't be able to make complex decisions), and that the words we "hear" in our heads and perceive as our thoughts are just a rough translation of those thoughts into words, not the thoughts themselves. This is also why it's sometimes really hard to put complex (but correct) thoughts into words, and especially hard to adequately compare complex ideas during a regular conversation: on the surface, a lot of ideas (especially in software engineering) "sound" good but are actually terrible. And yet there's no better way to communicate ideas than to put them into words, which is probably part of what makes good software engineering extremely difficult.
Or maybe I'm just a little bit insane. Or both.
Obligatory link to a great podcast that has a great episode covering this paper: https://pca.st/episode/dfc024c8-31f8-4387-b301-7a4f77132b74
Everyone should subscribe to the Future of Coding (recently renamed to the Feeling of Computing) podcast if you haven't already: https://feelingof.com/
I keep saying this is the single most important article to consider when talking about AI-assisted software building. Everyone should read it. The question should always be: is a human building a theory of the software, or does only the AI understand it? If it's the latter, it is certainly slop.
(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)
>their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem
Of course the model is incomplete compared to reality; that's in the definition of a model, isn't it? And what is deemed a problem from one perspective might be conceived as a non-problem in another, or be unrepresentable in yet another.
I think that this is actually a good thing. If everyone had the same internal world model, we would have very little innovation.
I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.
I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.
Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
I agree. It's pretty easy to train based on facts, and even experiences. And learners can often take things in unexpected directions.
I think it becomes difficult to train the next layer up though, which is a sum-total of life experience. And I think this is what the parent poster was referring to.
For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight keys to Eden".
All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I was wanting to do it". It's part detective novel, part change in perspective, part logical progression.
So yes, training is good, and I agree it needs to be done. But I cannot really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
Yeah, you can't get it out in "one session of conversation", but you definitely can under a different... context.
"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
Is this a quote from somewhere?
Yes, it's from this textbook: https://hci.stanford.edu/courses/cs147/2022/au/readings/rest...
> An average unaware person believes that anything can be put in words and once the words are said, they mean to reader what the sayer meant, and the only difficulty could come from not knowing the words or mistaking ambiguities.
"Transmissionism" is a term I've seen to describe this
this is why I only communicate in poetry
complexity is
not what you believe it is
please try listening
So cool. One reading is “complexity is not what you believe it is”. Another is “complexity is”… “not what you believe it is”. Seems similar but the difference is subtle. Even the “please try listening” line changes in both versions. One is confrontational, the other is empathetic.
Agreed. "complexity is" as a full sentence followed by "not what you believe it is" has a fundamentally different meaning.
Very cool
Reminded me of a colleague who wrote his email replies as haiku. It got old pretty quickly.
Like an old colleague
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
I'd say, on average, it's 50% what you say and 50% communication issues.
Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice work almost mechanically. However, if someone can't tell you what examples you should be exposed to, you'll learn crap.
Good thing LLMs solve this problem by assuming everything can be put into words and then convincing the world this is true.
You might be encouraged by this then -- it seems some leading AI researchers agree with you: https://www.technologyreview.com/2026/01/22/1131661/yann-lec...
That is 100% NOT what he is doing.
My guy LeCun believes in deterministic systems describing reality even more than LLMs. He is literally a symbolic logic die hard.
Another part of the equation is practice.
Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this whole ethics thing out (took courses in university, read some books...).
I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person. Secular societies, meanwhile, often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similarly purely descriptive activity, with no practice whatsoever.
I believe this is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.
To oversimplify it: a novice programmer who listened to every story told by a senior, memorized and internalized them all, but still can't touch-type will be worse at the everyday tasks of their occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, other, less obvious skills that need practice, where meta-knowledge simply can't be used as a substitute: for example, the cues we learn to pick up from product documentation that tell us whether the product will work as advertised, whether the manufacturer will be honest or fair with us, and whether the company will go out of business soon or try to bait-and-switch us.
When children learn to do addition, it's not enough to describe the method to them (start counting from the first summand, count on as many steps as the second summand; the last count is the result); they actually must go through dozens of examples before they can reliably put the method to use. And this same property carries over to a lot of other activities, even though we like to think of ourselves as being able to perform a task as soon as we understand the mechanism.
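Spelled out as code, the counting method is almost comically literal (a deliberately naive sketch, in TypeScript for concreteness):

    // Addition by counting on: start at the first summand and take
    // one counting step per unit of the second summand.
    function addByCounting(a: number, b: number): number {
      let count = a;
      for (let step = 0; step < b; step++) {
        count += 1; // one "count" per step
      }
      return count;
    }

    console.log(addByCounting(3, 4)); // 7

Reading that description is not the same as being fluent at it; the fluency only comes from working the examples.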
This is surprisingly close to a personal theory I've been working on. I've been describing using AI to people as engaging the world model in their head, their organization, or their software.
I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
Correct. One just has to realize that the cost of communication (and the context/memory lost along the way to train that understanding) is often just far higher than anyone has patience for. To fully understand the expert, they must become the expert. (or at least a hell of a lot closer than they were)
This is also why average people with little time to commit find it hard to grasp the importance and depth of AI. Exploring those depths is a full-on university education.
> AI can blow you out of the water at knowing more facts
Yea, but, I have a search engine that contains all the original uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.
> and they need to have the right project that provides the opportunity to learn what needs to be learnt.
It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.
“Cursive knowledge”, as an old boss told me. Was incredibly ironic when he leaned into my misunderstanding.
Yep, as I was exploring in https://danieltan.weblog.lol/2026/05/dunning-kruger-and-the-... , the expert pays a "communication tax" to dumb concepts down to where the listener can understand them. There is a gap between domain understanding and what is actually conveyed, and the same holds for human-LLM interactions as well.
Great points. Words allow one to communicate an approximation of part of what one knows.
Agree about expertise being inseparable from the 'world model'. When someone tells us something, they're assuming that we know a certain amount of background knowledge but, in reality, we never have exactly the missing pieces that the speaker is assuming we have because our world model is different. It can lead to distortions and misunderstandings.
Even if someone repeats back to us variants of what we've told them at a later time, it doesn't mean that they've internalized the exact same knowledge. The interpretation can be different in subtle and surprising ways. You only figure out discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation, there is a lot of self-censorship, so actually people tend to maintain very different world models even though the surface-level ideas which they communicate appear to be similar.
Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.
This sounds like a whole lot of copium from devs who don't want to bother with the effort of just writing stuff down, i.e. good documentation practices...
Actually, maybe even worse (not directed at parent) - I think some "seniors" have a stick so far up their err keyboard, and think they are so wise beyond words that they refuse to share their "all knowing expertise" with anyone else as a form of gatekeeping or perhaps fear of being "found out" (that they are not actually keyboard "Gods").
Really though, just write shit down, even if the first draft isn't great. Write it down, check it into the codebase.
Well, here's an engineering problem: figure out how to mentor 10x the number of juniors.
As a /senior/ developer I really dislike blanket statements. I've seen the same number of failures caused by
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as by experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than for a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways for eager/very open seniors to drive systems into hard-to-get-out-of corners. But then there are people who claim PHP5 is all you need.
I came to say something similar, actually.
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
> There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
This is what I was thinking - I'd say the biggest step up a developer can make is to recognize that sometimes you need a bit of one approach, sometimes a bit of another one.
Sometimes minimalism is the way, and you need to ask whether the pain, workload, or lacking capabilities and features are actually problematic. Sometimes adding the smallest possible thing is a good way, as long as we don't paint ourselves into a corner and it lets us learn and accumulate information about what we actually need.
Sometimes buying a thing is a good way, if you can find a good vendor and a tool fitting your use case, and especially if the effort of doing it on your own is high. This commonly occurs in security, because keeping up to date with the ongoing vulnerability and threat landscape can be a full-time job on its own.
And sometimes adding something bigger is the way, if the effort of maintaining it is less than the effort and pain incurred by not having it, or if we can ramp up the effort incrementally while reaping benefits along the way. This can often be validated by doing a small thing first.
What AI will do, in my opinion, is push the bar further in this direction. Cozily hacking CRUD code together in a web server most likely won't be enough for the average development job in a year or two.
That doesn't sound as good in meetings. The person who can cut scope and get everyone to the "we did it" back patting phase makes everyone feel warm and cozy.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
This is where good leadership in the dev team is needed.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (eg in an online shop, thus fixing it increases sales)?
Or, if it’s just tech debt, then use Jira (etc.) to your advantage and talk about the number of tickets you can close out this sprint thanks to this engineering initiative.
If the development team's and the product team's goals are largely aligned, then the problem with engineering initiatives is just how you explain them to the product team.
For a large enough problem you need a combination of enough skill (to do the job), enough foresight (to know what will likely go wrong and how much error budget you need), and skin in the game (so you don't just cut things because it sounds good, but based on what is truly needed). If you don't have all three of these, you usually are just talking out of your ass.
both of these things are equally important. every change will annoy somebody. every change breaks somebody's workflow.
preventing the unnecessary changes can help you get the political capital in your org to push through the changes that really need to happen.
Congrats on being the third top-level comment at this hour, and the first one who seems to have read more than just the headline.
A sort of survivorship bias. A VP ordered us to use Elasticsearch because it had worked well at his previous company. Turned out it worked well for us too. Listen to the VP to make technical decisions. And use Elasticsearch.
Reminds me when the ELK stack was called just ELK (idek what it is now) we had a server we put it on, and after making the additional dashboards my manager wanted, we learned the limits of ES / ELK. It needs a ridiculous amount of memory, because it will shove everything in memory. Same thing when I learned that MongoDB indexing puts every item in memory as well, which is a yikes, why would you not want to index?
I bet there's money to be made for building a drop-in to either of those two that requires less memory, would save companies a bundle, and make other companies a bundle as well.
There's no high-performance database that won't take all of your memory (at least up to the size of your data) if you let it.
That's because it's much, MUCH faster to do it that way. Though if you can accept certain latency trade-offs for throughput, something like turbopuffer can do wonders for your costs.
MySQL doesn't eat up all 8 GB on my system when I need to query a table with indexed values; MongoDB seems to eat it all up.
You paid a hundred bucks for those 8 GB of RAM; do you really want them to just sit there unused?
No, but my manager was wondering why our website was slowing to a crawl.
Is the DB on the same host as the web server?
It is more likely they did not leave enough overhead for the host operating system, which is a classic issue.