Appearing productive in the workplace

nooneshappy.com

478 points by diebillionaires 5 hours ago


wcfrobert - 3 hours ago

> "Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive."

Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. It reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafted a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).

So now the "productivity-gain bottleneck" is people who still care enough to review manually.

proofofcontempt - 4 hours ago

What is described here closely resembles my experience too.

My company is full of managers who haven't written code in years. They hired an architect 18 months ago who used AI to architect everything. To the senior devs it was obvious that everything was massively over-engineered, yet because he used all the proper terminology he sounded more competent to upper management than the senior devs who didn't. When called out, he would resort to personal attacks.

After about 6 months, several people left and the ones who stayed went all in on AI. They've been building agentic workflows for the past 12 months in an effort to plug the gap from the competent members of staff leaving.

The result: nothing of value has been released in the past 18 months. The business is cutting costs after wasting massive amounts on cloud compute for poorly designed solutions, and is making up for it by freezing hiring.

danaw - an hour ago

i have a strong suspicion that the most productive software teams that leverage llms to build quality software will use it for the following:

- intelligent autocomplete: the "OG" llm use for most developers, where the generated code is just an extension of your active thought process and you maintain the context of the code being worked on, rather than outsourcing your thinking to the llm

- brainstorming: llms can be excellent at taking a nebulous concept/idea/direction and expanding on it in novel ways that can spark creativity

- troubleshooting: llms are quite good at debugging an issue like a package conflict, random exception, or bug report, and helping guide the developer to the root cause. llms can be very useful when you're stuck and you don't have a teammate one chair over to reach out to

- code review: our team has gotten a lot of value out of AI code review which tends to find at least a few things human reviewers miss. they're not a replacement for human code review but they're more akin to a smarter linting step

- POCs: llms can be good at generating a variety of approaches to a problem that can then be used as inspiration for a more thoughtfully built solution

these uses accelerate development while still putting the onus on the developers to know what they're building and why.

related, i feel it's likely teams that go "all in" on agentic coding are going to inadvertently sabotage their product and their teams in the long run.
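As a concrete sketch of the "smarter linting step" framing above (my illustration, not the commenter's actual setup; `ask_model` is a hypothetical stub standing in for whatever LLM client a team uses):

```python
# Hypothetical sketch: AI review as an advisory lint pass, not a merge gate.
# ask_model is a stub; a real setup would call the team's LLM client here.

def ask_model(prompt: str) -> str:
    # Stub reply so the sketch is runnable without any API.
    return "- line 12: possible off-by-one in loop bound"

def ai_lint(diff: str) -> list[str]:
    """Return advisory findings for a diff; humans still do the real review."""
    prompt = (
        "Review this diff like a strict linter. "
        "List concrete issues only, one per line, each starting with '-':\n"
        + diff
    )
    reply = ask_model(prompt)
    # Keep only lines shaped like findings; drop any chatty filler.
    return [line.strip("- ").strip()
            for line in reply.splitlines()
            if line.strip().startswith("-")]

for finding in ai_lint("for i in range(len(xs) + 1): ..."):
    print("ADVISORY:", finding)  # surfaced to the reviewer, never enforced
```

The key design choice matching the comment: findings are advisory output for the human reviewer, not a gate that blocks the merge.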

oxag3n - 3 hours ago

Software engineering seems to be quite unique in enabling this, due to a few factors:

* Many software engineers haven't done real engineering work during their entire careers. In large companies it's even harder: you arrive as a small gear and are inserted into a large mechanism. You learn some configuration language some smart-ass invented to get a promo, "learn" the product by cleaning tons of those configs, refactoring them, "fixing" results in another bespoke framework by adjusting some knobs in the config language you are now an expert in. Five years pass and you are still doing that.

* There are many near-engineering positions in the industry. The guy who always told you how much he liked working with people and that's why he stopped coding; the lady who was always fascinated by the product and working with users. They all fill the space in small and large companies as .*M

* The train is slow-moving, especially in large companies. Commit-to-prod can easily span months, with six months being the norm. For some large, critical systems, agentic code still hasn't reached production as of today.

Considering the above, AI is replacing some BS jobs, people who were near-code but above it suddenly enjoy vibe-coding, and their shit still hasn't hit the fan in slow-moving companies. But oh man, it looks like a productivity boom.

nlawalker - 4 hours ago

>People who cannot write code are building software. People who have never designed a data system are designing data systems. Most of it is not shipped; it is built, often for many hours, possibly shown internally with great vigor, used quietly, and occasionally surfaced to a client without much fanfare.

This made me think of How I ship projects at big tech companies[1], specifically "Shipping is a social construct within a company. Concretely, that means that a project is shipped when the important people at your company believe it is shipped."

[1] https://news.ycombinator.com/item?id=42111031

JohnMakin - 9 minutes ago

The “not helping experts” thing is a bit myopic. Everyone, no matter how much of a rockstar you are, has weak areas or areas of tedium that can be automated. For me, what hindered me in my career in the past was organizing a lot of tasks at once, communicating changes effectively across orgs (e.g. through Jira), documentation, ticket management. This is a non-concern now, and the efficiency gain there has been incredible. The core things I do well, yeah, it doesn't help a ton with, other than that it can type way faster than I can (which is still really good).

If I’m having it do stuff I’m unfamiliar with, it does tend to do better than I would or steer me at least in a direction I can be more informed about making decisions.

ChrisMarshallNY - 3 hours ago

I spent most of yesterday deleting and replacing a bunch of code that was generated by an LLM. For the most part, the LLM's assistance has been great.

For the most part.

In this case, it decided to give me a whole bunch of crazy threaded code, and, for the first time, in many years, my app started crashing.

My apps don't crash. They may have lots of other problems, but crashing isn't one of them. I'm anal. Sue me.

For my own rule of thumb, I almost never dispatch to new threads. I will often let the OS SDK do it, and honor its choice, but there's very few places that I find spawning a worker, myself, actually buys me anything more than debugging misery. I know that doesn't apply to many types of applications, but it does apply to the ones I write.
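The rule of thumb above, deferring to the platform instead of spawning workers yourself, might look like this in Python (a hypothetical illustration, not the commenter's code; here `concurrent.futures` owns the thread lifetimes the way an OS SDK would):

```python
# Let the runtime manage workers rather than hand-rolling threads and locks.
from concurrent.futures import ThreadPoolExecutor

def checksum(blob: bytes) -> int:
    # Trivial stand-in for per-item work.
    return sum(blob) % 256

blobs = [b"alpha", b"beta", b"gamma"]

# The executor decides pool sizing, joins workers, and propagates errors;
# compare with manual threading.Thread plus shared-state locking.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(checksum, blobs))

print(results)
```

The debugging-misery trade-off the commenter describes mostly comes from hand-managed thread lifetimes and shared mutable state, which the pooled approach sidesteps.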

The LLM loves threads. I realized that this is probably because it got most of its training code from overenthusiastic folks, enamored with shiny tech.

Anyway, after I gutted the screen, and added my own code, the performance increased markedly, and the crashes stopped.

Lesson learned: Caveat Emptor.

rglover - 3 minutes ago

It's incredibly humorous to watch companies take a gift horse and drown it for sport.

john_strinlai - 5 hours ago

>I sat with it for a while, weighing whether to debate someone who was visibly copy-pasting verbatim from a model.

i have found some small amusement by responding in kind to people that do this (copy/pasting their ai output into my ai, pasting my ai response back). two humans acting as machines so that two machines can cosplay communicating like humans.

futureproofd - an hour ago

I noticed early into AI adoption in the workplace that some colleagues took advantage of the technology by appearing to be hyper-proactive: new TODs weekly, fresh new refactoring ideas, novel ways to solve age-old problems with shiny new algorithms. Fast-forward to today, and this is occurring twofold. Not only are they trying to appear more proactive; combined with the fear of AI layoffs, they're creating solutions to problems before the problem has even been fully defined.

For example, I was tasked to look into a company-wide solution for a particular architectural problem. I thought delivering a sound solution would give me some kudos; alas, I wasn't fast enough. An intern had already figured it out and written a TOD. I find myself too tired to compete.

vachina - 4 hours ago

> Never ask a model for confirmation; the tool agrees with everyone.

Ditto. LLMs will somehow find fault in code that I know is correct when I tell it there’s something arbitrarily wrong with it.

Problem is LLMs often take things literally. I’ve never successfully had LLMs design entire systems (even with planning) autonomously.

smath - 23 minutes ago

Here is a solution to this problem, I think: use an LLM to summarize everything. If there is fluff, it should get dropped. Basically we only care about the relevant information content, regardless of the number of characters used, so we need a compressed representation.
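A toy sketch of that "compressed representation" idea (my illustration, not the commenter's proposal; it uses zlib's compression ratio as a crude stand-in for an LLM summarizer, since padded, repetitive prose compresses away while dense prose does not):

```python
# Crude redundancy signal: how much of the text zlib can compress away.
import zlib

def redundancy(text: str) -> float:
    raw = text.encode()
    return 1 - len(zlib.compress(raw)) / len(raw)

dense = "Q3 revenue fell 4%; churn doubled in EMEA; hiring is frozen."
fluffy = ("As per my last update, " * 10) + "revenue fell."

# Repetitive filler compresses far better than information-dense prose.
print(redundancy(fluffy) > redundancy(dense))
```

Obviously a real summarizer has to judge relevance, not just repetition, but the ratio makes the "characters used vs. information content" distinction concrete.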

jdw64 - 4 hours ago

After reading this article, I can definitely feel how productivity rises inside organizations.

More precisely, this feels like a person who would be loved by management. The article almost reads like a practical manual for increasing perceived productivity inside a company.

The argument is repetitive:

1. AI generates convincing-looking artifacts without corresponding judgment.

2. Organizations mistake those artifacts for progress.

3. Managers mistake volume for competence.

The article explains this same structure several times. In fact, the three main themes are mostly variations of the same claim: AI allows people to produce output without having the competence to evaluate it.

The problem is that the article is criticizing a context in which one-page documents become twelve-page documents, while containing the same problem in its own form.

The references also do not seem to carry much real argumentative weight. They mostly decorate an already intuitive workplace complaint with academic authority. This is something I often observe in organizations: find a topic management already wants to hear about, repeat the central thesis, and cite a large number of studies that lean in the same direction.

There is also an irony here. The article criticizes a certain kind of workplace artifact, but gradually becomes very close to that artifact itself. This kind of failure, criticizing a pattern while reproducing it, seems almost like a recurring custom in the programming industry.

Personally, I almost regret that this person is not in the same profession as me. If someone like this had been a freelancer, perhaps the human rights of freelancers would have improved considerably.

bambax - 3 hours ago

I intensely agree with everything that's being said in TFA; this however could be nuanced:

> Never ask a model for confirmation; the tool agrees with everyone

If asked properly, LLMs can be used to poke holes in an existing reasoning or come up with new ideas or things to explore. So yes, never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.

drowntoge - 3 hours ago

"Output-competence decoupling" is my new favorite keyword.

graphememes - 30 minutes ago

Instead of helping, the author fought against them: "from day one anyone could tell that the schemas were wrong", yet nobody helped him; instead they went to the VP and complained about them. Sad. What a horrible place to work.

Weryj - 2 hours ago

I brought this up during our AI workshops, but I called it the “confident idiot.”

Seeing the idea explored in such depth is great, I really am concerned about this.

giantg2 - 3 hours ago

The most productive people seem to be the ones who are skeptical of AI but have found compelling cases to use it for, and aren't afraid to correct it.

randusername - 3 hours ago

> The cost of producing a document has fallen to nearly zero; the cost of reading one has not, and is in fact rising, because the reader must now sift the synthetic context for whatever the document was originally about.

This resonates. It's a spectacular full-reversal kind of tragedy, because it used to be asymmetric the other way: the author puts in 10 effort points compiling valuable information and the reader puts in 1 effort point to receive the transmission.

juancn - 4 hours ago

AI can be (and often is) a confident incompetence amplifier.

darepublic - 4 hours ago

I was tasked with coming up with a solution in 5 weeks which took another firm six months to produce. I've never used agentic coding so much, or known my code less well. The requirements are garbage though: vague and just "copy what these other guys did, but better". I tried for a couple of weeks to get better specs but eventually gave up and just started building stuff to present.

guizadillas - 5 hours ago

Sidenote: why is the post dated in the future? (May 28, 2026)

asdfman123 - 3 hours ago

AI is another development that drives me absolutely mad. It's like jet fuel for people who leave a trail of technical debt for people who care more about that sort of thing to try to clean up.

AI promises "you don't even need to understand the problem to get work done!" But the problem is that doing the work is how I understand problems, and understanding the problem is the bottleneck.

xXSLAYERXx - 2 hours ago

Who cares? I obviously didn't like the article.

> Schemas were all wrong

Why'd you let him run wild for two months? What software org would let anyone, even a principal, do that? Wouldn't the very first thing you'd do be to review the guy's schema? This reads like all the other snarky posts on HN about how everyone is punching above their pay grade, while people who are much more advanced in the space just watch like two trains colliding.

I'll tell you what is productive in the workplace. Communication. That is it. Communicate and lift the guy up, give the guy a running start instead of chilling in the break room snarking with all your snarky co-workers.

dnnddidiej - 26 minutes ago

s/betray/portray/ ?

smokel - 4 hours ago

It would be nice if someone invented a mouse with a tiny motor inside, so I could put on sunglasses, rest my hand on the mouse, doze off, and still look like I'm working hard.

cwillu - 2 hours ago

We were promised GlaDOS, and were given Wheatley.

- 2 hours ago
[deleted]
sixie6e - 4 hours ago

So essentially, AI is exacerbating the Dunning-Kruger effect in society.

sergiotapia - 2 hours ago

> Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries.

I've been on the receiving end of this and it sucks. It shows a lack of care and true discernment. Then you push back and, again, you're arguing with Claude, not the person.

I don't know what the solution is here. :(

snozolli - 4 hours ago

Back around 2005, I worked with a guy who was trying to position himself as the go-to expert on the team. He'd always jump at the chance to explain things to QA and the support team. We'd occasionally hear follow-up questions from those teams and realize that he was just making things up.

He also had a serious case of cargo-cult mentality. He'd see some behavior and ascribe it to something unrelated, then insist with almost religious fervor that things had to be coded in a certain way. He was also a yes-man who would instantly cave to whatever whim management indicated. We'd go into a meeting in full agreement that a feature being requested was damaging to our users, and he'd be nodding along with management like a bobble-head as they failed to grasp the problem.

Management never noticed that he was constantly misleading other teams, or that he checked in flaky code he found on the Internet that triggered multiple days of developer time to debug. They saw him as a highly productive team player who was always willing to "help" others.

He ended up promoted to management.

Anyway, my point is that management seems to care primarily about having their ego boosted, and about seeing what they perceive as a hard worker, even if that worker is just spinning his wheels and throwing mud on everyone else. I'm sure that AI is only going to exacerbate this weird, counter-productive corporate system.

- 3 hours ago
[deleted]
luxuryballs - 33 minutes ago

Well, this unlocked a new fear. I can imagine all the similar “nests” of AI-generated content out there being created right now; I am likely to have to untangle one some day, or at least break it to someone that it's garbage. It's almost as if the AI itself has built a nest and is hoarding artifacts, but it's actually the human deciding to bundle up the slop and put a bow on it.

northfield27 - 41 minutes ago

Excellent article! Aptly describes what I have been feeling and thinking about the claims many AI optimists make.

---

> He produced a great deal of code, [...] He could not, when asked, explain how any of it actually worked. [...] When opinions were voiced even as high as a V.P., he fought back.

AI has democratized coding, but people have yet to understand that it takes expertise to actually design a system that can handle scale. Of course, you can build a PoC in a few hours with Claude code, but that wouldn't generate value.

The reason why we see such examples in the workplace is because of the false marketing done by CEOs and wrapper companies. It just gives people a false hope that "they can just build things" when they can only build demos.

Another reason is that the incentives in almost every company have shifted to favour a person using AI. It's like the companies are purposefully forcing us to use AI, to show demand for AI, so that they can get a green signal to build more data centers.

---

> So you have overconfident, novices able to improve their individual productivity in an area of expertise they are unable to review for correctness. What could go wrong?

This is one much-needed point to raise.

I have many people around me saying that people my age are using AI to get 10x or 100x better at doing stuff. How are you evaluating them to check if the person actually improved that much?

I have experienced this excessively on twitter over the last few months. It is like a cult. Someone with a good following builds something with AI, and people go mad and perceive that person as some kind of god. I clearly don't understand that.

Just as an example, after Karpathy open-sourced autoresearch, you might have seen a variety of different flavors that employ the same idea across various domains, but I think a Meta researcher pointed out that it is a type of search method, just like Optuna does with hyperparameter searching.

Basically, people should think from first principles. But the current state of tech Twitter is pathetic; any lame idea + genAI goes viral, without even the slightest thought of whether genAI actually helps solve the problem or improve the existing solution.

(Side note: I saw a blog from someone from a top USA uni writing about OpenClaw x AutoResearch, I was like WTF?! - because as we all know, OpenClaw was just a hype that aged like milk)

---

> The slowness was not a tax on the real work; the slowness was the real work.

Well Said! People should understand that learning things takes time, building things takes time, and understanding things deeply takes time.

Someone building a web app using AI in 10 minutes is not ahead but behind the person who is actually going one or two levels of abstraction deeper to understand how HTML/JS/Next.js works.

I strongly believe the tech industry will realise sooner or later that AI doesn't make people learn faster; it just speeds up the repetitive manual tasks. And people should use AI in that regard only.

The (real) cognitive task to actually learn is still in the hands of humans, and it is slow, which is not a bottleneck, but that's just how we humans are, and it should be respected.

ahmedmostafa16 - 3 hours ago

[dead]

micoul81 - 3 hours ago

i need karma

fallinditch - 4 hours ago

Increasingly, there is a disconnect between established operational/corporate systems and the new AI-enhanced powers of individual workers.

The over-production of documents is just one symptom. It's clear that organizations are struggling to successfully evolve in the era of worker 'superpowers'. Probably because change is hard!

Perhaps this is indicative of a failure of imagination as much as anything? The AI era is not living up to its potential if workers are given superpowers, but they are not empowered to use them effectively.

Empowered teams and individuals have more accountability and ownership of business outcomes - this points to a need for flatter hierarchies and enlightened governance, supported by appropriate models of collaboration and reporting (AI helps here too!).

In the OP article the writer IMHO reached the wrong conclusion about their colleague who built a system that didn't work - this sounds like the sort of initiative that should be encouraged, and perhaps the failure here points to a lack of technical support and oversight of the colleague's project.

Now more than ever, organizations need enlightened leadership with flexible mindsets, capable of envisioning and executing radical organizational strategies.