The Missing Layer
yagmin.com | 29 points by lubujackson | 3 hours ago
> Trying to automate fixes is reminiscent of the staircase paradox, where you keep reshaping a problem in a way that seems productive but never actually improves precision. It feels like you are "approaching the limit," but no matter how small you make the steps, the area never changes.
The area approaches 0.5hw - it's the length of the line that doesn't change
Spec-driven development is great in theory, but it has a lot of issues; I rant about them here: https://sibylline.dev/articles/2026-01-28-problems-with-spec...
I'm working on a tool that uses structured specs as a single source of truth for automated documentation and code generation. Think the good parts of SpecKit + Beads + Obsidian (it's actually vault-compatible) + Backstage, in a reasonably sized TypeScript codebase that leverages existing tools. The interface is almost finalized; I'm polishing the CLI, squashing bugs, and getting good docs ready for a real launch, but if anyone's curious they can poke around the GitHub in the meantime.
One neat trick I'm leveraging to keep the nice human ergonomics of folders + markdown while enforcing structure and type safety is a CUE intermediate representation, serialized to a folder of markdown files, with all object attributes besides name and description thrown into front matter. It's the same pattern used by Obsidian vaults; you can even open the vaults it creates in Obsidian if you want.
This structure lets you lint your specs and do code generation via template pattern matching, automatically producing code + tests + docs from your project vault, so you have one source of truth that's very human-accessible. A sketch of the serialization pattern is below.
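A minimal sketch of the front-matter pattern, assuming a hypothetical spec shape (the field names here are illustrative, not the tool's actual schema):

```typescript
// Sketch of the front-matter serialization pattern: every attribute
// except name and description goes into YAML front matter, and the
// description becomes the markdown body. The Spec shape is hypothetical.
interface Spec {
  name: string;
  description: string;
  [attr: string]: unknown;
}

function toMarkdown(spec: Spec): string {
  const { name, description, ...rest } = spec;
  const frontMatter = Object.entries(rest)
    .map(([key, value]) => `${key}: ${JSON.stringify(value)}`)
    .join("\n");
  return `---\n${frontMatter}\n---\n\n# ${name}\n\n${description}\n`;
}

// One file per spec keeps the vault browsable in Obsidian.
const file = toMarkdown({
  name: "dark-mode",
  description: "Site-wide dark theme toggle.",
  status: "draft",
  owner: "frontend",
});
```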
Jim nailed the core problem. I've been building exactly this "missing layer" for the past few months. The challenge isn't just connecting product decisions to code. It's that product context lives in a format that's optimized for human communication, not machine consumption. When engineers feed this to LLMs, they spend massive effort "re-contextualizing" what stakeholders already decided.

I built TypMo (https://typmo.com) around two structured formats that serve as this context layer:

* PTL (Product Thinking Language): structures product decisions (personas, objectives, constraints, requirements) in a format humans can read/edit and LLMs can parse precisely. Think YAML for product thinking.

* ISL (Interface Structure Language): defines wireframes and component hierarchies in structured syntax that compiles into visual mockups and production-ready prompts.

LLMs don't need more context, they need structured context. The workflow Jim describes (stakeholder meeting → manager aggregates → engineer re-contextualizes for LLM) becomes: stakeholder meeting → PTL compilation → IA generation → production prompts.
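To make "structured context" concrete, here's a toy sketch of what a PTL-style document could encode. This is purely illustrative, not TypMo's actual syntax:

```typescript
// Toy model of structured product context: decisions as typed data
// instead of meeting-notes prose. All field names are illustrative.
interface ProductDecision {
  persona: string;
  objective: string;
  constraints: string[];
  requirements: { id: string; statement: string; priority: "must" | "should" }[];
}

// The article's dark-mode scenario, captured as data an LLM can parse
// without re-contextualizing what stakeholders already decided.
const darkMode: ProductDecision = {
  persona: "Returning user browsing at night",
  objective: "Reduce eye strain without changing brand identity",
  constraints: [
    "Blog and FAQ run on a separate frontend",
    "Third-party widget has a static white background",
  ],
  requirements: [
    { id: "R1", statement: "Theme toggle lives in account settings", priority: "must" },
    { id: "R2", statement: "Default follows the OS preference", priority: "should" },
  ],
};
```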
Let's see where it goes!
I am curious to know what he has in mind. This 'process engineering' could be a solution to the problems BPM and COBOL are trying to solve. He might end up with another formalized layer of indirection (with rules and constraints for everyone to learn) that integrates better with LLM interactions (which are also evolving rapidly).
I like the idea that 'code is truth' (as opposed to 'correct'). An AI should be able to use this truth and mutate it according to a specification. If the output of an LLM is incorrect, it is unclear whether the specification is incorrect or if the model itself is incapable (training issue, biases). This is something that 'process engineering' simply cannot solve.
We need a language and a transpiler. Honestly, the LLM has many uses, and agents have many uses, and we are narrowing down how to make them deterministic and predictable for programming machines and software. But that also means we need something beyond natural language for the actual implementation. Yes, we've moved a level up, but engineers are not product managers; as much as we can define the scope and outline a project like a two-week sprint using scrum or kanban, the reality is that deterministic input for deterministic output is still the way to go. Just as compilers and higher-level languages opened the doors to the next phase, the LLM manages this translation and compilation, but it's missing a sort of intermediary language: a format that can be processed and compiled directly down to machine code far more effectively. We're talking about an LLVM for this layer. Why are we asking LLMs to write Go code or Python when we could much better translate an intermediary language into something far more efficient and performant? So I think there's still work to be done.
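As a toy illustration of what such an intermediary format could look like (entirely hypothetical, just to make the idea concrete):

```typescript
// Toy "intermediary language": the LLM may only emit instances of a
// small, closed instruction set, which deterministic tooling then
// validates and lowers. Entirely hypothetical; it just illustrates
// constraining LLM output to a machine-checkable format.
type Instr =
  | { op: "load"; dest: string; addr: number }
  | { op: "add"; dest: string; lhs: string; rhs: string }
  | { op: "store"; src: string; addr: number };

// The deterministic half: validation and lowering are ordinary compiler
// work, so correctness no longer depends on the model's prose.
function validate(program: Instr[]): boolean {
  const defined = new Set<string>();
  for (const i of program) {
    if (i.op === "add" && (!defined.has(i.lhs) || !defined.has(i.rhs))) return false;
    if (i.op === "store" && !defined.has(i.src)) return false;
    if (i.op !== "store") defined.add(i.dest);
  }
  return true;
}
```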
Am I understanding what you're saying correctly?
* We need a deterministic input language
* The LLM generates machine code
Isn't that just a compiler? Why do we need the LLM at that point?
If the compiler only gets you 80% of the way there but what it does is sufficient to put the LLM on rails, like programming-language Mad Libs, I'd say that's a win.
> Let's say your organization wants to add "dark mode" to your site. How does that happen? A site-wide feature usually requires several people to hash out the concerns and explore costs vs. benefits. Does the UI theming support dark mode already? Where will users go to toggle dark mode? What should the default be? If we change the background color we will need to swap the font colors. What about borders and dividers? What about images? What about the company blog, and the FAQ area, which look integrated but run on a different frontend? What about that third-party widget with a static white background?
Only one or two of those questions are actually related to programming. (Even though most developers wear multiple hats.) If an organization has the resources to have a six person meeting for adding dark mode, I'd sure hope at least one of them is a designer and knowledgeable on UX. Because most of those questions are ones that they should bring up and have an answer for.
- One of the reasons I am working on a semi-deterministic, production-grade TypeScript application generator.
- The lowest layers are the most deterministic and the highest layers are the most vibe-coded.
- Tooling and configuration sit at the lowest layers; features sit at the highest layer.
> but no matter how small you make the steps, the area never changes
Sorry, this is a bit off-topic, but I have to call this out.
The area absolutely does change; you can see this in the trivial example from the first to the second step in https://yagmin.com/blog/content/images/2026/02/blocks_cuttin...
The corners are literally cut away.
What doesn't change is the length of the edge, which is a kind of Manhattan distance.
The straight line would be the limit of the edge length, but the edge length does not actually approach that limit; it stays constant.
The area, however, absolutely does approach its limit; in fact you remove half the remaining area each iteration.
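Concretely, assuming the usual construction (an n-step staircase over the hypotenuse of a right triangle with legs w and h, each step w/n wide and h/n tall), the area under the staircase is:

```latex
A_n = \sum_{k=1}^{n} \frac{w}{n}\left(h - \frac{(k-1)\,h}{n}\right)
    = \frac{hw}{2}\left(1 + \frac{1}{n}\right)
    \longrightarrow \frac{hw}{2} \quad (n \to \infty)
```

The excess over the triangle's hw/2 is hw/(2n), so doubling the step count halves what remains, while the staircase length stays w + h at every stage.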
Documentation debt happens when docs and code are decoupled. One fix is to make specs stateful artifacts with change detection. In Shadowbook (disclosure: I built it), specs are files with hashes; when a spec changes, linked issues get flagged and can’t be closed until someone acknowledges the drift. That creates a feedback loop between docs and implementation without “vibe documenting.” It doesn’t solve everything, but it makes contradictions visible and forces a review gate when context shifts.
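The mechanism is simple to sketch. This is not Shadowbook's actual code, just a minimal version of hash-based drift detection:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Minimal hash-based drift detection: record the spec's hash when an
// issue links to it; flag the issue whenever the current hash differs.
interface LinkedIssue {
  id: string;
  specPath: string;
  specHashAtLink: string; // hash recorded when the link was made
  acknowledgedDrift: boolean;
}

function hashSpec(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

function canClose(issue: LinkedIssue): boolean {
  const drifted = hashSpec(issue.specPath) !== issue.specHashAtLink;
  // Drifted issues stay open until someone acknowledges the change.
  return !drifted || issue.acknowledgedDrift;
}
```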
Creating massive amounts of semi-structured data is the missing layer? I can see an argument for that if you're a non-programmer who wants to create something. Although, at some point, it's a form of programming.
As a developer, I would rather just write the code and let AI write the semi-structured data that explains it. Creating reams of flow charts and stories just so an AI can build something properly sounds like hell to me.
> Creating reams of flow charts and stories just so an AI can build something properly sounds like hell to me.
Well yeah, that's why businesses have all those other employees. :)
I'm still trying to understand what this whole thread and blog post are about. Is HN finally seeing the light that AI doesn't replace people? Sure if you're determined enough you can run a business all by yourself, but this was always true. I guess AI can make information more accessible, but so does a search engine, and before that so did books.
Maybe I'm missing something, or we do it differently here, but I think "spec" is defined too narrowly in that article. Start writing the first part of that document in that meeting, and everything ties together neatly.
THE CODE IS THAT LAYER.
If your code does a shit job of capturing the requirements, no amount of markdown will improve your predicament until the code itself is concise enough to be a spec.
Of course you're free to ignore this advice. Lots of the world's code is spaghetti code. You're free to go that direction and reap the reward. Just don't expect to reach any further than mediocrity before your house of cards comes tumbling down, because it turns out "you don't need strong foundations to build tall things anymore" is just abjectly untrue.
Upvoted for that animated gif alone. Best visual I've seen of AI coding results.