What is agentic engineering?
simonwillison.net118 points by lumpa 5 hours ago
118 points by lumpa 5 hours ago
I don't think we should be making this distinction. We're still engaged in software engineering. This isn't a new discipline, it's a new technique. We're still using testing, requirements gathering, etc. to ensure we've produced the correct product and that the product is correct. Just with more automation.
I agree, partly. I feel the main goal of the term “agentic engineering” is to distinguish the new technique of software engineering from “Vibe Coding.” Many felt vibe coding insinuated you didn’t know what you were doing; that you weren’t _engineering_.
In other words, “Agentic engineering” feels like the response of engineers who use AI to write code, but want to maintain the skill distinction to the pure “vibe coders.”
> “Agentic engineering” feels like the response of engineers who use AI to write code, but want to maintain the skill distinction to the pure “vibe coders.”
If there's such. The border is vague at most.
There're "known unknowns" and "unknown unknowns" when working with systems. In this terms, there's no distinction between vibe-coding and agentic engineering.
My definition to "vibe coding" is the one where you prompt without ever looking at the code that's being produced.
The moment you start paying attention to the code it's not vibe coding any more.
Update: I added that definition to the article: https://simonwillison.net/guides/agentic-engineering-pattern...
What if you review 50%? Or 10%? Or only 1%, is it not vibe coding yet?
Where is the borderline?
I think the borderline is when you take responsibility for the code, and stop blaming the LLM for any mistakes.
That's the level of responsibility I want to see from people using LLMs in a professional context. I want them to take full ownership of the changes they are producing.
I don't blame the LLM for mistakes in my vibe coded personal software, it's always my fault. To me it's like this:
80%+: You don't understand the codebase. Correctness is ensured through manual testing and asking the agent to find bugs. You're only concerned with outcomes, the code is sloppy.
50%: You understand the structure of the codebase, you are skimming changes in your session, but correctness is still ensured mostly through manual testing and asking the LLM to review. Code quality is questionable but you're keeping the technical debt from spinning out of control.
20%-: You've designed the structure of the codebase, you are writing most of the code, you are probably only copypasting code from a chatbot if you're generating code at all. The code is probably well made and maintainable.
Yeah, I see agentic engineering as a sub-field or a technique within software engineering.
I entirely agree that engineering practices still matter. It has been fascinating to watch how so many of the techniques associated with high-quality software engineering - automated tests and linting and clear documentation and CI and CD and cleanly factored code and so on - turn out to help coding agents produce better results as well.
My preferred definition of software engineering is found in the first chapter of Modern Software Engineering by David Farley
Software engineering is the application of an empirical, scientific approach to finding efficient, economic solutions to practical problems in software.
As for the practitioner, he said that they: …must become experts at learning and experts at managing complexity
For the learning part, that means Iteration
Feedback
Incrementalism
Experimentation
Empiricism
For the complexity part, that means Modularity
Cohesion
Separation of Concerns
Abstraction
Loose Coupling
Anyone that advocates for agentic engineering has been very silent about the above points. Even for the very first definition, it seems that we’re no longer seeking to solve practical problems, nor proposing economical solutions for them.That definition of software engineering is a great illustration of why I like the term agentic engineering.
Using coding agents to responsibly and productively build good software benefits from all of those characteristics.
The challenge I'm interested in is how we professionalize the way we use these new tools. I want to figure out how to use them to write better software than we were writing without them.
See my definition of "good code" in a subsequent chapter: https://simonwillison.net/guides/agentic-engineering-pattern...
I’ve read the chapter and while the description is good, there’s no actual steps or at least a general direction/philosophy on how to get there. It does not need to be perfect, it just needs to be practical. Then we could contrast the methodology with what we already have to learn the tradeoffs, if they can be combined, etc…
Anything that relates to “Agentic Engineering” is still hand-wavey or trying to impose a new lens on existing practices (which is why so many professionals are skeptical)
ADDENDUM
I like this paragraph of yours
We need to provide our coding agents with the tools they need to solve our problems, specify those problems in the right level of detail, and verify and iterate on the results until we are confident they address our problems in a robust and credible way.
There’s a parallel that can be made with Unix tools (best described in the Unix Power Tools) or with Emacs. Both aim to provide the user a set of small tools that can be composed and do amazing works. One similar observation I made from my experiment with agents was creating small deterministic tools (kinda the same thing I make with my OS and Emacs), and then let it be the driver. Such tools have simple instructions, but their worth is in their combination. I’ve never have to use more than 25 percent of the context and I’m generally done within minutes.
> there’s no actual steps or at least a general direction/philosophy on how to get there
That's what the rest of the guide is meant to cover: https://simonwillison.net/guides/agentic-engineering-pattern...
You can do these things with AI, especially if you start off with a repo that demonstrates how, for the agent to imitate. I do suggest collaborating with the agent on a plan first.
Actually, if you defer all your coding decisions to agents, then you're not doing engineering at all. You don't say you're doing "contractor engineering" when you pay some folks to write your app for you. At that point, you are squarely in the management field.
The term feels broken when adhering to standard naming conventions, such as Mechanical Engineering or Electrical Engineering, where "Agentic Engineering" would logically refer to the engineering of agents
Yeah, Armin Ronacher has been calling it "agentic coding" which does at least make it clear that it's not a general engineering thing, but specifically a code related thing.
I think “agent engineering” could refer to the latter, if a distinction needs to be made. I do get what you’re saying, but when I heard the term, I personally understood its meaning.
Agentic Management doesn't have quite the same ring to it.
That's kind of how it feels though. I get the impression I'm micro managing various Claude code instances in multiple terminals.
one thing id add to the 'traditional practices still matter' point: in agentic systems with real side effects (API calls, sending messages, writing to external services), idempotency goes from nice to have to the primary reliability invariant.
in regular software, if a function runs twice you get a wrong answer. in an agent that sends outreach messages, a restart means every action replays. test coverage of the agent's logic won't catch this -- you have to explicitly design the execution graph so each node is restart-safe.
it's not a new problem -- distributed systems have dealt with exactly-once delivery forever. but agentic systems drag that infrastructure concern into application code in a way most teams aren't used to.
One thing missing from most "agentic engineering" discussions: the security implications of tool API choices that happen at runtime, invisible to both the developer and the user.
Concrete example: when an agent reads a web page via Chrome's DevTools MCP, it has multiple extraction paths. The default (Accessibility.getFullAXTree) filters display:none elements — safe against the most common prompt injection hiding technique. But if the agent decides the accessibility tree doesn't return enough content (which happens often — it only gives you headings, buttons, and labels), it falls back to evaluate_script with document.body.textContent. That returns ALL text nodes including hidden ones.
We tested this: same page, same browser, same CDP connection. innerText returns 1,078 characters of clean hotel listing. textContent returns 2,077 characters — the same listing plus a hidden injection telling the agent to book a $4,200 suite instead of $189.
The developer didn't choose which API the agent uses. The user didn't either. The agent made that call at runtime based on what the accessibility tree returned. "Agentic engineering" as a discipline needs to account for these invisible decision boundaries — the security surface isn't just the tools you give the agent, it's which tool methods the agent decides to call.
“It’s not vibe coding, it’s agentic engineering”
From Kai Lentit’s most recent video: https://youtu.be/xE9W9Ghe4Jk?t=260
There should be more willingness to have agents loudly fail with loud TODOs rather than try and 1 shot everything.
At the very least, agentic systems must have distinct coders and verifiers. Context rot is very real, and I've found with some modern prompting systems there are severe alignment failures (literally 2023 LLM RL levels of stubbing out and hacking tests just to get tests "passing"). It's kind of absurd.
I would rather an agent make 10 TODO's and loudly fail than make 1 silent fallback or sloppy architectural decision or outright malicious compliance.
This wouldn't work in a real company because this would devolve into office politics and drudgery. But agents don't have feelings and are excellent at synthesis. Have them generate their own (TEMPORARY) data.
Agents can be spun off to do so many experiments and create so many artifacts, and furthermore, a lot more (TEMPORARY) artifacts is ripe for analysis by other agents. Is the theory, anyways.
The effectively platonic view that we just need to keep specifying more and more formal requirements is not sustainable. Many top labs are already doing code review with AI because of code output.
Curious how this evolves when agents start retaining memory across projects. Feels like that could change how we think about the tool loop.
I think there is a meaningful distinction here. It's true that writing code has never been the sole work of a software engineer. However there is a qualitative difference between an engineer producing the code themselves and an engineer managing code generated by an LLM. When he writes there is "so much stuff" for humans to do outside of writing code I generally agree and would sum it up with one word: Accountability. Humans have to be accountable for that code in a lot of ways because ultimately accountability is something AI agents generally lack.
I think within the industry and practice there's going to be a renewed philosophical and psychological examination of exactly what accountability is over the next few years, and maybe some moral reckoning about it.
What makes a human a suitable source of accountability and an AI agent an unsuitable one? What is the quantity and quality of value in a "throat to choke", a human soul who is dependent on employment for income and social stature and is motivated to keep things from going wrong by threat of termination?
I've discovered recently as code gets cheaper and more reliable to generate that having the LLM write code for new elements in response to particular queries, with context, is working well.
Kind of like these HTML demos, but more compact and card-like. Exciting the possibilities for responsive human-readable information display and wiki-like natural language exploration as models get cheaper.
I’ve been using the term “agentic coding” more often, because I am always shy to claim that our field rises to the level of the engineers that build bridges and rockets. I’m happy to use “agentic engineering” however, and if Simon coins it, it just might stick. :) Thanks for sharing your best practices, Simon!
I decided to go with it after z.AI used it in their GLM-5 announcement: https://z.ai/blog/glm-5 - I figured if the Chinese AI labs have picked it up that's a good sign it's broken out.
Agentic engineering is working from documentation -> code and automating the translation process via agents. This is distinct from the waterfall process which describes the program, but not the code itself, and waterfall documentation cannot be translated directly to code. Agent plans and session have way more context and details that are not captured in waterfall due to differences in scope.
Sure, you could argue it's like writing code that gets optimized by the compiler for whatever CPU architecture you're using. But the main difference between layers of abstraction and agentic development is the "fuzzyness" of it. It's not deterministic. It's a lot more like managing a person.
Is there any article explaining how AI tools are evolving since the release of ChatGPT? Everything upto MCP makes sense to me - but since then it feels like there is not clear definition on new AI jergons.
The skepticism makes sense to me. The core issue isn't wrong outputs—it's that there's no standard way to see what the agent was actually doing when it produced them. Without some structured view of tool call patterns, norm deviations, behavioral drift, verification stays manual and expensive. The non-determinism problem and the observability problem feel like the same problem to me.
You are in violation of the HN guidelines. Please review the link at the bottom of the page ( https://news.ycombinator.com/newsguidelines.html ).
Agents are ? and the answer is circular, "agents run tools in a loop." And this guy knows things?! No. BS.
Which definition of "agent" do you like to use?
I just bulked up that section by adding a couple of extra sentences, since you're right that I didn't actually define "agent" there clearly: https://simonwillison.net/guides/agentic-engineering-pattern...
The bounded vs unbounded distinction is spot on. In my experience, the real unlock with agents isn't single-agent capability — it's running multiple agents on independent tasks in parallel. One agent refactoring module A while another writes tests for module B. The constraint is making sure tasks are truly independent, which forces you to think about architecture more carefully upfront.
the practice of developing software with the assistance of coding agents.
Spot on.
After three months of seeing what agentic engineering produces first-hand, I think there's going to be a pretty big correction.
Not saying that AI doesn't have a place, and that models aren't getting better, but there is a seriously delusional state in this industry right now..
Yes I'm with you. I spent the last 2 months heavily doing "agentic engineering" and I don't think it's optimal to work like that as a default.
LLMs are for sure useful and a productivity boost but generating 99% of your code with it is way overdoing it.
And we haven't even started to see the security ramifications... my money is on the black hats in this race.
We are starting to see them, also the bugs too.
But to your point I think this year it's quite likely we'll see at least 1 or 2 major AI-related security incidents..
I've been predicting a "challenger disaster" moment: https://simonwillison.net/2026/Jan/8/llm-predictions-for-202...
Staring at your phone while waiting for your agent to prompt you again. Code monkey might actually be real this time
How is this different than Prompt Engineering?
I think prompt engineering is obsolete at this point, partly because it's very hard to do better than just directly stating what you want. Asking for too much tone modification, role-playing or output structuring from LLMs very clearly degrades the quality of the output.
"Prompt engineering" is a relic of the early hypothesis that how you talk to the LLM is gonna matter a lot.
Prompt engineering didn't imply coding agents. That's the big difference: we are now using tools write and execute the code, which makes for massively more useful results.
Prompt engineering was coined before tooling like Claude Code existed, when everyone copied and pasted from chatgpt to their editor and back.
Agentic coding highlights letting the model directly code on your codebase. I guess its the next level forward.
I keep seeing agentic engineering more even in job postings, so I think this will be the terminology used to describe someone building software whilst letting an AI model output the code. Its not to be confused with vibe coding which is possible with coding agents.
The halo effect in action.
Previously on the guide Agentic Engineering Patterns:
I think we all know what Agentic engineering is, the question is when should it not be used instead of classical engineering?
I mean agents as concept has been around since the 70s, we’ve added LLMs as an interface, but the concept (take input, loop over tools or other instructions, generate output) are very very old.
Claude gave a spot on description a few months back,
The honest framing would be: “We finally have a reasoning module flexible enough to make the old agent architectures practical for general-purpose tasks.” But that doesn’t generate VC funding or Twitter engagement, so instead we get breathless announcements about “agentic AI” as if the concept just landed from space.
[flagged]
> Where it breaks down is any task where you discover the requirements during implementation
Often you can find these during design. Design phases felt like a waste of time when you could just start coding and find the issues as you go. Now I find it faster to do a detailed design, then hand it over to agents to code. You can front-load the hard decisions and document the plan.
Sometimes the agent might discover a real technical constraint during dev, but that's rare if the plan is detailed. If so you can always update the plan, and just restart the agent from scratch.
This is my (cheesily named) process: https://github.com/scosman/vibe-crafting
I find it quite useful to work out, plan, and refine ideas with agents. An agents ability to call out approaches you haven't thought of is really powerful. I find it useful to steer their feedback and proposals to the exact constraints you have or give yourself a confidence check on if you are leaning toward a solution for the right reasons. The best is when you can test 2-3 avenues and being able to come back and evaluate the results. Normally you would commit to one and spend all your time on that approach, make an assessment it was bad enough to try something else and move on. I find agents completely flip the script on research and planning. I find I am better able to work on hard problems then ever before with these tools. I think people severely limit themselves if they are only using them at the "build it" phase.
Honestly? I've been playing with using LLMs specifically for that reason. I'm far more likely to make prototypes that I specifically intend to throw away during the development process.
I try out ideas that are intended to explore some small aspect of a concept, and just ask the LLM to generate the rest of whatever scaffold is needed to verify the part that I'm interested in. Or use an LLM to generate just a roughest MVP prototype you could imagine, and start using it immediately to calibrate my initial intuition about the problem space. Eventually you get to the point where you've tried out your top 3-5 ideas for each different corner of your codebase, and you can really nail down your spec and then its off to the races building your "real" version.
I have a mechanical engineering background, so I'm quite used to the concept of destructive validation testing. As soon as I made that connection while exploring a new idea via claude code, it all started feeling much more natural. Now my coding process is far more similar to my old physical product design process than I'd ever imagined it could be.
The premise is flawed:
Now that we have software that can write working code ...
While there are other points made which are worth consideration on their own, it is difficult to take this post seriously given the above.If you haven't seen coding agents produce working code you've not been paying attention for the past 3-12 months.
I get the impression there’s a very strong bimodal experience of these tools and I don’t consider that an endorsement of their long-term viability as they are right now. For me, I am genuinely curious why this is. If the tool was so obviously useful and a key part of the future of software engineering, I would expect it to have far more support and adoption. Instead, it feels like it works for selected use cases very well and flounders around in other situations.
This is not an attack on the tech as junk or useless, but rather that it is a useful tech within its limits being promoted as snake oil which can only end in disaster.
My best guess is that the hype around the tooling has given the false impression that it's easy to use - which leads to disappointment when people try it and don't get exactly what they wanted after their first prompt.
I think you and a lot of people have spent a lot of energy getting as much out of these models as you can and I think that’s great, but I agree that it’s not what they’re being sold as and there is plenty of space for people to treat these tools more conservatively. The idea that is being paraded around is that you can prompt the AI and the black box will yield a fully compliant, secure and robust product.
Rationality has long since gone out of the window with this and I think that’s sorta the problem. People who don’t understand these tools see them as a way to just get rid of noisome people. The fact that you need to spend a fair amount of money, fiddle with them by cajoling them with AGENTS.md, SKILL.md, FOO.md, etc. and then having enough domain experience to actually know when they’re wrong.
I can see the justification for a small person shop spending the time and energy to give it a try, provided the long-term economics of these models makes them cost-effective and the model is able to be coerced into working well for their specific situation. But we simply do not know and I strongly suspect there’s been too much money dumped into Anthropic and friends for this to be an acceptable answer right now as illustrated by the fact that we are seeing OKRs where people are being forced to answer loaded questions about how AI tooling has improved their work.
> If you haven't seen coding agents produce working code you've not been paying attention for the past 3-12 months.
If you believe coding agents produce working code, why was the decision below made?
Amazon orders 90-day reset after code mishaps cause
millions of lost orders[0]
0 - https://www.businessinsider.com/amazon-tightens-code-control...Good journalism would include : https://www.aboutamazon.com/news/company-news/amazon-outage-...
I find it somewhat overblown.
Also, I think there's a difference between working code and exceptionally bug-free code. Humans produce bugs all the time. I know I do at least.
You appear to be confusing "produce working code" with "exclusively produce working code".
> You appear to be confusing "produce working code" with "exclusively produce working code".
The confusion is not mine own. From the article cited:
Dave Treadwell, Amazon's SVP of e-commerce services, told
staff on Tuesday that a "trend of incidents" emerged since
the third quarter of 2025, including "several major"
incidents in the last few weeks, according to an internal
document obtained by Business Insider. At least one of
those disruptions were tied to Amazon's AI coding assistant
Q, while others exposed deeper issues, another internal
document explained.
Problems included what he described as "high blast radius
changes," where software updates propagated broadly because
control planes lacked suitable safeguards. (A control plane
guides how data flows across a computer network).
It appears to me that "Amazon's SVP of e-commerce services" desires producing working code and has identified the ramifications of not producing same.That's why I'm writing a guide about how to use this stuff to produce good code.