AGENTS.md outperforms skills in our agent evals

vercel.com

273 points by maximedupre 17 hours ago


motoboi - an hour ago

Models are not AGI. They are text generators steered to produce text in a way that usefully triggers a harness, which then produces effects like editing files or calling tools.

So the model won’t “understand” that you have a skill and use it. The generation of the text that would trigger the skill usage is learned via Reinforcement Learning with human-generated examples and usage traces.

So why doesn't the model use skills all the time? Because they're a new thing; there aren't enough training samples displaying that behavior.

They also cannot enforce that via RL because skills use human language, which is ambiguous and not formal. Force the model to always use skills via an RL policy and you'll make it dumber.

So, right now, we are generating usage traces that will be used to train future models to get a better grasp of when to use skills or not. Just give it time.

AGENTS.md, on the other hand, is context. Models have been trained to follow context since the dawn of the thing.

tottenhm - 9 hours ago

> In 56% of eval cases, the skill was never invoked. The agent had access to the documentation but didn't use it.

The agent passes the Turing test...

w10-1 - 7 hours ago

The key finding is that "compression" of doc pointers works.

It's barely readable to humans, but directly and efficiently relevant to LLMs (direct reference -> referent, without language verbiage).

This suggests some (compressed) index format that is always loaded into context will replace heuristics around agents.md/claude.md/skills.md.

So I would bet this year we get some normalization of both the indexes and the referenced documentation (esp. matching terms).

Possibly also a side issue: APIs could repurpose their test suites as validation to compare LLM performance on code tasks.

LLMs create huge adoption waves. Libraries/APIs will have to learn to surf them or be limited to usage by humans.

jgbuddy - 8 hours ago

Am I missing something here?

Obviously, directly including context in something like a system prompt will put it in context 100% of the time. You could just as easily take all of an agent's skills, feed them to the agent (in a system prompt, or similar), and it would follow the instructions more reliably.

However, at a certain point you have to use skills, because including everything in the context every time is wasteful, or not possible. This is the same reason Anthropic is doing advanced tool use (ref: https://www.anthropic.com/engineering/advanced-tool-use): there's not enough context to include everything outright.

It's all a context/price trade-off. Obviously, if you have the context budget, just include what you can directly (in this case, compressed into an AGENTS.md).

chr15m - 7 hours ago

I'm not sure if this is widely known, but you can do even better than AGENTS.md.

Create a folder called .context and symlink anything in there that is relevant to the project. For example READMEs and important docs from dependencies you're using. Then configure your tool to always read .context into context, just like it does for AGENTS.md.

This ensures the LLM has all the information it needs right in context from the get-go. Much better performance, cheaper, and fewer mistakes.
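
A minimal sketch of that setup, assuming a Node project and a tool that can be pointed at a directory to always load (the source paths below are just examples):

```ts
// sketch: link dependency docs into a .context/ folder the agent always loads;
// the source paths are illustrative
import { mkdirSync, symlinkSync, existsSync } from "node:fs";
import { basename, resolve } from "node:path";

const docs = [
  "node_modules/next/README.md",
  "docs/architecture.md",
];

mkdirSync(".context", { recursive: true });

for (const src of docs) {
  const dest = resolve(".context", basename(src));
  if (!existsSync(dest)) {
    // symlink so the content stays in sync with the installed dependency
    symlinkSync(resolve(src), dest);
  }
}
```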

thorum - 8 hours ago

The article presents AGENTS.md as something distinct from Skills, but it is actually a simplified instance of the same concept. Their AGENTS.md approach tells the AI where to find instructions for performing a task. That’s a Skill.

I expect the benefit is from better Skill design, specifically, minimizing the number of steps and decisions between the AI’s starting state and the correct information. Fewer transitions -> fewer chances for error to compound.

devonkelley - 2 hours ago

Interesting discussion, but I think this focuses too much on the "did the agent have the right context?" question and not enough on "did the execution path actually work?"

We've found that even with optimal context loading - whether that's AGENTS.md, skills, or whatever - you still get wild variance in outcomes. Same task, same context, different day, different results. The model's having a bad afternoon. The tool API is slow. Rate limits bite you. Something in the prompt format changed upstream.

The context problem is solvable with engineering. The reliability problem requires treating your agent like a distributed system: canary paths, automatic failover, continuous health checks. Most of the effort in production agents isn't "how do I give it the right info?" It's "how do I handle when things work 85% of the time instead of 95%?"

denolfe - 4 hours ago

The PreSession hook from obra/superpowers injects this, along with more logic aimed at stopping the model from rationalizing its way out of using skills:

> If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill. IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.

While this may result in overzealous activation of skills, I've found that if I have a related skill, I _want_ it to be used. It has worked well for me.

verdverm - 7 hours ago

This largely mirrors my experience building my custom agent

1. Start from the Claude Code extracted instructions; they have many things like this in there. Their knowledge sharing in docs and blog posts on this aspect is second to none

2. Use AGENTS.md as a table of contents and sparknotes, put them everywhere, load them automatically

3. Have topical markdown files / skills

4. Make great tools. This is still hard for me to explain; there's lots of overlap with MCP and skills, and conceptually they are the same to me

5. Iterate, experiment, do weird things, and have fun!

I changed read/write_file to put file contents in the state and present them in the system prompt, same for the AGENTS.md. Now working on evals to show how much better this is, because anecdotally, it kicks ass
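
A stripped-down sketch of that read-file-into-state idea (names like AgentState and renderSystemPrompt are made up for illustration, not the actual implementation):

```ts
// hypothetical sketch: file reads land in agent state, and the state is
// re-rendered into the system prompt each turn instead of living in chat history
import { readFileSync } from "node:fs";

type AgentState = {
  agentsMd: string;                   // contents of AGENTS.md, always present
  openFiles: Record<string, string>;  // path -> latest file contents
};

function readFileTool(state: AgentState, path: string): string {
  const contents = readFileSync(path, "utf8");
  state.openFiles[path] = contents;   // stored in state, not appended to messages
  return `Loaded ${path} into context.`;
}

function renderSystemPrompt(state: AgentState): string {
  const files = Object.entries(state.openFiles)
    .map(([path, contents]) => `### ${path}\n${contents}`)
    .join("\n\n");
  return `${state.agentsMd}\n\n## Open files\n\n${files}`;
}
```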

carterschonwald - 13 minutes ago

static linking vs dynamic, but we don't know the actual config and setup, and also the choice totally changes the problem

jryan49 - 9 hours ago

Something that I always wonder with each blog post comparing different types of prompt engineering is did they run it once, or multiple times? LLMs are not consistent for the same task. I imagine they realize this of course, but I never get enough details of the testing methodology.

msp26 - an hour ago

This doesn't surprise me.

I have a SKILL.md for marimo notebooks with instructions in the frontmatter to always read it before working with marimo files. But half the time Claude Code still doesn't invoke it even with me mentioning marimo in the first conversation turn.

I've resorted to typing "read marimo skill" manually, and that works fine. Technically you can use skills with slash commands, but that automatically sends off the message too, which just wastes time.

But the actual concept of instructions to load in certain scenarios is very good and has been worth the time to write up the skill.

armcat - 3 hours ago

Firstly, this is great work from Vercel - I am especially impressed with the evals setup (evals are the most undervalued component in any project IMO). Secondly, the result is not surprising; I've consistently seen an increase in performance when you always include an index (or in my case, a Table of Contents as a JSON structure) in your system prompt. Applying this outside of coding agents (like classic document retrieval) also works very well!
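
For the curious, such an index might look roughly like this (the topics and file paths are made up):

```ts
// illustrative shape of a docs index embedded in the system prompt;
// topics and paths are made up
const docIndex = {
  routing: ".next-docs/routing.md",
  caching: ".next-docs/caching.md",
  "server-actions": ".next-docs/server-actions.md",
};

const systemPrompt = [
  "You are a coding agent for this repository.",
  "Documentation index (read the listed file before working on that topic):",
  JSON.stringify(docIndex, null, 2),
].join("\n");
```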

bandrami - 2 hours ago

Blackbox oracles make bad workflows, and tend to produce a whole lot of cargo culting. It's this kind of opacity (why does the markdown outperform skills? there's no real way to find out, even with a fully open or in-house model, because the nature of the beast is that the execution path in a model can't be predicted) that makes me shy away from saying LLMs are "just another tool". If I can't see inside it -- and if even the vendor can't really see inside of it -- there's something fundamentally different.

whinvik - an hour ago

When we were trying to build our own agents, we put quite a bit of effort into evals, which was useful.

But switching over to using coding agents, we never did the same. Feels like building an eval set will be an important part of what engineering orgs do going forward.

BenoitEssiambre - 7 hours ago

Wouldn't this have been more readable with a \n newline instead of a pipe character as a separator? This wouldn't have made the prompt longer.

wakeless - 2 hours ago

I did a similar set of evals myself, utilising the baseline capabilities that Phoenix (Elixir) ships with, and then skillified them.

Regularly the skills were not being loaded and thus not utilised. The outputs themselves were fine. This suggested that, at some stage through the improvements of the models, the baseline AGENTS.md had become redundant.

someguyiguess - 3 hours ago

The problem is that AGENTS.md is only read on initial load. Once context grows too large, the agent will not reload the .md file and loses/forgets the info from AGENTS.md.

thevinter - 7 hours ago

I'm a bit confused by their claims. Or maybe I'm misunderstanding how Skills should work. But from what I know (and the small experience I've had with them), skills are meant to be specifications for niche and well-defined areas of work (e.g. building the project, running custom pipelines, etc.).

If your goal is to always give a permanent knowledge base to your agent that's exactly what AGENTS.md is for...

holocen - 3 hours ago

Prompted and built a bit of an extension of skills.sh with https://passivecontext.dev. It basically just takes the skill and creates that "compressed" index. You still have to install the skill and all that, but it might give others a bit of a shortcut to experiment with.

meatcar - 7 hours ago

What if, instead of needing to run a codemod to cache per-lib docs locally, documentation could be distributed alongside a given lib as a dev dependency, version-locked, and accessible locally as plaintext? All docs could be linked in node_modules/.docs (like binaries are in .bin). It would be a sort of collection of manuals.

What a wonderful world that would be.
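
As a hypothetical sketch, a postinstall step could collect whatever README each installed package ships into that folder (purely illustrative):

```ts
// hypothetical postinstall sketch: collect per-package docs into
// node_modules/.docs, the way executables end up in node_modules/.bin
import { mkdirSync, readdirSync, existsSync, symlinkSync } from "node:fs";
import { join, resolve } from "node:path";

const docsDir = "node_modules/.docs";
mkdirSync(docsDir, { recursive: true });

for (const pkg of readdirSync("node_modules")) {
  if (pkg.startsWith(".")) continue;              // skip .bin, .docs, ...
  const readme = join("node_modules", pkg, "README.md");
  const dest = join(docsDir, `${pkg}.md`);
  if (existsSync(readme) && !existsSync(dest)) {
    symlinkSync(resolve(readme), dest);           // version-locked to what's installed
  }
}
```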

underlines - 3 hours ago

Oh god, this scales badly and bloats your context window!

Just create an MCP server that does embedding retrieval, or agentic retrieval with a sub-agent, over your framework docs.

Finally, add an instruction to AGENTS.md to look things up using that MCP.
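
Roughly, the retrieval core such a server would wrap could look like this; embed() is a placeholder for whatever embedding API you use, and the chunks are assumed to be pre-embedded framework docs:

```ts
// sketch of the retrieval core a docs-lookup MCP tool could wrap;
// embed() is a placeholder and the chunks are assumed to be pre-embedded docs
type DocChunk = { path: string; text: string; vector: number[] };

declare function embed(text: string): Promise<number[]>; // assumed to exist elsewhere

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function lookupDocs(query: string, chunks: DocChunk[], k = 3): Promise<DocChunk[]> {
  const q = await embed(query);
  return [...chunks]
    .sort((x, y) => cosine(q, y.vector) - cosine(q, x.vector))
    .slice(0, k); // top-k chunks go back into the agent's context
}
```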

gpm - 4 hours ago

Compressing information in AGENTS.md makes a ton of sense, but why are they measuring their context in bytes and not tokens!?

minimal_action - 4 hours ago

It's very interesting, but presenting success rates without any measure of error, or at least inline details about the number of iterations, is unprofessional. Especially for small differences, or when you found the "same" performance.

shinhyeok - an hour ago

But didn't you guys release skills.sh?

jascha_eng - 4 hours ago

This does not normalize for tokens used. If their skill description were as large as the docs index and contained all the reasons the LLM might want to use the skill, it would likely perform much better than just one sentence as well.

pietz - 9 hours ago

Isn't it obvious that an agent will do better if it internalizes the knowledge of something instead of having the option to request it?

Skills are new. Models haven't been trained on them yet. Give it 2 months.

newzino - 8 hours ago

The compressed agents.md approach is interesting, but the comparison misses a key variable: what happens when the agent needs to do something outside the scope of its instructions?

With explicit skills, you can add new capabilities modularly - drop in a new skill file and the agent can use it. With a compressed blob, every extension requires regenerating the entire instruction set, which creates a versioning problem.

The real question is about failure modes. A skill-based system fails gracefully when a skill is missing - the agent knows it can't do X. A compressed system might hallucinate capabilities it doesn't actually have because the boundary between "things I can do" and "things I can't" is implicit in the training rather than explicit in the architecture.

Both approaches optimize for different things. Compressed optimizes for coherent behavior within a narrow scope. Skills optimize for extensibility and explicit capability boundaries. The right choice depends on whether you're building a specialist or a platform.

songodongo - 7 hours ago

> When it needs specific information, it reads the relevant file from the .next-docs/ directory.

I guess you need to make sure your file paths are self-explanatory and fairly unique, otherwise the agent might bring extra documentation into the context trying to find which file had what it needed?

smcleod - 8 hours ago

Sounds like they've been using skills incorrectly if they're finding their agents don't invoke the skills. I have Claude Code agents calling my skills frequently, almost every session. You need to make sure your skill descriptions are well defined and describe when to use them, and that your tasks/goals clearly set out requirements that align with the available skills.

onnimonni - 4 hours ago

Would someone know if their eval tests are open source and where I could find them? Seems useful for iterating on Claude Code behaviour.

ares623 - 9 hours ago

2 months later: "Anthropic introduces 'Claude Instincts'"

AndyNemmity - 4 hours ago

My experience agrees with this.

Which is why I use a skill that is a command that routes requests to agents and skills.

rao-v - 9 hours ago

In a month or three we’ll have the sensible approach, which is smaller, cheaper, faster models optimized for looking at a query and identifying which skills/context to provide in full to the main model.

It’s really silly to waste big-model tokens on throat-clearing steps.

keeganpoppen - 6 hours ago

I don't know why, but this just feels like the most shallow “I compare LLMs based on the specs” kind of analysis you can get… it has extreme “we couldn’t get the LLM to intuit what we wanted to do, so we assumed that it was a problem with the LLM and we overengineered a way to make better prompts completely by accident” energy…

sheepscreek - 8 hours ago

It seems their tests rely on Claude alone. It’s not safe to assume that Codex or Gemini will behave the same way as Claude. I use all three and each has its own idiosyncrasies.

EnPissant - 9 hours ago

This is confusing.

TFA says they added an index to AGENTS.md that told the agent where to find all documentation, and that was a big improvement.

The part I don't understand is that this is exactly how I thought skills work. The short descriptions are given to the model up-front and then it can request the full documentation as it wants. With skills this is called "Progressive disclosure".

Maybe they used more effective short descriptions in the AGENTS.md than they did in their skills?

tanishqkanc - 4 hours ago

this is only gonna be an issue until the next-gen models, when the labs will aggressively post-train the models to proactively call skills.

sothatsit - 8 hours ago

This seems like an issue that will be fixed in newer model releases that are better trained to use skills.

hahahahhaah - 5 hours ago

Next.js sure makes a good benchmark for AI capability (and for clarity... this is not a compliment).

meeech - 7 hours ago

question: anyone recognize that eval UI or is it something they made in-house?

heliumtera - 7 hours ago

you are telling me that a markdown file saying:

*You are the Super Duper Database Master Administrator of the Galaxy*

does not improve the model's ability to reason about databases?

CjHuber - 8 hours ago

That feels like a stupid article. Well, of course if you have one single thing you want to optimize, putting it into AGENTS.md is better. But the advantage of skills is exactly that you don't cram them all into the AGENTS file. Let's say you had 3 different elaborate things you want the agent to do: good luck putting them all in your AGENTS.md and later hoping that the agent remembers any of it. After all, the key advantage of skills is that they get loaded at the end of the context when needed.

ChrisArchitect - 7 hours ago

Title is: AGENTS.md outperforms skills in our agent evals

thom - 8 hours ago

You need the model to interpret documentation as policy you care about (in which case it will pay attention) rather than as something it can look up if it doesn’t know something (which it will never admit). It helps to really internalise the personality of LLMs as wildly overconfident but utterly obsequious.

smrtinsert - 6 hours ago

Are people running into mismatched code vs project versions a lot? I've worked on Python and Java codebases with Claude Code and have yet to run into a version mismatch issue. I think maybe once it got confused about the API available in Python, but it fixed it by itself. From other blog posts similar to this one, it would seem to be a widespread problem, but I have yet to see it as a big problem in my day job or personal projects.

delduca - 8 hours ago

Ah nice… vercel is vibecoded
