Show HN: Axe – A 12MB binary that replaces your AI framework

github.com

223 points by jrswab 4 days ago


I built Axe because I got tired of every AI tool trying to be a chatbot.

Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable... AI agents should be too.

Axe treats LLM agents like Unix programs. Each agent is a TOML config with a focused job: code reviewer, log analyzer, commit message writer. You run them from the CLI, pipe data in, and get results out. You can chain them together with pipes, or trigger them from cron, git hooks, or CI.
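
An agent definition is just a small TOML file, roughly like this (simplified sketch; the field names and model name here are illustrative, see the repo for the exact schema):

```toml
# reviewer.toml: a hypothetical code-review agent (illustrative fields)
name = "reviewer"
model = "claude-sonnet-4"   # any configured provider/model
system_prompt = """
Review the diff on stdin. Report bugs and risky changes, nothing else.
"""
tools = ["web_search"]      # built-in tools to expose
max_depth = 2               # sub-agent delegation limit
```

Then `git diff | axe run reviewer` feeds the diff in on stdin and prints the review.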

What Axe is:

- 12MB binary, two dependencies. No framework, no Python, no Docker (unless you want it)

- Stdin piping: `git diff | axe run reviewer` just works

- Sub-agent delegation: agents can call other agents via tool use, depth-limited

- Persistent memory: if you want, agents can remember across runs without you managing state

- MCP support: Axe can connect any MCP server to your agents

- Built-in tools: web_search and url_fetch out of the box

- Multi-provider: Anthropic, OpenAI, Ollama, or anything in models.dev format

- Path-sandboxed file ops: keeps agents locked to a working directory

Written in Go. No daemon, no GUI.

What would you automate first?

bensyverson - 4 days ago

It's exciting to see so much experimentation when it comes to form factors for agent orchestration!

The first question that comes to mind is: how do you think about cost control? Putting a ton in a giant context window is expensive, but unintentionally fanning out 10 agents with a slightly smaller context window is even more expensive. The answer might be "well, don't do that," and that certainly maps to the UNIX analogy, where you're given powerful and possibly destructive tools, and it's up to you to construct the workflow carefully. But I'm curious how you would approach budget when using Axe.

CraigJPerry - 3 days ago

I've had good success with something along these lines, but perhaps a bit more raw:

    - claude takes a -p option
    - I have a bunch of tiny scripts; each script is an agent but it only does one tiny task
    - scripts can be composed in a Unix pipeline
For example:

    $ git diff --staged | ai-commit-msg | git commit -F -
Where ai-commit-msg is a tiny agent:

    #!/usr/bin/env bash
    # ai-commit-msg: stdin=git diff, stdout=conventional commit message
    # Usage: git diff --staged | ai-commit-msg
    set -euo pipefail
    source "${AGENTS_DIR:-$HOME/.agents}/lib/agent-lib.sh"
    
    SYSTEM=$(load_skills \
        core/unix-output.md \
        core/be-concise.md \
        domain/git.md \
        output/plain-text.md)
    
    SYSTEM+=$'\n\nTask: Given a git diff on stdin, output a single conventional commit message. One line only.'
    
    run_agent "$SYSTEM"
And you can see that, to keep the agents themselves tiny, they rely on a little lib to load the various skills and optionally apply a guard / post-exec validator. Those validators are usually a simple grep or similar to make sure there were no writes outside a given dir, but sometimes they enforce output correctness (always jq in my examples so far...). In theory the guard could be another claude -p call if I needed a semantic check.

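For anyone curious, here's a minimal sketch of what such an agent-lib.sh could look like. This is illustrative only: the skills directory layout is an assumption, and the claude CLI flags may differ across versions.

```shell
#!/usr/bin/env bash
# Hypothetical agent-lib.sh sketch (illustrative, not the real lib).

# load_skills: concatenate skill markdown files into one system prompt.
# Assumes skills live under $AGENTS_DIR/skills/.
load_skills() {
    local base="${AGENTS_DIR:-$HOME/.agents}/skills"
    local out="" skill
    for skill in "$@"; do
        out+="$(cat "$base/$skill")"$'\n'
    done
    printf '%s' "$out"
}

# run_agent: one-shot LLM call; the pipeline's stdin passes straight through.
# The exact claude flags are an assumption; check your CLI version.
run_agent() {
    local system="$1"
    claude -p --append-system-prompt "$system"
}
```

The nice part of this split is that skills are plain markdown files, so they can be versioned and shared between agents.
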
athrowaway3z - 3 days ago

I'm not sure if HN is being flooded with bots or if the majority of people here nowadays lack a sense of simplicity.

Anybody looking to do interesting things should instantly ignore any project that mentions "persistent memory". It speaks of scope creep or complexity obfuscation.

If a tool wants to include "persistent memory" it needs to write the 3 sentence explanation of how their scratch/notes files are piped around and what it achieves.

Not just claim "persistent memory".

I might even go so far as to say that any project using the terminology "memory" is itself doomed to spend too much time and tokens building scaffolding for abstractions that don't work.

Multicomp - 3 days ago

This is what I've been trying to get nanobot to do, so thanks for sharing this. I plan to use this for workflow definitions like filesystems.

I have a known workflow to create an RPG character with steps. Let's automate some of the boilerplate by having a succession of LLMs read my preferences about each step and apply their particular pieces of data to that step of the workflow, outputting their results to successive subdirectories, so I can pub/sub the entire process and make edits to intermediate files to tweak results as I desire.

Now that's cool!

mccoyb - 3 days ago

Cool work!

Aside, but 12 MB is ... large ... for such a thing. For reference, an entire HTTP (including crypto, TLS) stack with LLM API calls in Zig would net you a binary ~400 KB on ReleaseSmall (statically linked).

You can implement an entire language, compiler, and a VM in another 500 KB (or less!)

I don't think 12 MB is an impressive badge here?

aa-jv - 3 days ago

This is exactly what I have wanted for a while, so thank you very much!

Disclaimer: I haven't dug into axe enough yet, just going on first impressions.

>No daemon, no GUI.

I love the world we developers live in right now. ;)

>What would you automate first?

In a sense, I have wanted to be able to just add AI to a repo and treat it like the junior developer it is. It's okay if the junior developer will do literally any stupid thing I tell it to do, because I won't tell it to do stupid things.

So, exactly: refactor this code, implement a shim, produce docs for <blah>, construct a build harness, write unit tests, produce a build, diff these codebases, implement this API, do all this on your own branch, and build and test things so that I can review the PR over coffee.

Essentially, three word commands which will encourage the AI to produce better software. Through my repo, so I can just review through the repo.

Okay, that's how I hope things work. Now off to actually dig into axe and give it a try on a few things, thanks very much again.

reacharavindh - 3 days ago

Reminded me of this from my bookmarks.

https://github.com/chr15m/runprompt

armcat - 4 days ago

Great work! Kind of reminds me of ell (https://github.com/MadcowD/ell), which had this concept of treating prompts as small individual programs and you can pipe them together. Not sure if that particular tool is being maintained anymore, but your Axe tool caters to that audience of small short-lived composable AI agents.

hamandcheese - 4 days ago

> Each agent is a TOML config with a focused job. Such as code reviewer, log analyzer, commit message writer. You can run them from the CLI, pipe data in, get results out.

I'm a bit skeptical of this approach, at least for building general purpose coding agents. If the agents were humans, it would be absolutely insane to assign such fine-grained responsibilities to multiple people and ask them to collaborate.

bsoles - 3 days ago

I don't know exactly how these things work, but you may run into copyright/TM issues with Deque's Axe tool: https://www.deque.com/axe/devtools/

kwstx - 2 days ago

Really cool approach, I like the “Unix philosophy” for agents. Curious how you handle state persistence and chaining sub-agents when agents are depth-limited. Also, do you have any strategies for ensuring data consistency across runs, especially when multiple agents interact with the same files?
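
Outside of AI tooling, the way I've usually handled the multiple-writers problem is to serialize access with flock(1). A sketch of the wrapper (generic shell, nothing Axe-specific; the agent commands would be whatever you run):

```shell
# Serialize commands that touch shared files with flock(1) (util-linux).
with_lock() {
    local lockfile=$1; shift
    # -w 30: fail after 30s instead of deadlocking the whole pipeline
    flock -w 30 "$lockfile" "$@"
}

# e.g.: with_lock /tmp/notes.lock some-agent-command
```
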

ColonelPhantom - 3 days ago

I like the idea of LLM-calling as an automation-friendly CLI tool! However, putting all my agents in ~/.config feels antithetical to this. My Bash scripts do not live there either, but rather in a separate script collection, or preferably, at their place of use (e.g. in a repo).

For example, let's say I want to add commit message generation (which I don't think is a great use of LLMs, but it is a practical example) to a repo. I would add the appropriate hook under .git/hooks, but I would also want the agent with its instructions to live inside the repo (perhaps in an `axe` or `agents` directory).

Can Axe load agents from the current folder? Or can that be added?

swaminarayan - 3 days ago

Axe treats LLM agents like Unix programs—small, composable, version-controllable. Are we finally doing AI the Unix way?

multidude - 3 days ago

A problem I have is that the agent's mental model of the system I'm building diverges from reality over time. After discussing that many times and asking it to remember, it becomes frustrating. In the README you say the agent's memory persists across runs; would that solve said problem?

Also, I had to do several refactorings of my agent's constructs and found out that one of them was reinventing stuff, producing a plethora of duplicated functions: e.g. DB connection pools (I had at least four of them simultaneously).

Would Axe require shared state between chained agents? Could it provide it if required?

Orchestrion - 4 days ago

The Unix-style framing resonates a lot.

One thing I’ve noticed when experimenting with agent pipelines is that the “single-purpose agent” model tends to make both cost control and reasoning easier. Each agent only gets the context it actually needs, which keeps prompts small and behavior easier to predict.

Where it gets interesting is when the pipeline starts producing artifacts instead of just text — reports, logs, generated files, etc. At that point the workflow starts looking less like a chat session and more like a series of composable steps producing intermediate outputs.

That’s where the Unix analogy feels particularly strong: small tools, small contexts, and explicit data flowing between steps.

Curious if you’ve experimented with workflows where agents produce artifacts (files, reports, etc.) rather than just returning text.

punkpeye - 4 days ago

What are some things you've automated using Axe?

uhx - 3 days ago

> - Path-sandboxed file ops. Keeps agents locked to a working directory

How is that supposed to work if an agent can simply run the "cat" command instead of using the skill for file reads/writes?

btbuildem - 4 days ago

I really like seeing the movement away from MCP across the various projects. Here the composition of the new with the old (the ol' Unix composability) seems to hum very nicely.

OP, what have you used this on in practice, with success?

boznz - 3 days ago

I will give it a try, I like the idea of being closer to the metal.

A proper self-contained, self-improving AI@home with the AI as the OS is my end goal. I have a nice high-spec but older laptop that I am currently using as a sacrificial pawn for experimenting with this, but there is a big gap in my knowledge and I'm still working through GPT-2 level stuff; also, resources are tight when you're retired. I guess someone will get there this year the way things are going, but I'm happy to have fun until then.

rellfy - 3 days ago

This is a great concept. I fully agree with small, focused and composable design. I've been exploring a similar direction at asterai.io but focusing more on the tool layer than agent layer, with portable WASM components you write once in any language and compose together.

I currently use Claude web with an MCP component for my workflows but axe looks like it could be a nicer and quicker way to work with the tools I have.

snadal - 3 days ago

Nice! I’ll try this soon, and I’m afraid I’ll end up using it a lot.

@jrswab, do you think it would be feasible to limit outgoing connections to a whitelist of domains, URLs, or IP addresses?

I’d like to automate some of my email, calendar, or timesheet tasks, but I’m concerned that a prompt injection could end up exfiltrating or deleting data. In fact, that’s the main reason why I’m not using Openclaw or similar projects with real data yet.

paymenthunter01 - 3 days ago

Nice approach treating LLM agents like Unix programs. The TOML config per agent is clean. I've been working on something in a similar vein for invoice processing — small focused agents that do one thing well. Curious how you handle retries when an upstream LLM provider has intermittent failures mid-pipeline?
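
One generic approach that composes well with pipes is a backoff wrapper around each stage (sketch in plain shell, nothing Axe-specific; the subtlety is buffering stdin so every retry sees the full input):

```shell
# with_retries MAX BASE_DELAY CMD [ARGS...]: retry a pipeline stage with
# exponential backoff. Buffers stdin so each attempt gets the full input.
with_retries() {
    local max=$1 delay=$2; shift 2
    local attempt buf
    buf=$(mktemp)
    cat > "$buf"
    for (( attempt = 1; attempt <= max; attempt++ )); do
        if "$@" < "$buf"; then
            rm -f "$buf"
            return 0
        fi
        if (( attempt < max )); then
            sleep "$delay"
            delay=$(( delay * 2 ))   # exponential backoff
        fi
    done
    rm -f "$buf"
    return 1
}

# e.g.: git diff --staged | with_retries 3 2 axe run commit-msg
```
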

anotherevan - 3 days ago

I really like this idea. Gonna need an "Awesome Axe" page that collects agents.

One idea I'm thinking of, after an agent has been in use for a while and built up an understanding of the task, is something like: "Write a Python script to replace this agent."

I could imagine this would work with agents that are processing log files or other semi-structured data for example.

sameergh - 3 days ago

Interesting approach to slimming down the framework layer. One thing I've been thinking about: as agents get lighter and faster, the attack surface for prompt injection and behavioral drift grows. Are you thinking about any security primitives at this layer?

bmurphy1976 - 3 days ago

This is interesting. I'd be curious to see a bunch more working examples. Personally I like the chat model because I iterate heavily on planning specs and have a lot of back and forth before implementation.

I could see using this once the plan is defined and switching back to chat while iterating on post-implementation cleanup and refactoring.

zahlman - 3 days ago

> 12MB binary, two dependencies. no framework, no Python, no Docker (unless you want it)

Does it do anything CPU-bound on its own, such that it benefits significantly from being a compiled (Go) executable? I actually like having things like this done in Python, since there's more potential to hack around with them.

CuriouslyC - 3 days ago

Why not just run your typical claude/codex/pi/etc with a prompt as the command line/input?

- 3 days ago
[deleted]
mark_l_watson - 4 days ago

If I have time I want to try this today because it matches my LLM-based work style, especially when I am using local models: I have command line tools that help me generate large one-shot prompts that I just paste into an Ollama REPL, then I check back in a while.

It looks like Axe works the same way: fire off a request and later look at the results.

creehappus - 3 days ago

I really like the project, although I would prefer a JSON5 config rather than TOML, which I find annoying to reason about.

0xbadcafebee - 4 days ago

Nice. There's another one also written in Go (https://github.com/tbckr/sgpt), but I'll try this one too. I love that open source creates multiple solutions and you can choose the one that fits you best.

hmokiguess - 3 days ago

looks really cool, how does it differ from something like running claude headless with `claude -p`?

saberience - 4 days ago

I’m having trouble understanding when/where I would use this? Is this a replacement for pi or codex?

stpedgwdgfhgdd - 3 days ago

“ MCP support. Axe can connect any MCP server to your agents”

I just don't see this in the readme… It is not in the Features section at least.

Anyway, I have an MCP server that can post inline comments into a GitLab MR. I would like to try to hook it up to the code reviewer.

TSiege - 4 days ago

This looks really interesting. I'm curious to learn more about security around this project. There's a small section, but I wonder if there's more to be aware of like prompt injection

uchibeke - 3 days ago

Ok, this is interesting. How are you handling guardrails, or the agent going rogue or doing something unintended?

dumbfounder - 4 days ago

Now what we need is a chat interface to develop these config files.

jedbrooke - 4 days ago

looks interesting, I agree that chat is not always the right interface for agents, and an LLM-boosted CLI sometimes feels like the right paradigm (especially for dev-related tasks).

how would you say this compares to similar tools like google’s dotprompt? https://google.github.io/dotprompt/getting-started/

nthypes - 4 days ago

There is no "session" concept?

eikenberry - 3 days ago

Does it support the use of other OpenAI API compatible services like Openrouter?

a1o - 4 days ago

Is the axe drawing actually a hammer?

let_rec - 4 days ago

Is there Gemini support?

koakuma-chan - 3 days ago

https://github.com/jrswab/axe/blob/master/internal/refusal/r...

Lmao

zrail - 4 days ago

Looks pretty interesting!

Tiny note: there's a typo in your repo description.

testingtrade - 3 days ago

amazing work my friend

- 4 days ago
[deleted]
agenticbtcio - 2 days ago

[dead]

BrianFHearn - 2 days ago

[flagged]

spranab - 4 hours ago

[dead]

longtermemory - 3 days ago

[dead]

rockmanzheng - 3 days ago

[dead]

ashersopro - 2 days ago

I LOVE IT

ashersopro - 2 days ago

[flagged]

ashersopro - 2 days ago

[flagged]

tianrking - 3 days ago

[flagged]

ozgurozkan - 4 days ago

[flagged]

ufish235 - 4 days ago

Why is this comment an ad?

Lliora - 4 days ago

12MB for an "AI framework replacement"? That's either brilliant compression or someone's redefining "framework" to mean "toy model that works on my laptop." Show me the benchmarks on actual workloads, not the readme poetry.