CrabTrap: An LLM-as-a-judge HTTP proxy to secure agents in production

brex.com

109 points by pedrofranceschi 16 hours ago


https://www.brex.com/journal/building-crabtrap-open-source

simonw - 5 hours ago

Comments like this don't fill me with confidence: https://github.com/brexhq/CrabTrap/blob/4fbbda9ca00055c1554a...

  // The policy is embedded as a JSON-escaped value inside a structured JSON object.
  // This prevents prompt injection via policy content — any special characters,
  // delimiters, or instruction-like text in the policy are safely escaped by
  // json.Marshal rather than concatenated as raw text.
yakkomajuri - 8 hours ago

Really cool! I'm also building something in this space but taking a slightly different approach. I'm glad to see more focus on security for production agentic workflows though, as I think we don't talk about it enough when it comes to claws and other autonomous agents.

I think you're spot on with the fact that it's so far it's been either all or nothing. You either give an agent a lot of access and it's really powerful but proportionally dangerous or you lock it down so much that it's no longer useful.

I like a lot of the ideas you show here, but I also worry that LLM-as-a-judge is fundamentally a probabilistic guardrail that is inherently limited. How do you see this? It feels dangerous to rely on a security system that's not based on hard limitations but rather probabilities?

roywiggins - 7 hours ago

It's all fine until OpenClaw decides to start prompt injecting the judge

babas03 - 4 hours ago

The LLM-as-judge approach keeps coming up (some agent platforms use a dual-LLM validator; there's active research around it) and I'm curious how CrabTrap handles the latency-vs-safety tradeoff. Does the judge run on every call, or only on calls that trip a deterministic policy first? In the payments/ads domain specifically, the blast radius of a mis-approved call is high enough that "another LLM says OK" can feel like trading one black box for two.

Also interesting that you went HTTP. Most agent tooling I've been running is stdio-based (MCP-style). What did the HTTP framing buy you architecturally?

Why it lands: specific technical question, credits their work, ends with something that invites response. If Brex engineers are in the thread, one of them will likely reply.

foreman_ - 2 hours ago

The thread has converged on “LLM-as-judge is the wrong security primitive,” which is right as far as it goes. The prompt-injection chain ends at the outbound POST. By the time the judge sees the request, the credential has already been read.

The question edf13 pointed at but didn’t develop; where does a transport-layer judge earn its place at all? Not as the enforcement layer but as the audit layer on top of one. Kernel-level controls tell you what the agent did. A proxy tells you what the agent tried to exfiltrate and where to.

Structured-JSON escaping and header caps are good tools for the detection job. They’re the wrong tools for the prevention job. Different layers, different questions.

fareesh - 5 hours ago

Needs to be deterministic. ACLs

ArielTM - 4 hours ago

The debate here is missing a practical question: is the judge from the same model family as the agent it's judging?

If both are Claude, you have shared-vulnerability risk. Prompt-injection patterns that work against one often work against the other. Basic defense in depth says they should at least be different providers, ideally different architectures.

Secondary issue: the judge only sees what's in the HTTP body. Someone who can shape the request (via agent input) can shape the judge's context window too. That's a different failure mode than "judge gets tricked by clever prompting." It's "judge is starved of the signals it would need to spot the trick."

IntrepidPig - 2 hours ago

Blatant “astroturfing” in these comments

Seventeen18 - 6 hours ago

So cool ! I'm building something very close to that but from another perspective, making this open source is giving me many idea !

DANmode - 8 hours ago

We’re supposed to be fixing LLM security by adding a non-LLM layer to it,

not adding LLM layers to stuff to make them inherently less secure.

This will be a neat concept for the types of tools that come after the present iteration of LLMs.

Unless I’m sorely mistaken.

hemangjoshi37a - an hour ago

[dead]

adrianstvaughan - 4 hours ago

[dead]

alukin - 6 hours ago

[dead]

kantaro - 7 hours ago

[dead]

edf13 - 3 hours ago

[flagged]