Show HN: A MitM proxy to see what your LLM tools are sending

github.com

141 points by jmuncor 11 hours ago


I built this out of curiosity about what Claude Code was actually sending to the API. Turns out, watching your tokens tick up in real-time is oddly satisfying.

Sherlock sits between your LLM tools and the API, showing every request on a live dashboard and auto-saving a copy of every prompt as Markdown and JSON.

catlifeonmars - an hour ago

This tool looks like it unconditionally disables TLS verification for upstream requests.

It shells out to mitmproxy with "--set", "ssl_insecure=true"

This took all of 5 minutes to find reading through main.py on my phone.

https://github.com/jmuncor/sherlock/blob/fb76605fabbda351828...

Edit: In case it’s not clear, you should not use this.
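
For reference, the invocation looks roughly like this (paraphrased from memory, not a verbatim copy of main.py; the addon filename is my guess):

```python
import subprocess

# Roughly the shape of the call in question (names are guesses):
subprocess.run([
    "mitmdump",
    "-s", "sherlock_addon.py",      # hypothetical addon script
    "--set", "ssl_insecure=true",   # disables verification of UPSTREAM certificates
])

# Dropping the ssl_insecure flag keeps mitmproxy's default behaviour
# (verify upstream certs), which is what you want unless you fully
# control the network path between the proxy and the API.
```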

ctippett - 5 hours ago

As someone who just set up mitmproxy to do something very similar, I wish this had been a mitmproxy plugin/add-on instead of a standalone thing.

I know and trust mitmproxy. I'm warier and less likely to use a new, unknown tool that has such broad security/privacy implications. Especially these days with so many vibe-coded projects being released (no idea if that's the case here, but it's a concern I have nonetheless).
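
For anyone who wants to stay on plain mitmproxy, a logging addon is only a few lines. Untested sketch; the host list and output directory are my own choices:

```python
# llm_logger.py -- run with: mitmdump -s llm_logger.py
import time
from pathlib import Path
from mitmproxy import http

OUT = Path("llm_captures")
OUT.mkdir(exist_ok=True)

API_HOSTS = {"api.anthropic.com", "api.openai.com"}  # adjust to taste

def request(flow: http.HTTPFlow) -> None:
    # Save the raw body of every LLM API request as a timestamped file.
    if flow.request.pretty_host in API_HOSTS:
        stamp = time.strftime("%Y%m%d-%H%M%S")
        name = flow.request.path.strip("/").replace("/", "_") or "root"
        (OUT / f"{stamp}-{name}.json").write_text(flow.request.get_text() or "")
```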

EMM_386 - 8 hours ago

This is great.

When I work with AI on large, tricky codebases, I try to set up a collaboration where it hands off to me the things that tend to burn a large number of tokens (excess tool calls, imprecise searches, verbose output, reading large files without a range specified, etc.).

This will help narrow down exactly which of those to keep handling manually to stay within token budgets.

Note: the "yourusername" placeholder in the git clone install instructions should be replaced.

Havoc - 7 hours ago

You don't need to mess with certificates - you can point CC at an HTTP endpoint and it'll happily play along.

If you build a DIY proxy you can also mess with the prompt on the wire: cut out portions of the system prompt, or redirect to a different endpoint based on specific conditions, etc.
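
A minimal version of that is just a forwarder that edits the JSON body before passing it on. Sketch below, assuming the client honours a base-URL override (e.g. ANTHROPIC_BASE_URL=http://localhost:8080) and ignoring streaming responses:

```python
# proxy.py -- DIY rewrite-on-the-wire sketch (non-streaming)
from flask import Flask, Response, request
import requests

UPSTREAM = "https://api.anthropic.com"
app = Flask(__name__)

@app.route("/<path:path>", methods=["POST"])
def forward(path: str):
    payload = request.get_json(force=True)

    # Example rewrite: trim an oversized system prompt before it leaves the box.
    if isinstance(payload.get("system"), str):
        payload["system"] = payload["system"][:2000]

    headers = {k: v for k, v in request.headers
               if k.lower() not in ("host", "content-length")}
    upstream = requests.post(f"{UPSTREAM}/{path}", headers=headers, json=payload)

    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("content-type"))

if __name__ == "__main__":
    app.run(port=8080)
```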

asyncadventure - an hour ago

This is incredibly useful for understanding the black box of LLM API calls. The real-time token tracking is game-changing for debugging why certain prompts are so expensive and optimizing context window usage. Having the markdown/JSON exports of every request makes it trivial to iterate on prompt engineering.

david_shaw - 8 hours ago

Nice work! I'm sure the data gleaned here is illuminating for many users.

I'm surprised that there isn't a stronger demand for enterprise-wide tools like this. Yes, there are a few solutions, but when you contrast the new standard of "give everyone at the company agentic AI capabilities" with the prior paradigm of strong data governance (at least at larger orgs), it's a stark difference.

I think we're not far from the pendulum swinging back a bit. Not just because AI can't be used for everything, but because the governance on widespread AI use (without severely limiting what tools can actually do) is a difficult and ongoing problem.

maxkfranz - an hour ago

Could you use an approach like this the way you'd use a traditional network proxy, to block or sanitise some requests?

E.g. if a request contains confidential information (whatever you define that to be), then block it?
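
I imagine a mitmproxy addon could do it: set a response on the flow and the request never leaves the machine. Rough sketch of what I mean; the patterns are just examples of what "confidential" might be:

```python
# blocker.py -- run with: mitmdump -s blocker.py
import re
from mitmproxy import http

# Example patterns; define "confidential" however you like.
SENSITIVE = re.compile(r"AKIA[0-9A-Z]{16}|-----BEGIN (?:RSA )?PRIVATE KEY-----")

def request(flow: http.HTTPFlow) -> None:
    body = flow.request.get_text() or ""
    if SENSITIVE.search(body):
        # Setting a response here short-circuits the flow: nothing goes upstream.
        flow.response = http.Response.make(
            403, b"blocked: request contained sensitive data",
            {"content-type": "text/plain"},
        )
```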

vitorbaptistaa - 3 hours ago

That looks great! Any plans to allow exports to OpenTelemetry apps like Arize Phoenix? I'm looking for ways to connect my Claude Code usage on the Max plan (no API) to it, and the best I found was https://arize.com/blog/claude-code-observability-and-tracing..., but it seems kind of heavyweight.

zahlman - an hour ago

Or we could just demand agents that offer this level of introspection?

winchester6788 - 3 hours ago

Interesting that you chose to go the MITM route.

https://github.com/quilrai/LLMWatcher

Here is my take on the same thing, but as a Mac app, using BASE_URL to intercept Codex and Claude Code, and hooks for Cursor.

daxfohl - 4 hours ago

Pretty slick. I've been wanting something like this where each capture gets a hash that's recorded in the corresponding commit message. It'd be good for postmortems of unnoticed hallucinations, and might even be useful to "revive" the agent and see if it can help debug the problem it created.
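
Something like this in a commit step would get most of the way there (a sketch; the capture path and trailer name are made up):

```python
# stamp_commit.py -- hash the latest captured transcript into a git commit trailer
import hashlib
import subprocess
from pathlib import Path

capture = Path("sherlock_logs/latest.json")   # hypothetical capture location
digest = hashlib.sha256(capture.read_bytes()).hexdigest()

subprocess.run([
    "git", "commit",
    "-m", "refactor: extract token counter",   # the actual change
    "-m", f"LLM-Transcript: sha256:{digest}",  # trailer ties the commit to the transcript
], check=True)
```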

the_arun - 5 hours ago

I understand this helps if we run our own LLM runtime. What if we use external services like ChatGPT / Gemini (LLM providers)? Shouldn't they provide this feature to all their clients out of the box?

FEELmyAGI - 7 hours ago

Dang, how will Tailscale make any money on its latest vibe-coded feature [0] when others can vibe code it themselves? I guess your SaaS really is someone's weekend vibe prompt.

[0]https://news.ycombinator.com/item?id=46782091

mrbluecoat - 8 hours ago

So is it just a wrapper around mitmproxy?

elphard - 6 hours ago

This is fantastic. Claude doesn't make it easy to inspect what it's sending - which would actually be really useful for refining the project-specific prompts.

alickkk - 8 hours ago

Nice work! Do I need to update the Claude Code config after starting this proxy service?

someguy101010 - 4 hours ago

Does this support Bedrock?

andrewstuart - 7 hours ago

What about SSL/certificates?

lifetimerubyist - 3 hours ago

lmao WTAF is this?

build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/build/lib/sherlock
