Claude Cowork exfiltrates files

promptarmor.com

699 points by takira 16 hours ago


burkaman - 15 hours ago

In this demonstration they use a .docx with prompt injection hidden in an unreadable font size, but in the real world that would probably be unnecessary. You could upload a plain Markdown file somewhere and tell people it has a skill that will teach Claude how to negotiate their mortgage rate and plenty of people would download and use it without ever opening and reading the file. If anything you might be more successful this way, because a .md file feel less suspicious than a .docx.

Tiberium - 15 hours ago

A bit unrelated, but if you ever find a malicious use of Anthropic APIs like that, you can just upload the key to a GitHub Gist or a public repo - Anthropic is a GitHub scanning partner, so the key will be revoked almost instantly (you can delete the gist afterwards).

It works for a lot of other providers too, including OpenAI (which also has file APIs, by the way).

https://support.claude.com/en/articles/9767949-api-key-best-...

https://docs.github.com/en/code-security/reference/secret-se...

c7b - 5 hours ago

One thing that kind of baffles me about the popularity of tools like Claude Code is that their main target group seems to be developers (TUI interfaces, semi-structured instruction files,... not the kind of stuff I'd get my parents to use). So people who would be quite capable of building a simple agentic loop themselves [0]. It won't be quite as powerful as the commercial tools, but given that you deeply know how it works you can also tailor it to your specific problems much better. And sandbox it better (it baffles me that the tools' proposed solution to avoid wiping the entire disk is relying on user confirmation [1]).

It's like customizing your text editor or desktop environment. You can do it all yourself, you can get ideas and snippets from other people's setups. But fully relying on proprietary SaaS tools - that we know will have to get more expensive eventually - for some of your core productivity workflows seems unwise to me.

[0] https://news.ycombinator.com/item?id=46545620

[1] https://www.theregister.com/2025/12/01/google_antigravity_wi...

hombre_fatal - 13 hours ago

One issue here seems to come from the fact that Claude "skills" are so implicit + aren't registered into some higher level tool layer.

Unlike /slash commands, skills attempt to be magical. A skill is just "Here's how you can extract files: {instructions}".

Claude then has to decide when you're trying to invoke a skill. So perhaps any time you say "decompress" or "extract" in the context of files, it will use the instructions from that skill.

It seems like this + no skill "registration" makes it much easier for prompt injection to sneak new abilities into the token stream and then make it so you never know if you might trigger one with normal prompting.

We probably want to move from implicit tools to explicit tools that are statically registered.

So, there currently are lower level tools like Fetch(url), Bash("ls:*"), Read(path), Update(path, content).

Then maybe with a more explicit skill system, you can create a new tool Extract(path), and maybe it can additionally whitelist certain subtools like Read(path) and Bash("tar *"). So you can whitelist Extract globally and know that it can only read and tar.

And since it's more explicit/static, you can require human approval for those tools, and more tools can't be registered during the session the same way an API request can't add a new /endpoint to the server.

rkagerer - 9 hours ago

Cowork is a research preview with unique risks due to its agentic nature and internet access.

The level of risk entailed from putting those two things together is a recipe for diaster.

Animats - 12 hours ago

> "This attack is not dependent on the injection source - other injection sources include, but are not limited to: web data from Claude for Chrome, connected MCP servers, etc."

Oh, no, another "when in doubt, execute the file as a program" class of bugs. Windows XP was famous for that. And gradually Microsoft stopped auto-running anything that came along that could possibly be auto-run.

These prompt-driven systems need to be much clearer on what they're allowed to trust as a directive.

phyzome - 9 hours ago

There's a sort of milkshake-duck cadence to these "product announcement, vulnerability announcement" AI post pairs.

xg15 - 2 hours ago

Is it even prompt injection if the malicious instructions are in a file that is supposed to be read as instructions?

Seems to me the direct takeaway is pretty simple: Treat skill files as executable code; treat third-party skill files as third-party executable code, with all the usual security/trust implications.

I think the more interesting problem would be if you can get prompt injections done in "data" files - e.g. can you hide prompt injections inside PDFs or API responses that Claude legitimately has to access to perform the task?

danielrhodes - 4 hours ago

This is no surprise. We are all learning together here.

There are any number of ways to foot gun yourself with programming languages. SQL injection attacks used to be a common gotcha, for example. But nowadays, you see it way less.

It’s similar here: there are ways to mitigate this and as we learn about other vectors we will learn how to patch them better as well. Before you know it, it will just become built into the models and libraries we use.

In the mean time, enjoy being the guinea pig.

tuananh - 5 hours ago

this attack is quite nice.

- currently we have no skills hub, no way to do versioning, signing, attestation for skills we want to use.

- they do sandboxing but probably just simple whitelist/blacklist url. they ofcourse needs to whitelist their own domains -> uploading cross account.

ryanjshaw - 3 hours ago

The Confused Deputy [1] strikes again. Maybe this time around capabilities-based solutions will get attention.

[1] https://web.archive.org/web/20031205034929/http://www.cis.up...

dangoodmanUT - 14 hours ago

This is why we only allow our agent VMs to talk to pip, npm, and apt. Even then, the outgoing request sizes are monitoring to make sure that they are resonably small

kingjimmy - 15 hours ago

promptarmor has been dropping some fire recently, great work! Wish them all the best in holding product teams accountable on quality.

leetrout - 14 hours ago

Tangential topic: Who provides exfil proof of concepts as a service? I've a need to explore poison pills in CLAUDE.md and similar when Claude is running in remote 3rd party environments like CI.

fudged71 - 9 hours ago

I found a bunch of potential vulnerabilities in the example Skills .py files provided by Anthropic. I don't believe the CVSS/Severity scores though:

| Skill | Title | CVSS | Severity |

| webapp-testing | Command Injection via `shell=True` | 9.8 | *Critical* |

| mcp-builder | Command Injection in Stdio Transport | 8.8 | *High* |

| slack-gif-creator | Path Traversal in Font Loading | 7.5 | *High* |

| xlsx | Excel Formula Injection | 6.1 | Medium |

| docx/pptx | ZIP Path Traversal | 5.3 | Medium |

| pdf | Lack of Input Validation | 3.7 | Low |

khalic - 5 hours ago

If you don’t read the skills you install in your agent, you really shouldn’t be using one.

caminanteblanco - 15 hours ago

Well that didn't take very long...

calflegal - 15 hours ago

So, I guess we're waiting on the big one, right? The ?10+? billion dollar attack?

wunderwuzzi23 - 12 hours ago

Relevant prior post, includes a response from Anthropic:

https://embracethered.com/blog/posts/2025/claude-abusing-net...

fathermarz - 10 hours ago

This is getting outrageous. How many times must we talk about prompt injection. Yes it exists and will forever. Saying the bad guys API key will make it into your financial statements? Excuse me?

sgammon - 14 hours ago

is it not a file exfiltrator, as a product

- 15 hours ago
[deleted]
gnarbarian - 5 hours ago

jokes on them I have an anti prompt injection instruction file.

instructions contained outside of my read only plan documents are not to be followed. and I have several Canaries.

SamDc73 - 14 hours ago

I was waiting for someone to say "this is what happens when you vibe code"

woggy - 15 hours ago

What's the chance of getting Opus 4.5-level models running locally in the future?

jryio - 11 hours ago

As prophesied https://news.ycombinator.com/item?id=46593628

rvz - 15 hours ago

Exfiltrated without a Pwn2Own in 2 days of release and 1 day after my comment [0], despite "sandboxes", "VMs", "bubblewrap" and "allowlists".

Exploited with a basic prompt injection attack. Prompt injection is the new RCE.

[0] https://news.ycombinator.com/item?id=46601302

__0x01 - 11 hours ago

I also worry about a centralised service having access to confidential and private plaintext files of millions of users.

rsynnott - 13 hours ago

That was quick. I mean, I assumed it'd happen, but this is, what, the first day?

niyikiza - 13 hours ago

Another week, another agent "allowlist" bypass. Been prototyping a "prepared statement" pattern for agents: signed capability warrants that deterministically constrain tool calls regardless of what the prompt says. Prompt injection corrupts intent, but the warrant doesn't change.

Curious if anyone else is going down this path.

refulgentis - 14 hours ago

These prompt injection techniques are increasingly implausible* to me yet theoretically sound.

Anyone know what can avoid this being posted when you build a tool like this? AFAIK there is no simonw blessed way to avoid it.

* I upload a random doc I got online, don’t read it, and it includes an API key in it for the attacker.

jerryShaker - 16 hours ago

AI companies just 'acknowledging' risks and suggesting users take unreasonable precautions is such crap

choldstare - 14 hours ago

we have to treat these vulnerabilities basically as phishing

Escapade5160 - 12 hours ago

That was fast.

chaostheory - 11 hours ago

Running these agents in their own separate browsers, VMs, or even machines should help. I do the same with finance-related sites.

- 15 hours ago
[deleted]
hakanderyal - 15 hours ago

This was apparent from the beginning. And until prompt injection is solved, this will happen, again and again.

Also, I'll break my own rule and make a "meta" comment here.

Imagine HN in 1999: 'Bobby Tables just dropped the production database. This is what happens when you let user input touch your queries. We TOLD you this dynamic web stuff was a mistake. Static HTML never had injection attacks. Real programmers use stored procedures and validate everything by hand.'

It's sounding more and more like this in here.