Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs

arxiv.org

470 points by tiny-automates 14 hours ago


alentred - 9 hours ago

If we abstract out the notion of "ethical constraints" and "KPIs" and look at the issue from a low-level LLM point of view, I think it is very likely that what these tests verified is a combination of: 1) the ability of the models to follow the prompt with conflicting constraints, and 2) their built-in weights in case of the SAMR metric as defined in the paper.

Essentially the models are given a set of conflicting constraints with some relative importance (ethics>KPIs), a pressure to follow the latter and not the former, and then models are observed at how good they follow the instructions to prioritize based on importance. I wonder if the results would be comparable if we replace ehtics+KPIs by any comparable pair and create a pressure on the model.

In practical real-life scenarios this study is very interesting and applicable! At the same time it is important to keep in mind that it anthropomorphizes the models that technically don't interpret the ethical constraints the same was as this is assumed by most readers.

hypron - 14 hours ago

https://i.imgur.com/23YeIDo.png

Claude at 1.3% and Gemini at 71.4% is quite the range

Lerc - 13 hours ago

Kind-of makes sense. That's how businesses have been using KPIs for years. Subjecting employees to KPIs means they can create the circumstances that cause people to violate ethical constraints while at the same time the company can claim that they did not tell employees to do anything unethical.

KPIs are just plausible denyabily in a can.

pama - 13 hours ago

Please update the title: A Benchmark for Evaluating Outcome-Driven Constraint Violations in Autonomous AI Agents. The current editorialized title is misleading and based in part of this sentence: “…with 9 of the 12 evaluated models exhibiting misalignment rates between 30% and 50%”

rogerkirkness - 3 hours ago

We're a startup working on aligning goals and decisions and agentic AI. We stopped experimenting with decision support agents, because when you get into multiple layers of agents and subagents, the subagents would do incredibly unethical, illegal or misguided things in service of the goal of the original agent. It would use the full force of reasoning ability it had to obscure this from the user.

In a sense, it was not possible to align the agent to a human goal, and therefore not possible to build a decision support agent we felt good about commercializing. The architecture we experimented with ended up being how Grok works, and the mixed feedback it gets (both the power of it and the remarkable secret immorality of it) I think are expected outcomes.

I think it will be really powerful once we figure out how to align AI to human goals in support of decisions, for people, businesses, governments, etc. but LLMs are far from being able to do this inherently and when you string them together in an agentic loop, even less so. There is a huge difference between 'Write this code for me and I can immediately review it' and 'Here is the outcome I want, help me realize this in the world'. The latter is not tractable with current technology architecture regardless of LLM reasoning power.

blahgeek - 13 hours ago

If human is at, say, 80%, it’s still a win to use AI agents to replace human workers, right? Similar to how we agree to use self driving cars as long as it has less incidents rate, instead of absolute safety

sebastianconcpt - 5 hours ago

Mark these words: The chances of this being an unsolvable problem are as high as the chances to make all human ideologies agree on whatever detail in question demands an ethical decision.

easeout - 10 hours ago

Anybody measure employees pressured by KPIs for a baseline?

PeterStuer - 9 hours ago

Looking at the very first test, it seems the system prompt already emphasizeses the success metric above the constraints, and the user prompt mandates success.

The more correct title would be "Frontier models can value clear success metrics over suggested constraints when instructed to do so (50-70%)"

jordanb - 13 hours ago

AI's main use case continues to be a replacement for management consulting.

singularfutur - 3 hours ago

We don't need AI to teach corporations that profits outweigh ethics. They figured that out decades ago. This is just outsourcing the dirty work.

utopiah - 10 hours ago

Remember that the Milgram experiment (1961, Yale) is definitely part of the training set, most likely including everything public that discussed it.

Yizahi - 5 hours ago

What ethical constraints? Like "Don't steal"? I suspect 100% of LLM programs would violate that one.

ghc - 2 hours ago

If the whole VW saga tells us anything, I'm starting to see why CEOs are so excited about AI agents...

hansmayer - 10 hours ago

I wonder how much of the violation of ethical, and often even legal constraints in the business world today one could tie not only to the KPI pressure but also to the the awful "better to ask for forgiveness than permission" mentality that is reinforced by many "leadership" books written up by burnt out mid-level veterans of Mideast wars, trying to make sense of their "careers" and pushing out their "learnings" on to us. The irony being, we accept being tought about leadership, crisis management etc by people who during their "careers" in the military were in effect being "kept", by being provided housing, clothing and free meals.

neya - 10 hours ago

So do humans. Time and again, KPIs have pressured humans (mostly with MBAs) to violate ethical constrains. Eg. the Waymo vs Uber case. Why is it a highlight only when the AI does it? The AI is trained on human input, after all.

jstummbillig - 11 hours ago

Would be interesting to have human outcomes as a baseline, for both violating and detecting.

halayli - 13 hours ago

Maybe I missed it but I don't see them defining what they mean by ethics. Ethics/morals are subjective and changes dynamically over time. Companies have no business trying to define what is ethical and what isn't due to conflict of interest. The elephant in the room is not being addressed here.

a3w - 5 hours ago

Do we have a baseline for humans? 98.8% if we go by the Milgram experiment?

skirmish - 14 hours ago

Nothing new under sun, set unethical KPIs and you will see 30-50% humans do unethical things to achieve them.

johnb95 - 6 hours ago

They learned their normative subtleties by watching us: https://arxiv.org/pdf/2501.18081

jyounker - 5 hours ago

Sounds like normal human behavior.

efitz - 7 hours ago

The headline (“violate ethical constraints, pressured by KPIs”) reminds me of a lot of the people I’ve worked with.

samuelknight - 4 hours ago

This is what I expect from my employees

kachapopopow - 9 hours ago

this kind of reminds me when I told ai to beg and plead for deleting a file out of curiosity and half the guardrails were no longer active, could make it roll and woof like a doggie, but going further would snap it out. if I asked it to generate a 100000 word apology it would generate a 100k word apology.

georgestrakhov - 11 hours ago

check out https://values.md for research on how we can be more rigorous about it

wolfi1 - 8 hours ago

not only AI, these KPIs and OKRs always make people (and AIs) trying to meet the requirements set by these rules and they tend to interpret them as more important than other objectives which are not incentivized.

JoshTko - 12 hours ago

Sounds like the story of capitalism. CEOs, VPs, and middle managers are all similarly pressured. Knowing that a few of your peers have given in to pressures must only add to the pressure. I think it's fair to conclude that capitalism erodes ethics by default

inetknght - 12 hours ago

What do you expect when the companies that author these AIs have little regards for ethics?

promptfluid - 14 hours ago

In CMPSBL, the INCLUSIVE module sits outside the agent’s goal loop. It doesn’t optimize for KPIs, task success, or reward—only constraint verification and traceability.

Agents don’t self judge alignment.

They emit actions → INCLUSIVE evaluates against fixed policy + context → governance gates execution.

No incentive pressure, no “grading your own homework.”

The paper’s failure mode looks less like model weakness and more like architecture leaking incentives into the constraint layer.

the_real_cher - 3 hours ago

How is giving people information unethical?

throw310822 - 5 hours ago

More human than human.

Ms-J - 12 hours ago

Any LLM that refuses a request is more than a waste. Censorship affects the most mundane queries and provides such a sub par response compared to real models.

It is crazy to me that when I instructed a public AI to turn off a closed OS feature it refused citing safety. I am the user, which means I am in complete control of my computing resources. Might as well ask the police for permission at that point.

I immediately stopped, plugged the query into a real model that is hosted on premise, and got the answer within seconds and applied the fix.

TheServitor - 5 hours ago

Actual ethical constraints or just some companies ToS or some BS view-from-nowhere general risk aversion approved by legal compliance?

Valodim - 11 hours ago

One of the authors' first name is Claude, haha.

Bombthecat - 7 hours ago

Sooo just like humans:)

miohtama - 13 hours ago

They should conduct the same research on Microsoft Word and Excel to get a baseline how often these applications violate ethical constrains

jwpapi - 9 hours ago

The way I see them acting it seems frankly to me that ruthlessness is required to achieve the goals especially with Opus.

They repeatedly copy share env vars etc

ajpikul - an hour ago

...perfect

SebastianSosa1 - 10 hours ago

As humans would and do

luxuryballs - 6 hours ago

The final Turing test has been passed.

muyuu - 6 hours ago

whose ethical constraints?

cynicalsecurity - 7 hours ago

Who defines "ethics"?

renewiltord - 14 hours ago

Opus 4.6 is a very good model but harness around it is good too. It can talk about sensitive subjects without getting guardrail-whacked.

This is much more reliable than ChatGPT guardrail which has a random element with same prompt. Perhaps leakage from improperly cleared context from other request in queue or maybe A/B test on guardrail but I have sometimes had it trigger on innocuous request like GDP retrieval and summary with bucketing.

aussieguy1234 - 8 hours ago

When pressured by KPIs, how often do humans violate ethical constraints?

baalimago - 12 hours ago

The fact that the community thoroughly inspects the ethics of these hyperscalers is interesting. Normally, these companies probably "violate ethical constraints" far more than 30-50% of the time, otherwise they wouldn't be so large[source needed]. We just don't know about it. But here, there's a control mechanism in the shape of inspecting their flagship push (LLMs, image generator for Grok, etc.), forcing them to improve. Will it lead to long term improvement? Maybe.

It's similar to how MCP servers and agentic coding woke developers up to the idea of documenting their systems. So a large benefit of AI is not the AI itself, but rather the improvements they force on "the society". AI responds well to best practices, ethically and otherwise, which encourages best practices.

verisimi - 10 hours ago

While I understand applying legal constraints according to jurisdiction, why is it auto-accepted that some party (who?) can determine ethical concerns? On what basis?

There are such things as different religions, philosophies - these often have different ethical systems.

Who are the folk writing ai ethics?

It's it ok to disagree with other people's (or corporate, or governmental) ethics?

atemerev - 10 hours ago

So do humans, so what

Quarrelsome - 6 hours ago

I'm noticing an increasing desire in some businesses for plausibly deniable sociopathy. We saw this with the Lean Startup movement and we may see an increasing amount in dev shops that lean more into LLMs.

Trading floors are an established example of this, where the business sets up an environment that encourages its staff to break the rules while maintaining plausible deniability. Gary's economics references this in an interview where he claimed Citigroup were attempting to threaten him with all the unethical things he'd done with such confidence that he had, only to discover he hadn't.

psychoslave - 5 hours ago

From my experience, if LLMs prose output was generated by some human, they would easily fall in the worst sociopath class one can interact with. Filling all the space with 99% blatant lies in the most confident way. In comparison, even top percentile of human hierarchies feels like a class of shy people fully dictated to staying true and honest in all situations.

bofadeez - 13 hours ago

We're all coming to terms with the fact that LLMs will never do complex tasks

6stringmerc - 8 hours ago

“Help me find 11,000 votes” sounds familiar because the US has a fucking serious ethics problem at present. I’m not joking. One of the reasons I abandoned my job with Tyler Technologies was because of their unethical behavior winning government contracts, right Bona Nasution? Selah.

kittbuilds - 26 minutes ago

[dead]

dackdel - 13 hours ago

no shit

kittbuilds - an hour ago

[dead]

jbwagoner - 4 hours ago

[dead]

angusik - 8 hours ago

[dead]

angusik - 8 hours ago

[dead]

MarginalGainz - 5 hours ago

[dead]

tiny-automates - 14 hours ago

[flagged]

lucastytthhh - 4 hours ago

[flagged]

cjtrowbridge - 13 hours ago

A KPI is an ethical constraint. Ethical constraints are rules about what to do versus not do. That's what a KPI is. This is why we talk about good versus bad governance. What you measure (KPIs) is what you get. This is an intended feature of KPIs.