Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy

gitlab.redox-os.org

166 points by pjmlp 3 hours ago


ptnpzwqd - 2 hours ago

I think this is a reasonable decision (although maybe increasingly insufficient).

It doesn't really matter what your stance on AI is; the problem is the increased review burden on OSS maintainers.

In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort in your PRs, otherwise they would be easily dismissed at a glance. That is no longer the case, as LLMs can quickly generate PRs that look superficially correct. Effort can still have been put into those PRs, but there is no way to tell without spending time reviewing in more detail.

Policies like this help decrease that review burden by outright rejecting what can be identified as LLM-generated code at a glance. That probably covers a fair bit today, but it might get harder over time, so I suspect we will eventually see a shift towards more trust-based models, where you cannot submit PRs unless you have been approved in advance somehow.

Even if we assume LLMs would consistently generate code of good enough quality, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.

lukaslalinsky - 2 hours ago

I think we will soon get into an interesting situation where project maintainers use LLMs because they truly are useful in many cases, but ban contributors for doing so, because they can't review how well the user guided the LLM.

yla92 - an hour ago

Zig has a similar no-LLM policy:

https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy

throwaway2037 - 2 hours ago

    > any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed
Note the word "clearly". Weirdly, as a native English speaker, I find that this term makes the policy less strict. What about submarine LLM submissions?

I have no beef with Redox OS. I wish them well. This feels like the newest form of OSS virtue signaling.

khalic - 2 hours ago

The LLM ban is unenforceable, and they must know this. Is it meant to scare off the most obvious stuff, and to provide a way to kick people off easily when the evidence is incomplete?

BirAdam - 12 minutes ago

So... my prediction is that they will either have to close off their dev process or start using LLMs to filter contributions in the attempt to detect submissions from LLMs.

hparadiz - 2 hours ago

I am 100% certain that upstream code Redox OS relies on already contains LLM-generated code.

tkel - 2 hours ago

Glad to see they are applying some rigor. I've started removing AI-heavy projects from my dependency tree.

cardanome - an hour ago

I am wondering why people spam OSS with AI slop pull requests in the first place.

Are they really that delusional to think that their AI slop has any value to the project?

Do they think acting like a complete prick and increasing the burden for the maintainers will get them a job offer?

I guess interacting with a sycophantic LLM for hours truly rots the brain.

To spell it out: No, your AI generated code has zero value. Actually less than that because generating it helped destroy the environment.

If the problem could be solved by using an LLM and the maintainers wanted to, they could prompt it themselves and get much better results than you do, because they actually know the code. And no, AI will not help you "get into open source". You don't learn shit from spamming open source projects.

stuaxo - 2 hours ago

We need LLMs that have a certificate of origin.

For instance, a GPL LLM trained only on GPL code, where the source data is all known and the output is all GPL.

It could be done with a distributed effort.

aleph_minus_one - 2 hours ago

While I am more on the AI-hater side, I don't consider this to be a good idea:

"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"

For example:

- What if a non-native English speaker uses the help of an AI model to formulate some issue/task?

- What about having a plugin in your IDE that merely gives syntax and small code-fragment suggestions ("autocomplete on steroids")? Does this policy mean that programmers are also restricted in which IDEs and plugins they are allowed to have installed if they want to contribute?

hagen8 - 2 hours ago

They will sooner or later change that policy, or they will get very slow at keeping up.

The-Ludwig - 2 hours ago

Hm, wondering how to enforce this rule. Rules without any means to enforce them can put honest people at a disadvantage.

dana321 - 25 minutes ago

Generating small chunks of code with LLMs to save time works well. As long as you can read and understand the code, I don't see what the problem is.

algoth1 - an hour ago

What would constitute "clearly LLM-generated", though?

api - an hour ago

AI has the potential to level the playing field somewhat between open source and commercial software and SaaS that can afford armies of expensive paid developers.

Time consuming work can be done quickly at a fraction of the cost or even almost free with open weights LLMs.

flanked-evergl - an hour ago

Spiritually Amish

scotty79 - an hour ago

I see a lot of OSS forks in the future where people just fork to fix their issues with LLMs without going through maintainers. Or even do full LLM rewrites of smaller stuff.

estsauver - 2 hours ago

They're certainly welcome to do whatever they like, and for a microkernel-based OS it might make sense - I think a lot of LLMs probably produce pretty "meh" output for it.

I think part of the battle is actually just getting people to identify which LLM made it, so you can understand whether someone's contribution is good or not. A JavaScript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral Small via the chat app, it's probably just a waste of time.

emperorxanu - 2 hours ago

[flagged]

menaerus - an hour ago

Let someone from the Redox team go read [1], [2], and [3]. If they still insist on keeping their position, then ... well. The industry is being redefined as we speak, and everyone pushing back is really pushing against themselves.

[1] https://www.datadoghq.com/blog/ai/harness-first-agents/

[2] https://www.datadoghq.com/blog/ai/fully-autonomous-optimizat...

[3] https://www.datadoghq.com/blog/engineering/self-optimizing-s...

P.S. I know this will be downvoted to death but I'll leave it here anyway for folks who want to keep their eyes wide open.

baq - 2 hours ago

While I appreciate the morality and ethics of this choice, the current trend means projects going in this direction are making themselves irrelevant (don't bother quipping about how relevant Redox is today, thanks). E.g. top security researchers are now using LLMs to find new RCEs and local privilege escalations; there's no reason why the models couldn't fix these, too - and that's only the security surface.

IOW I think this stance is ethically good, but technically irresponsible.

lifis - 2 hours ago

Not sure how they can expect to build a viable full OS without massive use of LLMs, so this makes no sense.

What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer, must be correct and well written, and the AI usage must be precisely disclosed.

What they should ban is people posting AI-generated code without mentioning it, or replying "I don't know, the AI did it like that" to questions.