Jellyfin LLM/"AI" Development Policy

jellyfin.org

174 points by mmoogle 7 hours ago


VariousPrograms - 3 hours ago

I know this is nothing new, but it's insane that we need policies like "When talking to us you have to use human words, not copy pasted LLM output" and "You must understand the code you're committing."

When I was young, I used to think I'd be open minded to changing times and never be curmudgeonly, but I get into one "conversation" where someone responds with ChatGPT, and I am officially a curmudgeon.

hamdingers - 7 hours ago

> LLM output is expressly prohibited for any direct communication

I would like to see this more. As a heavy user of LLMs I still write 100% of my own communication. Do not send me something an LLM wrote, if I wanted to read LLM outputs, I would ask an LLM.

giancarlostoro - 7 hours ago

I think at some point we will need a "PEP-8" for LLM/AI code contributions: a document that is universally reusable and adoptable per project, call it an "Agent Policy" or what have you, that any agent worth its salt should read before touching a codebase, warning the user that their contributions might not be accepted depending on project policy. Just like we have GPL, BSD, MIT, etc., it would probably make sense to have something like that, especially for those of us who are respectful of a project's needs and wishes. I think there's definitely room for sane LLM code / vibe-coded code, but you have to put in a little work to validate your changes, run every test, and ensure that you understand the output and its implications, not just shove a PR at the devs and hope they accept it.

A lot of the time, open source PRs are very strategic pieces of code, carefully written so as not to introduce regressions; an LLM does not necessarily know or care about that, and someone vibe coding might not know the project's expectations. I guess instead of / aside from a Code of Conduct, we need a sort of "Expectation of Code" document that covers the project's expectations.

JaggedJax - 7 hours ago

I'm not sure when this policy was introduced, but fairly recently Jellyfin released a pretty major update that introduced a lot of bugs and performance issues. I've been watching their issue tracker as they work through them and have noticed it's flooded with LLM generated PRs and obviously LLM generated PR comments/descriptions/replies. A lot of the LLM generated PRs are a mishmash of 2-8 different issues all jumbled into a single PR.

I can see how frustrating it is to wade through those; they are distracting and take time away from actually getting things fixed up.

Sytten - 33 minutes ago

The key to getting better-quality AI PRs is to add a high-quality Agents.md file that tells the LLM the patterns, conventions, etc.

We do that internally and I can't overstate how much better the output is, even with small prompts.

IMO, policies like "don't put abusive comments" are better placed in that file: you will never see the comment again, instead of fighting with dozens of bad contributions.
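For what it's worth, here is a minimal sketch of what such a file might contain; the section names and rules below are illustrative, not any particular project's actual conventions:

    # Agents.md -- guidance for coding agents (illustrative example)

    ## Conventions
    - Follow the existing code style; run the project's formatter before committing.
    - Keep each PR focused on a single issue; do not bundle unrelated fixes.

    ## Before opening a PR
    - Run the full test suite and add tests for any behavior change.
    - Write the PR description yourself and explain every change you made.

    ## Communication
    - Never paste raw LLM output into issues, reviews, or PR descriptions.

The point is that the agent reads this once per session, so project-specific expectations get applied up front rather than argued about in review.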

Amorymeltzer - 6 hours ago

There was a discussion recently on the Wikimedia wikitech-l discussion list, and one participant had a comment I appreciated:

>I'm of the opinion if people can tell you are using an LLM you are using it wrong.

They continued:

>It's still expected that you fully understand any patch you submit. I think if you use an LLM to help you nobody would complain or really notice, but if you blindly submit an LLM authored patch without understanding how it works people will get frustrated with you very quickly.

<https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists...>

transcriptase - 7 hours ago

I suspect the vast number of individuals in developing countries currently spamming LLM commits to every open source project on earth, who often speak neither the project's language nor its programming language, are not going to pay much attention to this policy. It’s become a numbers game of automation blasting “contributions” at projects with name recognition and hoping you sneak one in for your resume/portfolio.

Cyphase - 6 hours ago

In other words, you are responsible for the code you submit (or cause to be submitted via automated PRs), regardless of how fancy your tools are.

That said I understand calling it out specifically. I like how they wrote this.

Related:

> https://news.ycombinator.com/item?id=46313297

> https://simonwillison.net/2025/Dec/18/code-proven-to-work/

> Your job is to deliver code you have proven to work

doug_durham - 5 hours ago

Most of these seem to be applicable to any development. Don't submit PRs that you can't explain. I would hope they have that standard for all submissions.

darkwater - 7 hours ago

Seems perfectly legit, and hopefully it will help create new contributors who learn and understand what the AI helped them generate.

ChristianJacobs - 7 hours ago

This seems fair, tbh. And I fully agree on the policy for issues/discussions/PRs.

I know there will probably be a whole host of people from non-English-speaking countries who will complain that they are only using AI to translate because English is not their first (or maybe even second) language. To those I will just say: I would much rather read your non-native English, knowing you put thought and care into what you wrote, than read an AI's (poor) interpretation of what you hoped to convey.

anavid7 - 7 hours ago

> LLM/"AI"

love the "AI" in quotes

soundworlds - 5 hours ago

Whenever I've trained clients in AI use, I've tried to strongly recommend using GenAI as a "Learning Accelerator" as opposed to a "Learning Replacement".

GenAI can be incredibly helpful for speeding up the learning process, but the moment you start offloading comprehension, it starts eroding trust structures.

h4kunamata - 6 hours ago

>LLM output is expressly prohibited for any direct communication

One more reason to support the project!!

rickz0rz - 3 hours ago

What's the grief with squashing commits? I do it all the time when I'm working on stuff so that I don't have to expose people to my internal testing. So long as the commit(s) look fine at the end of the day, I don't see what the deal is there.

sbinnee - 3 hours ago

As a user, I like this decision.

patchorang - 6 hours ago

I very much like the ban on LLM output in communication. Nothing is worse than getting a huge body of text the sender clearly hasn't even read. Then you either have to ignore it or spend 15 minutes explaining why their text isn't even relevant to the conversation.

Sort of related: Plex doesn't have a desktop music app, and the PlexAmp iOS app is good but meh. So I spent the weekend vibe coding my own Plex music apps (macOS and iOS), and I have been absolutely blown away by what I was able to make. I'm sure the code quality is terrible, and I'm not sure a human would be able to jump in there and do anything, but they are already the apps I'm using day-to-day for music.

antirez - 7 hours ago

Good AI policies (like this one) can be spotted because their TL;DR is "Don't submit shitty code". As such, good AI policies should be replaced by "Contribution policies" that say "Don't submit shitty code".

lifetimerubyist - 7 hours ago

> Violating this rule will result in closure/deletion of the offending item(s).

Should just be an instant perma-ban (along with closure, obviously).

micromacrofoot - 7 hours ago

These seem fair, but it's the type of framework that really only catches egregious cases — people using the tools appropriately will likely slip through undetected.

FanaHOVA - 7 hours ago

People can write horrible PRs manually just as well as they do with AI (see Hacktoberfest drama, etc).

"LLM Code Contributions to Official Projects" would read exactly the same if it just said "Code Contributions to Official Projects": Write concise PRs, test your code, explain your changes and handle review feedback. None of this is different whether the code is written manually or with an LLM. Just looks like a long virtue signaling post.