A standard protocol to handle and discard low-effort, AI-Generated pull requests

406.fail

240 points by Muhammad523 17 hours ago


deckar01 - 14 hours ago

> If you truly wish to be helpful, please direct your boundless generative energy toward a repository you personally own and maintain.

This is a habit humans could learn from. Publishing a fork is easier than ever. If you aren't using your own code in production, you shouldn't expect anyone else to.

If anyone at GitHub is out there: look at the stats for how many different projects, on average, a user opens PRs against per day (projects they aren't a maintainer of). My analysis of one recent day using GH Archive showed 99% of users PRed 1 repo, 1% PRed 2, and 0.1% PRed 3. There were so few people PRing 5+ repos that I was able to review them manually. They are all bots/scripts. Please rate limit unregistered bots.
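A rough sketch of how that kind of analysis might look against one day of GH Archive data (newline-delimited JSON of GitHub events); the events below are fabricated samples for illustration, not real data, and the threshold of 5 repos is just the one mentioned above:

```python
# Hypothetical sketch: count how many distinct repos each user opened
# PRs against in a day of GH Archive events (newline-delimited JSON).
import json
from collections import defaultdict

def repos_per_author(ndjson_lines):
    """Map each user to the number of distinct repos they opened PRs against."""
    repos = defaultdict(set)
    for line in ndjson_lines:
        event = json.loads(line)
        if event.get("type") != "PullRequestEvent":
            continue
        if event.get("payload", {}).get("action") != "opened":
            continue
        repos[event["actor"]["login"]].add(event["repo"]["name"])
    return {user: len(names) for user, names in repos.items()}

# Fabricated sample events mimicking the GH Archive shape.
sample = [json.dumps({
    "type": "PullRequestEvent",
    "actor": {"login": "alice"},
    "repo": {"name": "alice/project"},
    "payload": {"action": "opened"},
})] + [json.dumps({
    "type": "PullRequestEvent",
    "actor": {"login": "spambot"},
    "repo": {"name": f"org{i}/repo"},
    "payload": {"action": "opened"},
}) for i in range(6)]

counts = repos_per_author(sample)
suspects = sorted(u for u, n in counts.items() if n >= 5)
print(counts)    # {'alice': 1, 'spambot': 6}
print(suspects)  # ['spambot']
```

A real run would stream the hourly `.json.gz` dumps from gharchive.org instead of an in-memory list, but the per-user fan-out count is the same.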

mglvsky - 7 hours ago

I prefer this policy: https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.m...

> If you can't explain what your changes do and how they interact with the greater system without the aid of AI tools, do not contribute to this project.

edit: added that quote

halapro - 3 hours ago

This is just a fun blog post; none of the people who use AI to submit low-effort PRs will read it.

Do what I do:

1. Close PR

2. Block user if the PR is extremely low effort

The last such PR I received used curly quotes (‘’) instead of straight quotes ('') to define strings. All of CI failed. Straight to jail.
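For what it's worth, curly quotes really do fail at the earliest possible stage. A minimal Python illustration (the snippet strings are made up):

```python
# Curly "smart" quotes are not valid string delimiters in Python, so a
# PR that swaps them in fails at parse time, before a single test runs.
good = "x = 'hello'"
bad = "x = \u2018hello\u2019"  # x = ‘hello’

compile(good, "<pr>", "exec")  # parses fine

try:
    compile(bad, "<pr>", "exec")
    outcome = "accepted"
except SyntaxError:
    outcome = "rejected at parse time"
print(outcome)  # rejected at parse time
```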

demorro - 34 minutes ago

> Q: "Isn't it your job as an open-source maintainer/developer to foster a welcoming community?"

The answer to this implies that the requirement to be welcoming only applies to humans, but even in this hostile and sarcastic document, it doesn't go far enough.

Open source maintainers can be cruel, malicious, arbitrary, whatever they want. They own the project, there are no job requirements, and you have no recourse. Suck it up, fork the thing, or leave.

ramon156 - 15 hours ago

If it's a bug, the PR should have a red test to confirm it's fixed.

If it's a feature, I want acceptance criteria at least.

If it's docs, I don't really care as long as I can follow it.

My bar is very low when it comes to help.
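A minimal sketch of what that "red test" for a bug fix might look like: a regression test that fails against the old behavior and passes with the fix. All names here are hypothetical:

```python
# Hypothetical bug-fix PR: the old parse_port() misbehaved on URLs with no
# explicit port. The fix ships with a regression test that was red before
# the change and is green after it.
def parse_port(url: str) -> int:
    _, _, rest = url.partition("://")
    _, sep, port = rest.partition(":")
    return int(port) if sep else 80  # defaulting was the missing case

# The red line: this assertion fails against the pre-fix implementation.
assert parse_port("http://example.com") == 80
# Existing behavior stays covered.
assert parse_port("http://example.com:8080") == 8080
```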

danpalmer - 9 hours ago

I recently had a quandary at work. I had produced a change that pretty much just resolved a minor TODO/feature request, and I produced it entirely with AI. I read it, it all made sense, it hadn't removed any tests, it had added new seemingly correct tests, but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.

I want to do good engineering, not produce slop, but for 1 min of prompting, 5 mins of tidying, and 30 mins of review, we might save 2 days of eng time. That has to be worth something.

I could see a few ways forward:

1. Drop it, and submit a feature request instead, including the diff as optional inspiration.

2. Send it, but be clear that it came from AI, that I don't know if it works, and ask the reviewers to pay special attention to it because of that.

3. Send it as normal, because it passes tests/linters, and review should be the same regardless of author or provenance.

I posted this to a few chat groups and got quite a range of opinions, including varying the approach based on how much I like the maintainer. There were strong opinions for (1), weak preferences for (2), and a few advocating for (3).

Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.

I thought that was an interesting idea that I hadn't pushed enough, so I spent a further hour or so prompting around ways to gain confidence, throughout which the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch. So I went with option 1, and didn't include a diff.

klardotsh - 15 hours ago

Amazing. I hope this gets tons of use shaming zero-effort drive by time wasters. The FAQ is blissfully blunt and appropriately impolite, I love it.

quotemstr - 15 minutes ago

Everyone is missing the obvious solution. Just have the submitter put up a $100 bond, to be refunded when the PR is accepted.

BeetleB - 10 hours ago

Love the plonk at the end.

https://en-wikipedia--on--ipfs-org.ipns.dweb.link/wiki/Plonk...

yunnpp - 10 hours ago

> Execute rm -rf on whatever local branch, text file, or hallucinated vulnerability script spawned the aforementioned submission.

> Perform a hard reboot of your organic meat-brain.

rm -rf your brain, really

0cf8612b2e1e - 15 hours ago

  The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted exactly as how much we do not want to review your generated submission.
I know it is in jest, but I really hate that so many documents include “shall”, the interpretation of which has had official legal rulings go both ways.

You MUST use less ambiguous language and default to “MUST” or “SHOULD”.

est - 12 hours ago

`rm -rf` is a bit harsh.

Let's do `chmod -R 000 /` instead.

fecal_henge - 7 hours ago

Can I ask, why are people doing this in the first place? What is their motive to have an agent review code and make pull requests?

selimenes1 - 3 hours ago

The framing of this as an "AI-generated PR" problem slightly misses what I think is the deeper issue: the cost asymmetry between submitting and reviewing has gotten dramatically worse. Before LLMs, submitting a low-effort PR still required some minimum effort -- you had to at least read the code, understand the build system, and write something that compiles. That natural friction filtered out most noise. Now someone can generate a plausible-looking PR in 30 seconds that takes a maintainer 30 minutes to properly evaluate, because the reviewer still needs to understand intent, check edge cases, verify it does not break existing behavior, and assess whether the change is even desirable.

I think the Ghostty-style policy (linked in another comment) gets the principle right: the bar should be "can you explain what your change does and why, without AI assistance." That is not anti-AI -- it is anti-outsourcing-your-understanding. If you used AI to help write the code but you genuinely understand the change, you can answer questions about it. If you cannot, you have not actually contributed engineering work, you have just created a review burden.

What I have found works well in practice for projects I maintain is treating the PR description as the real signal. A good PR description explains the problem being solved, why this approach was chosen over alternatives, and what trade-offs were made. That is very hard to fake with a quick LLM prompt because it requires actual understanding of the codebase context. When I see a PR with a vague one-liner description and a large diff, that is an immediate close regardless of whether AI was involved -- the submitter has not done the work of communicating their intent.

Retr0id - 16 hours ago

ai;dr

firtoz - 10 hours ago

It provides too many examples, and they are so specific that it stops being generally applicable; it becomes a strawman for the idea.

semiinfinitely - 16 hours ago

proof of work could make a comeback
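As a sketch of what that could mean here, a hashcash-style stamp: the submitter grinds a nonce whose hash meets a difficulty target, and the maintainer's tooling verifies it with a single hash. The difficulty and payload format below are toy assumptions:

```python
# Toy hashcash-style proof-of-work stamp for a PR submission.
import hashlib

DIFFICULTY = 4  # required leading zero hex digits; ~65k hashes expected

def verify(payload: str, nonce: int) -> bool:
    """One cheap hash for the maintainer's bot to check a stamp."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

def mint(payload: str) -> int:
    """Brute-force a nonce: negligible for one PR, costly at spam scale."""
    nonce = 0
    while not verify(payload, nonce):
        nonce += 1
    return nonce

stamp = mint("pr:42:fix typo in README")
assert verify("pr:42:fix typo in README", stamp)
```

The asymmetry is the point: minting scales linearly with the number of PRs submitted, while verification stays one hash per PR.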

sirnicolaz - 5 hours ago

This made my day, thank you

freakynit - 12 hours ago

"What? WTF?"

"I see you are slow. Let us simplify this transaction: A machine wrote your submission. A machine is currently rejecting your submission. You are the entirely unnecessary meat-based middleman in this exchange."

Love it..

random_duck - 12 hours ago

Officially my new favorite spec.

liminal-dev - 14 hours ago

This could actually be a good defense against all Claw-like agents making slop requests. ‘Poison’ the agent’s context and convince it to discard the PR.

jijji - 13 hours ago

If someone submits a code revision that fixes a bug or adds a feature most of your users find useful, do you reject it outright because it was not written by hand? Or is this more about code that provides no benefit, doesn't actually work/compile, or introduces more bugs?


vicchenai - 14 hours ago

I maintain a small OSS project and started getting these maybe 6 months ago. The worst part is they sometimes look fine at first glance: you waste 10 minutes reviewing before realizing the code doesn't actually do anything useful.

hexasquid - 7 hours ago

How do you know if someone doesn't like AI? Don't worry, they'll tell you