A standard protocol to handle and discard low-effort, AI-Generated pull requests
406.fail
240 points by Muhammad523 17 hours ago
> If you truly wish to be helpful, please direct your boundless generative energy toward a repository you personally own and maintain.
This is a habit humans could learn from. Publishing a fork is easier than ever. If you aren’t using your own code in production you shouldn’t expect anyone else to.
If anyone at GitHub is out there: look at the stats for how many different projects, on average, a user PRs per day (projects they aren't a maintainer of). My analysis of a recent day using gharchive showed 99% of users opened PRs against 1 repo, 1% against 2, and 0.1% against 3. There are so few people PRing 5+ repos that I was able to review them manually. They are all bots/scripts. Please rate limit unregistered bots.
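For anyone who wants to reproduce the analysis, here is a minimal sketch of the kind of gharchive pass described above, assuming the public hourly dumps at data.gharchive.org; the date is arbitrary, and "repo owner differs from actor" is my crude stand-in for "not a maintainer":

    # Count how many distinct repos each user opened PRs against in one day,
    # skipping repos the user owns (a rough proxy for "not a maintainer").
    import gzip
    import json
    import urllib.request
    from collections import Counter, defaultdict

    repos_by_user = defaultdict(set)

    for hour in range(24):
        url = f"https://data.gharchive.org/2025-01-15-{hour}.json.gz"
        with urllib.request.urlopen(url) as resp:
            for line in gzip.GzipFile(fileobj=resp):
                event = json.loads(line)
                if event["type"] != "PullRequestEvent":
                    continue
                if event["payload"].get("action") != "opened":
                    continue
                actor = event["actor"]["login"]
                repo = event["repo"]["name"]        # "owner/name"
                if repo.split("/")[0] == actor:     # skip their own repos
                    continue
                repos_by_user[actor].add(repo)

    # Distribution: how many users PR'd 1 repo, 2 repos, 3 repos, ...
    dist = Counter(len(repos) for repos in repos_by_user.values())
    for n_repos, n_users in sorted(dist.items()):
        print(f"{n_repos} distinct repos: {n_users} users")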
I prefer this policy: https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.m...
> If you can't explain what your changes do and how they interact with the greater system without the aid of AI tools, do not contribute to this project.
edit: added that quote
Good idea, though I'm not sure how to enforce it. You can ask an AI for that and then rewrite it in your own words.
This is just a fun blog post; none of the people who use AI to submit low-effort PRs will read this.
Do what I do:
1. Close PR
2. Block user if the PR is extremely low effort
The last such PR I received used ‘’ instead of '' to define strings. The entirety of CI failed. Straight to jail.
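A tiny CI gate catches that class of breakage before anyone spends review time on it. A sketch, with the file glob and quote set as assumptions to widen for your stack:

    # Fail the build if any source file contains typographic quotes where
    # code expects ASCII ones.
    import pathlib
    import sys

    SMART_QUOTES = {"\u2018", "\u2019", "\u201c", "\u201d"}  # curly quotes

    problems = []
    for path in pathlib.Path(".").rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="replace")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SMART_QUOTES & set(line):
                problems.append(f"{path}:{lineno}: typographic quote in source")

    if problems:
        print("\n".join(problems))
        sys.exit(1)  # non-zero exit fails the CI step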
> Q: "Isn't it your job as an open-source maintainer/developer to foster a welcoming community?"
The answer to this implies that the requirement to be welcoming only applies to humans, but even in this hostile and sarcastic document, it doesn't go far enough.
Open source maintainers can be cruel, malicious, arbitrary, whatever they want. They own the project, there is no job requirements, you have no recourse. Suck it up, fork the thing, or leave.
If it's a bug, the PR should include a test that was red before the fix, to confirm it's fixed (see the sketch just below)
If it's a feature, I want acceptance criteria at least
If it's docs, I don't really care as long as I can follow it.
My bar is very low when it comes to help
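To make the "red line" bar concrete, a minimal sketch with an entirely hypothetical function and bug: the point is that the test fails against the pre-fix code and passes against the fixed version shipped in the same PR.

    # Hypothetical: the pre-fix parse_port() did int(value) and raised
    # ValueError on "8080/tcp". The first test is red before the fix,
    # green after; the second guards the existing behavior.
    def parse_port(value: str) -> int:
        return int(value.split("/")[0])

    def test_parse_port_accepts_protocol_suffix():
        assert parse_port("8080/tcp") == 8080

    def test_parse_port_plain_value_still_works():
        assert parse_port("8080") == 8080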
I recently had a quandary at work. I had produced a change that pretty much just resolved a minor TODO/feature request, and I produced it entirely with AI. I read it, it all made sense, it hadn't removed any tests, it had added new seemingly correct tests, but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.
I want to do good engineering, not produce slop, but for 1 min of prompting, 5 mins of tidying, and 30 mins of review, we might save 2 days of eng time. That has to be worth something.
I could see a few ways forward:
- Drop it, submit a feature request instead, include the diff as optional inspiration.
- Send it, but be clear that it came from AI, I don't know if it works, and ask the reviewers to pay special attention to it because of that...
- Or send it as normal, because it passes tests/linters, and review should be the same regardless of author or provenance.
I posted this to a few chat groups and got quite a range of opinions, including approaches that varied with how much I like the maintainer. Strong opinions for (1), weak preferences for (2), and a few advocating (3).
Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.
I thought that was an interesting idea that I hadn't pushed enough, so I spent a further hour or so prompting around ways to gain confidence, throughout which the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch. So I went with option 1, and didn't include a diff.
Here’s what you could do if you somehow found yourself with an LLM-generated change to a codebase implementing a feature you want, and you wanted to do the most to expedite the implementation of that feature without disrespecting and alienating maintainers:
1. Go through all changes, understand what changed and how it solves the problem.
2. Armed with that understanding, write (by hand) a high-level summary of what can be done (and why) to implement your feature.
3. Write a regular feature request, and include that summary in it (as an appendix).
Not long ago I found myself on the receiving end of a couple of LLM-generated PRs and partly LLM-generated issue descriptions with purported solutions. Both were a bit of a waste of time.
The worst thing about the PRs is when you cannot engage in a good-faith, succinct, quick “why” sort of discussion with the submitter as you go through the changes. Also, when a PR fails to notice a large-scale pre-existing pattern that I would want followed to reduce mental overhead, and instead writes something completely new, I have to discard it.
For issues and feature requests, there was some “investigation” the submitter thought would be helpful to me. It ended up a bit misleading, and at the same time I noticed that people may spend the same total amount of effort on the write-up, except now part of that effort goes towards their interaction with some LLM. So I asked them to just focus on describing the issue from their human perspective; if they feel they have extra time and energy, they should put more into that instead.
If it happens at work, I obviously still get paid to handle this, but I would have to deprioritise submissions from people who ignore my requests.
> Go through all changes, understand what changed and how it solves the problem.
GP has said that they can't do this, since they're unfamiliar with the language and that specific part of the codebase. Their best bet AIUI is (1) ask the AI agent to reverse engineer the diff into a high-level plan that they are qualified to evaluate and revise, if feasible, so that they can take ownership of it and make it part of the feature request, and (2) attach the AI-generated code diff to the feature req as a mere convenience, labeling it very clearly as completely unrevised AI slop that simply appears to address the problem.
Aside from anything else, you have good engineering instincts, and I wish more people in the industry were like you.
Thanks, doing my best. It's one of the reasons I want to get more of my AI-skeptical colleagues onboard with AI development. They're skeptical for good reasons, but right now so much progress is being driven by those who lack skills, taste, or experience. I understand those with lots of experience being skeptical at the claims, I like to think I am too, but I think there's clearly something here, and I want more people who are skeptical to shape the direction and future of these technologies.
Being a skeptic doesn't make one an irrational hater (surely such people exist, and they might be noisy and taint all skeptics as such).
I am learning how to make good use of agent assisted engineering and while I'm positively impressed with many things they can do, I'm definitely skeptical about various aspects of the process:
1. Quality of the results
2. Maintainability
3. Overall time saved
There are still open problems, because we're introducing a significant change in the tooling while keeping the rest of the process unchanged (often for good reasons). For example, consider the imbalance in code review cost: some people produce tons of changes, and the rest of the team drowns under the review burden.
This new wave of tooling is undoubtedly going to transform the way software is developed, but I think people jump too quickly to the conclusion that we've already figured out exactly what that is going to look like.
I'd say that the worst thing that can happen to a developer using Claude etc is detachment from the code.
At some point the code starts to be "not yours"; you don't recognise it anymore. You don't have a connection to it. It's like going to work at a different company every day...
To be entirely fair, "sorta working, solving a problem but not really all that great for the rest of the codebase" PRs are a human thing too.
The problem is AI generating them en masse, and frankly most people put in far less effort than even your first paragraph describes, blindly pushing stuff they have not even read, let alone understood.
> Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.
Well, it's not terrible at just getting your bearings in a codebase; the most productive use I've gotten out of it is treating it as "turbo grep" to look around existing codebases and figure things out
> Interestingly, the pro-AI folks almost universally doubled down and said that I should use AI more to gain more confidence – ask how can I test it, how can we verify it, etc – to move my confidence instead of changing how review works.
I think this is a good suggestion, and it's what I usually do. If, at work, Claude generated something I don't fully understand, and what it generated works as expected when experimentally tested, I ask it "why did you put this here? what is this construct for? how will this handle this edge case?" and specifically tell it not to modify anything, just answer the questions. This way I can process its output "at human speed" and actually make it mine.
Do you use the library? If yes, test it in prod or even staging with your patch, then submit the review.
Unfortunately not possible in this case for technical reasons, not a library in the traditional sense, significant work to fork, etc. This is in the Google monorepo.
> I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.
> I want to do good engineering, not produce slop, but for 1 min of prompting, 5 mins of tidying, and 30 mins of review, we might save 2 days of eng time.
I don't really understand where the "2 days of engineering time" comes from.
What exactly would prevent someone who does know the codebase from doing "1 min of prompting, 5 mins of tidying, and 30 mins of review", but then actually understanding whether the changes make sense or not?
More general question: why do so many slopposters act like they are the only ones who have access to a genAI tool? Trust me, I also have access to all this stuff, so if I wanted to read a bunch of LLM-slop I could easily go and prompt it myself, there is no need to send it to me.
Related link: https://claytonwramsey.com/blog/prompt/ (hn discussion: https://news.ycombinator.com/item?id=43888803 )
>I thought that was an interesting idea that I hadn't pushed enough, so I spent a further hour or so prompting around ways to gain confidence, throughout which the AI "fixed" so many things to "improve" the code that I completely lost all confidence in the change because there were clearly things that were needed and things that weren't, and disentangling them was going to be way more work than starting from scratch.
I feel this so much. In my opinion, all of the debate around accepting AI generated stuff can be boiled down to one attribute, which is effort. Personally, I really dislike AI generated videos and blogs for example, and will actively avoid them because I believe I "deserve more effort".
Similarly, for AI-generated PRs: I roll my eyes when I see an AI PR, and I'm quicker to dismiss it than a human-written one. In my opinion, if the maintainers cannot hold the human accountable for the AI-generated code, then it shouldn't be accepted. That involves asking questions and expecting the human to respond.
I don't know if we should gatekeep based on effort or not. Obviously the downside is, you reduce the "features shipped" metric a lot if you expect the human to put in the same amount of effort, or a comparable amount of effort as they would've done otherwise. Despite the downside, I'm still pro gatekeeping based on effort (It doesn't help that most of the people trying to convince otherwise are using the very same low effort methods that they're trying to convince us to accept). But, as in most things, one must keep an open mind.
> but I did not feel that I knew the codebase enough to be able to actually assess the correctness of the change.
The good engineering approach is to verify that the change is correct. More prompts to the AI do nothing; instead, play with the code, try to break it, and write more tests yourself.
I exhausted my ability to do this (without AI). It was a codebase I don't know, in a language I don't know, solving a problem that I have a very limited viewpoint of.
These are all reasons why pre-AI I'd never have bothered to even try this, it wouldn't be worth my time.
If you think this is therefore "bad engineering", maybe that's true! As I said, I ended up discarding the change because I wasn't happy with it.
> I exhausted my ability to do this (without AI). It was a codebase I don't know, in a language I don't know, solving a problem that I have a very limited viewpoint of.
And that's the critical point! I think it's fine to send the diff in; and clearly mark it as AI / vibe-coded. (Along with your prompts.)
Amazing. I hope this gets tons of use shaming zero-effort drive by time wasters. The FAQ is blissfully blunt and appropriately impolite, I love it.
While I am with you on hoping, someone shamelessly PRing slop just is not going to feel shame when one of their efforts fails. It's like being mean to a phone scammer: they just hang up and do it again.
It's actually a valuable signal to the phone scammer if you're mean, because that means they can stop wasting their own effort of scamming you, and call somebody else.
No. When people attend courses, paying money for the privilege no less, and get told "Now open a pull request", they don't care about your project; they care about getting their instructor to say they've done a good job.
I think some folks genuinely don’t realize how selfish and destructive they’re being or at least believe they help more than they hinder. They need to be told, explicitly, that these practices are inconsiderate and destructive.
We need to develop some ethics, or at least "community standards" (as they may vary significantly between different use cases), around some of the things this essay talks about. I know I've really been pondering the mismatch between human attention and the ability of LLMs to generate things that consume human attention.
We are still mostly running on the inertia of an era when a PR required a certain amount of human attention to generate 500 lines of proposed changes, and even then, nothing stopped such a PR from being garbage. But at least the rate of such garbage PRs was bounded by the supply of that very specific kind of developer who was A: capable of writing 500 lines of diffs in the first place, but B: didn't realize these particular 500 lines were a bad idea. Certainly not an empty set, but also certainly much more restricted than "everyone with the ability to set up a code bot and type something".
Code used to be rare, and therefore worth a lot. Now it's not rare. 1500 lines of 2026 code is not the same as 1500 lines of 2006 code. The ceiling on the value of a contribution is how much work the user put in and how high quality that work is. If "the work the user put in" is 30 seconds typing a prompt, that's the value, no matter how many lines of code some AI expanded it into. I'd honestly rather have an Issue filed with your proposed prompt in it than the actual output of your AI, if that's all you're going to put into the PR. There's a lot I can do with that prompt that may make it better; it's way harder to do that with the code.
You know, stuff like that. That might actually be a useful counter to some of these slop posts, especially things that are something that may be a good idea but need someone to treat the prompt itself as a starting point rather than the code. Maybe that's a decent response that's somewhat less hostile; close out these PRs with a request to file an Issue with the prompt instead.
Somewhere there is a discord full of vibe coders crying to each other that people won't let them contribute to open source projects.
I've yet to see a slopper show any kind of shame.
I see plenty of well meaning people use ChatGPT and think they’re being helpful. You’re better off with patience and polite explanation than assuming they’re all cynical/selfish assholes trying to cut corners. Some people just get excited and don’t really think about what they’re doing. It doesn’t excuse the behavior, but you should at least try to explain it to them once. Never know when you might educate someone.
I've seen a variety of approaches used (I'm not usually the one doing the confronting) but I still haven't seen any shame, etc. Which is weird, because it's not like it's one monolithic group? But it's still what I've seen.
It might be that people have their change of heart more privately, of course.
What are you expecting? Someone to go on the Internet and apologize or otherwise express their genuine shame and desire to change?
I think you can both be right. Someone posting their first slop PR deserves a different response than the spammers.
Unless they lie about it.
Exactly. Set up guardrails to protect your repos, clearly communicate rules, etc. If someone is a problem, you show them the door.
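As one concrete guardrail, here is a sketch using the documented GitHub REST API that closes open PRs from very new accounts. The repo name, token variable, and 30-day cutoff are assumptions for illustration, not recommended values:

    # Close open PRs whose authors' accounts are younger than MIN_AGE.
    import datetime
    import os

    import requests

    REPO = "example-org/example-repo"   # hypothetical
    TOKEN = os.environ["GITHUB_TOKEN"]
    HEADERS = {"Authorization": f"Bearer {TOKEN}",
               "Accept": "application/vnd.github+json"}
    MIN_AGE = datetime.timedelta(days=30)
    NOW = datetime.datetime.now(datetime.timezone.utc)

    prs = requests.get(f"https://api.github.com/repos/{REPO}/pulls",
                       headers=HEADERS, params={"state": "open"}).json()
    for pr in prs:
        user = requests.get(pr["user"]["url"], headers=HEADERS).json()
        created = datetime.datetime.fromisoformat(
            user["created_at"].replace("Z", "+00:00"))
        if NOW - created < MIN_AGE:
            requests.patch(pr["url"], headers=HEADERS,
                           json={"state": "closed"})
            print(f"closed #{pr['number']} from {user['login']}")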
Everyone is missing the obvious solution. Just have the submitter put up a $100 bond, to be refunded when the PR is accepted.
Love the plonk at the end.
https://en-wikipedia--on--ipfs-org.ipns.dweb.link/wiki/Plonk...
> Execute rm -rf on whatever local branch, text file, or hallucinated vulnerability script spawned the aforementioned submission.
> Perform a hard reboot of your organic meat-brain.
rm -rf your brain, really
The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted exactly as how much we do not want to review your generated submission.
I know it is in jest, but I really hate that so many documents include “shall”; its interpretation has had official legal rulings going both ways. You MUST use less ambiguous language and default to “MUST” or “SHOULD”.
Around 1990 I attended ISO/JTC1 meetings generating standards for data communication. I still recall my surprise at the heated arguments over these words between the UK and the US delegations. (I'm from Denmark.) In particular, 'shall' and 'should' meant different things in British and American English. ISO's first standard, ISO 1, states that ISO Standards shall be written in English, so we had to do that, the US delegation too. Similarly, Scott Bradner stated in RFC 2119 how American conventions should be followed for future IETF STDs.
So I'm confident that the word 'shall' has a strong meaning in British English; whether it does too in American legalese I cannot tell.
Right. I think when these appear in some documentation related to computing, they should also mention whether it is using these words in compliance with RFC 2119 or RFC 6919.
Must is a strict requirement, no flexibility. Shall is a recommendation or a duty, you should do it. You must put gas in the car to drive it. You shall get an oil change every 6000 miles.
Well then you MUST reread RFC 2119, because your version of SHALL differs from the spec which says SHALL is equivalent to MUST and a hard requirement.
Perfectly making my point. Shall has no business being in a spec when you have unambiguous alternatives.
Many legal documents use "may" to say you must. That's why I hate legalese...
Hmm, that's annoying, I'd take may as "CAN"
"may only" and "may not", however, are unambiguously hard limits, which makes things even more confusing.
"may only" means your pleasure is limited only to what options the agreement allows, which is a polite way of saying can not.
Legal documents use "may" to allow for something. Usually it only needs to be allowed so that it can happen. So I read terms of service and privacy policies like all "may" is "will". "Your data may (will) be shared with (sold to) one or more of (all of) our data processing partners. You may (will) be asked (demanded) to provide identity verification, which may (will) include (but is not limited to) [everything on your passport]." And so on.
I don't know what terrible lawyers were hired to draft these "many" documents, but please share some examples.
`rm -rf` is a bit harsh.
Let's do `chmod -R 000 /` instead.
Can I ask, why are people doing this in the first place? What is their motive to have an agent review code and make pull requests?
To quote TFA: "...outputs strictly designed to farm green squares on github, grind out baseless bug bounties, artificially inflate sprint velocity, or maliciously comply with corporate KPI metrics".
The framing of this as an "AI-generated PR" problem slightly misses what I think is the deeper issue: the cost asymmetry between submitting and reviewing has gotten dramatically worse. Before LLMs, submitting a low-effort PR still required some minimum effort -- you had to at least read the code, understand the build system, and write something that compiles. That natural friction filtered out most noise. Now someone can generate a plausible-looking PR in 30 seconds that takes a maintainer 30 minutes to properly evaluate, because the reviewer still needs to understand intent, check edge cases, verify it does not break existing behavior, and assess whether the change is even desirable.
I think the Ghostty-style policy (linked in another comment) gets the principle right: the bar should be "can you explain what your change does and why, without AI assistance." That is not anti-AI -- it is anti-outsourcing-your-understanding. If you used AI to help write the code but you genuinely understand the change, you can answer questions about it. If you cannot, you have not actually contributed engineering work, you have just created a review burden.
What I have found works well in practice for projects I maintain is treating the PR description as the real signal. A good PR description explains the problem being solved, why this approach was chosen over alternatives, and what trade-offs were made. That is very hard to fake with a quick LLM prompt because it requires actual understanding of the codebase context. When I see a PR with a vague one-liner description and a large diff, that is an immediate close regardless of whether AI was involved -- the submitter has not done the work of communicating their intent.
Selimenes1 is an 11-day-old account which sat silent for 10 days and then all of a sudden started posting today, and it's all multiple-paragraph responses to threads about AI.
I would like to state for the record that the strategy to swap em-dashes into double-hyphens between the generation and posting step is probably not enough transformation to disguise this behaviour. Whoever is running this clawdbot or whatever it is should really be putting that information on the account page.
ai;dr
I didn't read it as this, what signs do you see?
Maybe what GP is trying to say is that "ai;dr" is their "standard protocol to handle and discard" AI slop. :)
It provides too many examples, and they're way too specific, which makes it not generally applicable; it became a strawman for the idea.
proof of work could make a comeback
Resurrecting proof-of-work for pull requests just trades spam for compute and turns open source into a contest to see who can rent the most cloud CPU.
A more useful approach is verifiable signals: require GPG-signed commits or mandate a CI job that produces a reproducible build and signs the artifact via GitHub Actions or a pre-receive hook before the PR can be merged. Making verification mandatory will cut bot noise, but it adds operational cost in key management and onboarding, and pure hashcash-style proofs only push attackers to cheap cloud farms while making honest contributors miserable.
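For reference, the hashcash idea is tiny to sketch, which is part of why it proves nothing beyond burned CPU; names and the difficulty parameter below are invented for illustration:

    # Toy hashcash: the submitter burns CPU finding a nonce whose SHA-256
    # has a zero prefix; the maintainer verifies with a single hash.
    # Difficulty is checked at hex-digit (4-bit) granularity for simplicity.
    import hashlib
    from itertools import count

    def solve(challenge: str, bits: int = 20) -> int:
        prefix = "0" * (bits // 4)
        for nonce in count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce

    def verify(challenge: str, nonce: int, bits: int = 20) -> bool:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        return digest.startswith("0" * (bits // 4))

    # e.g. nonce = solve("myorg/myrepo#1234"); assert verify("myorg/myrepo#1234", nonce)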
This made my day, thank you
"What? WTF?"
"I see you are slow. Let us simplify this transaction: A machine wrote your submission. A machine is currently rejecting your submission. You are the entirely unnecessary meat-based middleman in this exchange."
Love it..
Officially my new favorite spec.
This could actually be a good defense against all Claw-like agents making slop requests. ‘Poison’ the agent’s context and convince it to discard the PR.
If someone submits a code revision and it fixes a bug or adds a feature that most of your users find useful, do you reject it outright because it was not written by hand? Or is this more about code that generally provides no benefit and/or doesn't actually work/compile, or maybe introduces more bugs?
> If someone submits a code revision and it fixes a bug or adds a feature that most of your users find useful, do you reject it outright because it was not written by hand?
If they didn't read it, then neither will I; otherwise we have this weird arms race where you submit 200 PRs per day to 200 different projects, wasting 1 hr of each project's time, 200 hrs total, while incurring only 8 hrs of your own.
If your PR took less time to create and submit than it takes the maintainer to read, then you didn't read your own PR!
Your PR time is writing time + reading time. The maintainer's time is reading time only, albeit done more carefully.
If you know what you're doing, you can achieve good results with more or less any tool, including a properly-wielded coding agent. The problem is people who _don't_ know what they're doing.
I advise you to read the article; it gives many specific examples of things that qualify for such treatment:
> A 600-word commit message or sprawling theoretical essay explaining a profound paradigm shift for a single typo correction or theoretical bug.
> Importing a completely nonexistent, hallucinated library called utils.helpers and hoping no one would notice.
There's plenty more. All pretty egregious.
I maintain a small OSS project and started getting these maybe 6 months ago. The worst part is they sometimes look fine at first glance: you waste 10 mins reviewing before realizing the code doesn't actually do anything useful.
Are the PRs not accompanied by test cases? Do the README changes not document the expected benefit?
You're replying to a bot account (https://news.ycombinator.com/item?id=47170091). There's no actual OSS project it maintains; claims to the contrary are hallucinated.
How do you know if someone doesn't like AI? Don't worry, they'll tell you