cURL removes bug bounties
etn.se | 126 points by jnord 2 hours ago
An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.
Then again, I once submitted a bug report to my bank, because the login method could be switched from password+PIN to PIN only while not logged in, and they closed it as "works as intended": they had decided that an optional password was more convenient than a required one. (And that's not even getting into the difference between real two-factor authentication and the one-and-a-half-factor scheme they had implemented by adding a PIN to a password login.) I've since learned that anything heavily regulated, like hospitals and banks, will have security procedures catering to compliance, not actual security.
Assuming the host of the bug bounty program is operating in good faith, adding some kind of barrier to entry or punishment for untested entries will weed out submitters acting in bad faith.
Bug bounties often involve a lot of risk for submitters. Often the person reading the report doesn't know that much and misinterprets it. Often the rules are unclear about what sort of reports are wanted. A pay-to-enter scheme would increase that risk.
Honestly, bug bounties are kind of miserable for both sides. I've worked on the receiving side of bug bounty programs. You wouldn't believe the shit that is submitted. This was before AI, and it was significant work to sort through; I can only imagine what it's like now. On the other hand, as a submitter you are essentially working on spec with no guarantee your work is going to be evaluated fairly. Even if it is, you are rolling the dice that your report is not a duplicate of an issue reported 10 years ago that the company just doesn't feel like fixing.
Pay-to-enter would increase the risk of submitting a bug report. However, if the submission fees were added to the bounty payable, the risk/reward would shift in favour of submitters of genuine bugs. You could even refund the submission fee in the case of a good-faith non-bug submission. A little game theory can go a long way in improving the bug bounty system...
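The game-theory point can be made concrete with a toy expected-value model (all numbers below are hypothetical illustrations, not real bounty figures): a refundable deposit that is forfeited on bogus reports flips the expected payoff negative for a slop submitter whose reports almost never pan out, while barely denting a careful researcher's.

```python
# Toy expected-value model of a refundable submission deposit.
# All probabilities and dollar amounts are hypothetical.

def expected_payoff(p_valid: float, bounty: float, deposit: float) -> float:
    """Expected payoff per report when the deposit is refunded
    (and added to the payout) only for valid reports."""
    win = bounty + deposit   # valid report: bounty plus the deposit back
    loss = -deposit          # invalid report: the deposit is forfeited
    return p_valid * win + (1 - p_valid) * loss

# A slop submitter: roughly 1% of reports turn out to be real.
slop = expected_payoff(p_valid=0.01, bounty=500, deposit=50)

# A careful researcher: roughly 60% of reports turn out to be real.
researcher = expected_payoff(p_valid=0.60, bounty=500, deposit=50)

print(f"slop submitter EV per report:     {slop:+.2f}")
print(f"careful researcher EV per report: {researcher:+.2f}")
```

With these (made-up) numbers the slop submitter loses about $44 per report while the researcher still clears about $310, which is the asymmetry the deposit is meant to exploit.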
> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.
I refer to this as the Notion-to-Confluence cost border.
When Notion first came out, it was snappy and easy to use. Because creating a page was essentially free of effort, you very quickly had thousands of them, mostly useless.
Confluence, at least in western EU, is offensively slow. The thought of adding a page is sufficiently demoralizing that it's easier to update an existing page and save yourself minutes of request timeouts. Consequently, there are only some ~20 pages even in large companies.
I'm not saying that sleep(15 * SECOND) is the way to counter, but once something becomes very easy to do at scale, it explodes to the point where the original utility is now lost in a sea of noise.
It's strange how sensitive humans are to these sorts of relative perceived efforts. Having a charged, cordless vacuum cleaner ready to grab and take around the house has also changed our vacuuming game, because hauling a big unwieldy vacuum cleaner and needing to find a power socket at every location just feels like much more effort. Even though it really isn't.
I find this to be a very amusing critique. In my experience, Notion (when I stopped using it 3 years ago) was slow as molasses. Slow to load, slow to update. In comparison, at work, I almost exclusively favor Confluence Cloud. It's very responsive for me.
We have tons of Confluence wikis, updated frequently.
> Consequently, there's some ~20 pages even in large companies.
As someone working on Confluence to XWiki migration tools, I wish this was remotely true, my life would be way easier (and probably more boring :-)).
That anecdote is hilarious and scary in equal measure. Optional passwords are certainly more convenient than required ones, but so are optional PINs. The most convenient UX would be never needing to log in at all! Unless you find it inconvenient for others to have access to your bank account, of course.
I really hate the current trend of not having passwords. For example, Perplexity doesn't have a password, just an email verification to log in.
I hate this as well, especially since I have greylisting enabled on some email addresses, so by the time the login email is delivered, the login session has already timed out, and of course the sender uses different mail servers every time. So in some cases it's nearly impossible to log in, and it takes minutes...
Agreed, although the reimbursement should be based on whether a reasonable person could consider that to be a vulnerability. Often it’s tricky for outsiders to tell whether a behaviour is expected or a vulnerability
> An entry fee that is reimbursed if the bug turns out to matter would stop this, real quick.
The problem is that bug bounty slop works. A lot of companies with second-tier bug bounties outsource triage to contractors (there's an entire industry built around that). If a report looks plausible, the contractor files a bug. The engineers who receive the report are often not qualified to debate exploitability, so they just make the suggested fix and move on. The reporter gets credit or a token payout. Everyone is happy.
Unless you have a top-notch security team with a lot of time on their hands, pushing back is not in your interest. If you keep getting into fights with reporters, you'll eventually get it wrong and you're gonna get derided on HN and get headlines about how you don't take security seriously.
In this model, it doesn't matter if you require a deposit, because on average, bogus reports still pay off. You also create an interesting problem that a sketchy vendor can hold the reporter's money hostage if the reporter doesn't agree to unreasonable terms.
I don’t think it works for curl though. You would guess that sloperators would figure out that their reports aren’t going through with curl specifically (because, well, people are actually looking into them and can call bullshit), and move on.
For some reason they either didn’t notice (e.g. there’s just too many people trying to get in on it), or did notice, but decided they don’t care. Deposit should help here: companies probably will not do it, so when you see a project requires a deposit, you’ll probably stop and think about it.
Triage gets outsourced because the quality of reports is low.
If filing a bad report costs money, low-quality reports go down. Meanwhile, anyone still doing it is funding your top-notch security team: they can thoroughly investigate the report, and if it turns out to be nothing, the reporter ends up paying for their time.
It seems open source loses the most from AI. Open source code trained the models, the models are being used to spam open source projects anywhere there's incentive, they can be used to chip away at open source business models by implementing paid features and providing the support, and eventually perhaps AI simply replaces most open source code.
> they can be used to chip away at open source business models by implementing paid features and providing the support
There are a lot of things to be sad about with AI, but this is not one of them. Nobody has a right to a business model, especially one that assumes nobody will compete with you. If your business model relies on the rest of the world being sucky so you can sell some value-add on top of open-core software, I'm happy when it fails.
I wouldn't say open source code solely trained the models; surely CS courses and textbooks, official documentation, and transcripts of talks and courses all factor in as well.
On another note, regarding AI replacing most open source code: I forget what tool it was, but I needed a very niche way of accessing an old Android device. It was rooted, but if I used something like Disk Drill it would eventually crap out empty files. So I found a GUI someone had made and started asking Claude to add things I needed: a) let me preview the directories it was seeing, b) let me sudo up, and c) let me download with a reasonable delay (1s, I think). That basically worked, and I never had issues again. It was a little slow to recover old photos, but oh well.
I debated pushing the code changes back to GitHub; it works as expected, but I'm sure it drifted from the maintainer's own goals.
"open source" and "business model" in the same sentence... next you're gonna tell me to eat pudding with a fork.
https://en.wikipedia.org/wiki/Business_models_for_open-sourc...
I guess you should try eating pudding with a fork next
I mean... not what the other poster meant, but https://en.wikipedia.org/wiki/Sticky_toffee_pudding exists and is absolutely delicious.
I believe that the existence of not-for-profit organizations is a valid counterpoint to whatever your argument is.
The company I work for has a pretty bad bounty system (basically a security@corp email). We have a demo system and a public API with docs. We now get around 100 or more emails a day. Most of it is slop, scams, or my new favourite: AI security companies sending us AI-generated pentests, unprompted, filled with false positives, untrue claims, etc. It has become completely useless, so no one looks at it.
I even had a sales rep call me up, basically trying to book a 3-hour session to review the AI findings, unprompted. When I looked at the nearly 250-page report and saw a critical IIS bug for Windows Server (doesn't exist) at a scanned IP address of 5xx.x.x.x (yes, an impossible IP), publicly available in AWS (we exclusively use GCP), I said some very choice words.
What I wonder is if this will actually reduce the amount of slop.
Bounties are a motivation, but there's also promotional purposes. Show that you submitted thousands of security reports to major open source software and you're suddenly a security expert.
Remember the little IoT thing that got on here because of a security report complaining, among other things, that the Linux on it did not use systemd?
I don't think bounties make you an "expert". If you want to be deemed an expert, write blog posts detailing how the exploit works. You can do that without a bounty.
In many ways one of the biggest benefits of bug bounties is having a dedicated place where you can submit reports and you know the person on the other end wants them and isn't going to threaten to sue you.
For the most part, the money in a bug bounty isn't worth the effort needed to actually find stuff. The exception seems to be when you find some basic bug that you can automatically scan half the internet for and submit to 100 different bug bounties.
> I don't think bounties make you an "expert".
It depends on who's asking.
> If you want to be deemed an expert, write blogs detailing how the exploit works.
That's necessary if you sell your services to people likely to enjoy HN.
I just read one of the slop submissions and it's baffling how anyone could submit these with a straight face.
https://hackerone.com/reports/3293884
Not even understanding the expected behaviour and then throwing as much slop as possible to see what sticks is the problem with generative AI.
A list of the slop if anyone is interested:
https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
In the second report, Daniel greeted the slopper very kindly and tried to start a conversation with them. But the slopper calls him by the completely wrong name. And this was December 2023. It must have been extremely tiring.
> slopper
First new word of 2026. Thank you.
All of those reports are clearly AI, and it's weird seeing the staff not recognizing them as AI and treating them seriously.
I thought the same, except I realised some of the reports were submitted back in 2023, before AI slop exploded.
I looked at two reports, and I can't tell whether they came directly from an AI or from a very junior student who doesn't really understand security. LLMs generally sound more convincing to me.
Some (most?) are LLM chat copy-pastes, addressing nonexistent users in the conversations, like [0]. What a waste of time.
> To replicate the issue, I have searched in the Bard about this vulnerability.
Seeing Bard mentioned as an LLM takes me back :)
Honestly infuriating to read. I'm so surprised cURL put up with this for so long
related: cURL stopped HackerOne bug bounty program due to excessive slop reports https://news.ycombinator.com/item?id=46678710
Alternate headline: AI discovering so many exploits that cybersecurity can't keep up
Am I doing this right?
There is a difference between AI discovering real vulnerabilities (e.g. the ffmpeg situation) and AI being used to spam fake vulnerabilities.
Just use an LLM to weed them out. What’s so hard about that?
Because LLMs are bad at reviewing code for the same reasons they are bad at making it? They get tricked by fancy clean syntax and take long descriptions / comments for granted without considering the greater context.
How would it work if LLMs provide incorrect reports in the first place? Have a look at the actual HackerOne reports and their comments.
The problem is the complete stupidity of people. They use LLMs to convince the author of curl that he is not correct in saying the report is hallucinated. Instead of generating ten LLM comments and doubling down on their incorrect report, they could use a bit of brain power to actually validate it. That doesn't even require a lot of skill; you just have to test it manually.
If AI can't be trusted to write bug reports, why should it be trusted to review them?
At this point it's impossible to tell if this is sarcasm or not.
Brave new world we've got here.