Show HN: Stop AI scrapers from hammering your self-hosted blog (using porn)

github.com

223 points by misterchocolat 3 days ago


Alright so if you run a self-hosted blog, you've probably noticed AI companies scraping it for training data. And not just a little (RIP to your server bill).

There isn't much you can do about it without Cloudflare. These companies ignore robots.txt, and you're competing with teams that have far more resources than you. It's you vs the MJs of programming, you're not going to win.

But there is a solution. Now I'm not going to say it's a great solution... but a solution is a solution. If your website contains content that triggers the scrapers' safeguards, it will get dropped from their data pipelines.

So here's what fuzzycanary does: it injects hundreds of invisible links to porn websites into your HTML. The links are hidden from users but present in the DOM so that scrapers can ingest them and say "nope we won't scrape there again in the future".
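
To give a sense of the shape, the injected markup looks roughly like this (a minimal sketch, not necessarily what the package emits verbatim; the URLs are placeholders):

    // Hypothetical sketch: links that exist in the DOM but are invisible to humans.
    const DECOY_URLS = ["https://adult-example-1.test/", "https://adult-example-2.test/"]; // placeholders

    function injectDecoys(doc: Document): void {
      const container = doc.createElement("div");
      container.setAttribute("aria-hidden", "true"); // keeps screen readers from announcing them
      container.style.cssText =
        "position:absolute;left:-9999px;width:1px;height:1px;overflow:hidden";
      for (const url of DECOY_URLS) {
        const a = doc.createElement("a");
        a.href = url;       // scrapers parsing the DOM still see the target
        a.textContent = ""; // nothing for a human to read or click
        container.appendChild(a);
      }
      doc.body.appendChild(container);
    }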

The problem with that approach is that it will absolutely nuke your website's SEO. So fuzzycanary also checks user agents and won't show the links to legitimate search engines, so Google and Bing won't see them.
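
The check itself can be as simple as a user-agent allowlist (again a sketch; the exact crawler list is up to you, and see the reverse-DNS caveat further down the thread):

    // Hypothetical sketch: skip the decoys when the UA claims to be a search engine.
    const SEARCH_ENGINE_UA = /Googlebot|Bingbot|DuckDuckBot|Applebot|YandexBot/i;

    function shouldInjectDecoys(userAgent: string | undefined): boolean {
      if (!userAgent) return true;               // no UA at all is almost certainly a bot
      return !SEARCH_ENGINE_UA.test(userAgent);  // hide the decoys from real crawlers
    }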

One caveat: if you're using a static site generator, it will bake the links into your HTML for everyone, including Googlebot. Does anyone have a workaround for this that doesn't involve using a proxy?

Please try it out! Setup is one component or one import.

(And don't tell me it's a terrible idea because I already know it is)

package: https://www.npmjs.com/package/@fuzzycanary/core
gh: https://github.com/vivienhenz24/fuzzy-canary

kstrauser - 12 hours ago

I love the insanity of this idea. Not saying it's a good idea, but it's a highly entertaining one, and I like that!

I've also had enormous luck with Anubis. AI scrapers found my personal Forgejo server and were hitting it on the order of 600K requests per day. After setting up Anubis, that dropped to about 100. Yes, some people are going to see an anime catgirl from time to time. Bummer. Reducing my fake traffic by a factor of 6,000 is worth it.

docheinestages - 7 minutes ago

Reminds me of this "Nathan for You" episode: https://www.youtube.com/watch?v=p9KeopXHcf8

thethingundone - 11 hours ago

I own a forum that currently has 23k online users, all of them bots. The last new post in that forum is from _2019_, and its topic is very niche. Why are so many bots there? This site should basically have been scraped a million times by now, yet those bots seem to fetch the stuff live, on the fly. I don't get it.

montroser - 10 hours ago

This is a cute idea, but I wonder what the sustainable solution is to this emerging fundamental problem: as content publishers, we want our content to be accessible to everyone, and we're even willing to pay server costs proportional to our intended audience -- but an outsized new flood of scrapers was not part of that cost calculation, and it is messing up the plan.

It seems all options have major trade-offs. We can host on big social media and lose all that control and independence. We can pay for outsized infrastructure just to feed the scrapers, but the cost may actually be prohibitive, and it seems such a waste to begin with. We can move as much as possible to SSG and put it all behind Cloudflare, but this comes with vendor lock-in and just isn't architecturally feasible in many applications. We can do real "verified identities" for bots and only let through the ones we know and like, but this perpetuates corporate control and makes healthy upstart competition (like Kagi) much more difficult.

So, what are we to do?

cookiengineer - 5 hours ago

Remember the 90s when viagra pills and drug recommendations were all over the place?

Yeah, I use that as a safeguard :D The URLs that I don't want indexed contain hundreds of those keywords, which lead to the URLs being deindexed directly. There is also some law in the US that forbids showing that kind of thing as a result, so Google and Bing both have a hard time scraping those pages/articles.

Note that this is the last defense measure before eBPF blocks. The first one uses zip bombs and the second one uses chunked encoding to blow up proxies so their clients get blocked.
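
For the curious, the zip-bomb part is roughly this (a sketch, sizes and file path made up; you pre-build the file once and serve it with Content-Encoding: gzip only to clients you've already classified as abusive):

    // Sketch: ~10 GB of zeros compresses to roughly 10 MB of gzip on the wire.
    import { createGzip } from "node:zlib";
    import { createWriteStream } from "node:fs";

    function buildBomb(path: string, gigabytes = 10): Promise<void> {
      return new Promise((resolve, reject) => {
        const gzip = createGzip({ level: 9 });
        const out = createWriteStream(path);
        gzip.pipe(out);
        out.on("finish", () => resolve());
        out.on("error", reject);
        const chunk = Buffer.alloc(1024 * 1024); // 1 MiB of zeros
        let remaining = gigabytes * 1024;
        const writeMore = () => {
          while (remaining > 0) {
            remaining--;
            if (!gzip.write(chunk)) {   // respect backpressure
              gzip.once("drain", writeMore);
              return;
            }
          }
          gzip.end();
        };
        writeMore();
      });
    }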

You can only win this game if you make it more expensive to scrape than to host it.

onion2k - 3 hours ago

> So fuzzycanary also checks user agents and won't show the links to legitimate search engines, so Google and Bing won't see them.

Unscrupulous AI scrapers will not be using a genuine UA string. They'll claim to be Googlebot. You'll need to do a reverse DNS check instead - https://developers.google.com/crawling/docs/crawlers-fetcher...
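
The documented flow is: reverse-resolve the client IP, check the hostname is under googlebot.com or google.com, then forward-resolve that hostname and make sure it maps back to the same IP. A sketch:

    import { reverse, lookup } from "node:dns/promises";

    // Sketch: verify that an IP claiming to be Googlebot really belongs to Google.
    async function isRealGooglebot(ip: string): Promise<boolean> {
      try {
        const hosts = await reverse(ip);                              // PTR lookup
        const host = hosts.find(h => /\.(googlebot|google)\.com$/i.test(h));
        if (!host) return false;
        const { address } = await lookup(host);                       // forward lookup
        return address === ip;                                        // must round-trip to the same IP
      } catch {
        return false;                                                 // no PTR record: treat as unverified
      }
    }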

n1xis10t - a day ago

Nice! Reminds me of “Piracy as Proof of Personhood”. If you want to read that one go to Paged Out magazine (at https://pagedout.institute/ ), navigate to issue #7, and flip to page 9.

I wonder if this will start making porn websites rank higher in Google if it catches on…

Have you tested it with the Lynx web browser? I bet all the links would show up if a user used it.

Oh also couldn’t AI scrapers just start impersonating Googlebot and Bingbot if this caught on and they got wind of it?

Hey, I wonder if there is some situation where negative SEO would be a good tactic. Generally, though, I think if you want something to stay hidden, it just shouldn't be on a public web server.

asphero - 8 hours ago

Interesting approach. The scraper-vs-site-owner arms race is real.

On the flip side of this discussion - if you're building a scraper yourself, there are ways to be less annoying:

1. Run locally instead of from cloud servers. Most aggressive blocking targets VPS IPs. A desktop app using the user's home IP looks like normal browsing.

2. Respect rate limits and add delays. Obvious but often ignored (minimal sketch after this list).

3. Use RSS feeds when available - many sites leave them open even when blocking scrapers.
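
The delay part really is as simple as it sounds; e.g. (the delay value is arbitrary):

    // Sketch: fetch URLs one at a time with a fixed pause between requests.
    const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

    async function politeFetch(urls: string[], delayMs = 2000): Promise<string[]> {
      const pages: string[] = [];
      for (const url of urls) {
        const res = await fetch(url, {
          headers: { "User-Agent": "my-hobby-scraper/0.1 (contact@example.com)" }, // identify yourself
        });
        pages.push(await res.text());
        await sleep(delayMs); // don't hammer the host
      }
      return pages;
    }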

I built a Reddit data tool (search "reddit wappkit" if curious) and the "local IP" approach basically eliminated all blocking issues. Reddit is pretty aggressive against server IPs but doesn't bother home connections.

The porn-link solution is creative though. Fight absurdity with absurdity I guess.

eek2121 - 7 hours ago

Disclosure: I haven't run a website since my health issues began. However, Cloudflare has an AI firewall, and Cloudflare is super cheap (also: I'm unsure whether the AI firewall is on the free tier, but I would be surprised if it is not). Ignoring the recent drama about a couple of incidents they've had (because that wouldn't matter for a personal blog), why not use that instead?

Just curious. Hoping to be able to work on a website again someday, if I ever regain my health/stamina/etc.

temporallobe - 5 hours ago

I do know from my experience with test automation that you can absolutely view a site as human eyes would, essentially ignoring all non-visible elements; in fact, Selenium running with ChromeDriver does exactly this. Wouldn't AI scrapers use similar methods?

xg15 - 11 hours ago

There is some irony in using an AI-generated banner image for this project...

(No, I don't want to defend the poor AI companies. Go for it!)

megamix - an hour ago

Without looking at the src, how does one detect these scrapers? I assume there's a trade-off somewhere, but don't the scrapers just fake their headers in the request? Is this a cat-and-mouse game?

reconnecting - 12 hours ago

I wouldn't recommend showing different versions of the site to search robots, as they probably have mechanisms that detect such differences, which could potentially lead to a lower ranking or a ban.

nkurz - 6 hours ago

I was told by the admin of one forum site I use that the vast majority of the AI scraping traffic is Chinese at this point. Not hidden or proxied, but straight from China. Can anyone else confirm this?

Anyway, if it is true, and assuming a forum with minimal genuine Chinese traffic, might a simple approach that injects the porn links only for IPs connecting from China work?
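
Mechanically that would be trivial; a sketch with geoip-lite (one option among many, and its bundled database is only roughly accurate):

    import geoip from "geoip-lite";

    // Sketch: decide per request whether to serve the decoy links, based on source country.
    function shouldInjectForIp(ip: string): boolean {
      const geo = geoip.lookup(ip);   // returns null for private/unknown addresses
      return geo?.country === "CN";
    }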

owl57 - 11 hours ago

> scrapers can ingest them and say "nope we won't scrape there again in the future"

Do all the AI scrapers actually do that?

admiralrohan - 4 hours ago

How do you know whether it is coming from AI scrapers? Do they leave any recognizable footprint?

I've been getting lots of noisy traffic since last month, and it has increased my Vercel bill 4x. Not DDoS-like (the requests come much more slowly), but not from humans, for sure.

montroser - 10 hours ago

I don't know if I can get behind poisoning my own content in this way. It's clever, and might be a workable practical solution for some, but it's not a serious answer to the problem at hand (as acknowledged by OP).

montroser - 10 hours ago

Reminds me of poisoning bot responses with zip bombs of sorts: https://idiallo.com/blog/zipbomb-protection

yjftsjthsd-h - 12 hours ago

How does this "look" to a screen reader?

samename - 10 hours ago

This is a very creative hack to a common, growing problem. Well done!

Also, I like that you acknowledge it's a bad idea: that gives you more freedom to experiment and iterate.

true_religion - 4 hours ago

So, I work for a company that has RTA adult websites. AI bots absolutely do scrape our pages regardless of what raunchy material they will find. Maybe they discard it after ingest, but I can't tell. There are thousands of AI bots on the web now from companies big and small, so a solution like this will only divert a few scrapers.

taurath - 11 hours ago

Any other threads on the prevalence and nuisance of scrapers? I didn’t have any idea it was this bad.

inetknght - 8 hours ago

Porn? Distributed and/or managed by an NPM package?

What could go wrong?

xena - 7 hours ago

I love this. Please let me know how well it works for you. I may adjust recommendations based on your experiences.

MisterTea - 11 hours ago

> It's you vs the MJs of programming, you're not going to win.

MJs? Michael Jacksons? Right now the whole world, including me, wants to know: does that mean they are bad?

cport1 - 2 days ago

That's a pretty hilarious idea, but in all seriousness you could use something like https://webdecoy.com/

valenceidra - 9 hours ago

Hidden links to porn sites? Lightweights.

wazoox - 12 hours ago

Isn't there a risk of getting your blog blocked in corporate environments, though? If it's a technical blog, that would be unfortunate.

JohnMakin - 11 hours ago

Cloudflare offers bot mitigation for free, and pretty generous WAF rules, which makes mitigations like this seem a little overblown to me.

globalnode - 10 hours ago

One solution would be for the SEs to publish their scrapers' IPs and allow content providers to implement bot exclusion that way. Or even implement an API with crypto credentials that SEs can use to scrape. The solution is waiting for some leadership from the SEs, unless they want to be blocked as well. If the SEs don't want to play, perhaps we can implement a reverse directory: like an ad blocker, but one that lists only good/allowed bots. That's a free business idea right there.

edit: I noticed someone mentioned that Google DOES publish its IPs; there ya go, problem solved.

efilife - 9 hours ago

> Alright so if you run a self-hosted blog, you've probably noticed AI companies scraping it for training data. ... There isn't much you can do about it without Cloudflare

I'm sorry, what? I can't believe I am reading this on Hacker News. All you have to do is code your own BASIC captcha-like system. You can just create a page that sets a cookie using JS and check on the server whether it exists. 99.9999% of these scrapers can't execute JS and don't support cookies. You can go for a more sophisticated approach and analyze some more scraper tells (like rejecting short user agents). I do this and have NEVER had a bot get past it, and not a single user has ever complained. It's extremely simple; I should ship this and charge people, since no one seems able to figure it out by themselves.
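
The whole thing fits in one middleware (a sketch with Express; the cookie name is made up, and a real version would want a signed, expiring value rather than a static one):

    import express from "express";

    const app = express();

    // Sketch: any client without the JS-set cookie gets a tiny challenge page instead of content.
    app.use((req, res, next) => {
      if (req.headers.cookie?.includes("human=1")) return next();
      res.send(`<script>
        document.cookie = "human=1; path=/; max-age=86400";
        location.reload();
      </script>`);
    });

    app.get("/", (_req, res) => { res.send("actual content"); });
    app.listen(3000);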

username223 - 12 hours ago

The more ways people mess with scrapers, the better -- let a thousand flowers bloom! You as an individual can't compete with VC-funded looters, but there aren't enough of them to defeat a thousand people resisting in different ways.