Meta projected 10% of 2024 revenue came from scams
sherwood.news | 739 points by donohoe 4 days ago
Alongside a password manager and keeping things up to date, using an ad blocker is truly a foundational security practice these days. The big advertising players simply have all of the wrong incentives to control this problem. They could massively reduce the volume of scams advertised on their networks, but it’d be worse for them on two fronts: they’d have to pay for more moderation, and they’d lose billions in revenue in the process. Shoulder surfing while a non-savvy user browses Facebook or YouTube without an ad blocker and engages with obviously fraudulent ads is painful.
I don't see how the yearly tech support I do with my parents at Christmas won't one day converge on an outright ban of the internet. I now demo the level of sophistication of AI-powered scams, telling them that it is entirely possible they will get a VIDEO CALL from someone who isn't actually me, asking for God knows what in a very convincing way using my face and voice. I am scared, and this close to setting up a secret passphrase so they can tell me apart from a clone.
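(If anyone wants to go a step past a memorized passphrase, here's a minimal sketch of the same idea as a challenge-response check, assuming a shared secret agreed in person; the names FAMILY_SECRET, make_challenge and respond are invented for illustration, and it only uses Python's stdlib hmac/secrets.)

    import hmac, hashlib, secrets

    # Assumption: the secret was agreed in person, never shared over a call or chat.
    FAMILY_SECRET = b"agreed in person, offline"

    def make_challenge() -> str:
        # The person receiving the suspicious call picks a random challenge and reads it aloud.
        return secrets.token_hex(4)

    def respond(challenge: str) -> str:
        # Only someone holding the shared secret can compute the short response.
        digest = hmac.new(FAMILY_SECRET, challenge.encode(), hashlib.sha256).hexdigest()
        return digest[:6]  # short enough to read back over the phone

    challenge = make_challenge()
    print(challenge, respond(challenge))  # the challenger verifies the read-back answer matches

A plain spoken passphrase is obviously easier for parents; the point of the sketch is just that the secret itself is never spoken aloud, so a scammer listening in can't capture and reuse it.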
My guess is the already-existing trend towards walled gardens will simply continue. When a public space is dangerous, people retreat into "safe" enclosed spaces.
- "Never download anything unless it's from the Apple App Store"
- "Never buy anything unless you're on amazon.com"
- "Dont use the internet outside of ChatGPT"
Yes, but observe that for all three of the things that immediately came to mind, you have, respectively: 1. a thing that still has a lot of scams in it (though it may be the best of the three) [1]; 2. a thing so full of scams and fake products that using it is already a minefield (one my mother-in-law is already incapable of navigating successfully, judging by the number of shirts my family has received with lazy AI-generated art [2]); and 3. a thing well known for generating false statements and incorrect conclusions.
I'm actually somewhat less critical of Apple/Google/Facebook/etc. than most readers probably are, on the grounds that it simply isn't possible to build a "walled garden" at the scale of the entire internet. It is not possible for Big Tech to exclude scammers. The scammers collectively are throwing more brainpower at the problem than even Big Tech can afford to, and the game theory is not entirely unlike my efforts to keep my cat off my kitchen counter: it doesn't matter how diligent I am. The 5% of the time the cat gets up there and finds a morsel of shredded cheese, or licks some dribble of something barely large enough for me to notice but a taste explosion for the much-smaller cat, means I'm never going to win this fight. The cat has all day. I'm doing dozens of other things.
There's no way to build a safe space that retains the current size and structure of the internet. The scammers will always be able to overpower whatever the walled garden can bring to bear, because there are so many of them and they have at least an order of magnitude more resources... and that's being very conservative; I think I could safely say two, and I wouldn't be all that surprised if an omniscient narrator told us it's already over three.
[1]: https://9to5mac.com/2025/09/25/new-study-shows-massive-spike...
[2]: To forestall any AI debate, let me underline the word "lazy" here. Most recently we received a shirt with a very large cobra on it; the cobra has at least three pupils in each eye (depending on how you count) and some very eye-watering geometry for the sclera between them. Quite unpleasant to look at. What we're getting down the pipeline now is from some very out-of-date models.
I don’t accept the excuse that it’s too hard. If they have to spend $10 billion per year to maintain an acceptable level of trust on their platforms, then so be it. It’s the cost of doing business. If I went into a mall and opened up a fake Wells Fargo branch, it would be shut down almost instantly by human intervention. These are the conditions most businesses run under. Why should these platforms be given such leeway just because ‘it’s hard’? Size and scale shouldn’t be an excuse. If it’s not viable to prevent fraud, then they don’t have a viable business.
We have laws on truth in advertising, and we should start holding advertising channels liable if they don't do enough due diligence.
Yes, it's not that it's impossible; it's that it's impossible while operating how they want to operate, scaling as much as they want to scale, and profiting as much as they want to profit. But no business model that can't be pursued both ethically and profitably should be excused as simply, inevitably unethical. It should be regulated and/or banned.
YouTube regularly shows me ads that fit that analogy quite well. The ECB and Elon Musk take turns offering me guaranteed monthly deposits in my account for one-time 200 and 400 euro fees. The deepfakes are intentionally bad enough to filter for good victims.
You wouldn't even need a human to review these ads, and even if you did insert one, it wouldn't be expensive.
But what actually is an acceptable level of trust? Acceptable for whom? For the billionaires, it's good enough if outside is worse, or even if it merely appears worse.
> It is not possible for Big Tech to exclude scammers
It's 100% possible. It might not be profitable.
An app store doesn't have the "The optimum amount of fraud is not zero" problem. Preventing fraudulent apps is not a probability problem, you can actually continuously improve your capability without also blocking "good" apps accidentally.
Meanwhile, Apple regularly stymies developers trying to release updates to apps that already work and are used by many people, over random things.
And despite that, they let through clear and obvious scams like a "LastPass" app not made by LastPass. That's just unacceptable. It should never be possible to get a scam through under someone else's trademark. There's no excuse.
> Preventing fraudulent apps is not a probability problem
Unfortunately it is. You've even provided examples of a false positive and a false negative. Every discrimination process is going to have those at some rate. It might become very expensive for developers to go through higher levels of verification.
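To make that tradeoff concrete, here's a toy sketch (the scores and labels are invented, assuming a score-based review pipeline): wherever you put the approval threshold, you either wave some scams through or block some legitimate apps.

    # Invented (review_score, actually_fraudulent) pairs for a batch of submitted apps.
    apps = [
        (0.05, False), (0.20, False), (0.35, False), (0.40, True),
        (0.55, False), (0.70, True), (0.80, True), (0.95, True),
    ]

    for threshold in (0.30, 0.50, 0.75):
        scams_approved = sum(1 for score, bad in apps if bad and score < threshold)
        good_blocked = sum(1 for score, bad in apps if not bad and score >= threshold)
        print(f"threshold={threshold}: {scams_approved} scams approved, "
              f"{good_blocked} legitimate apps blocked")

Tighten the threshold and you block the fake LastPass, but you also stall more of those perfectly legitimate updates; loosen it and the clone slips through.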
No, it's already a solved problem. For instance newspapers moderate and approve all content that they print. While some bad actors may be able to sneak scams in through classifieds, the local community has a direct way to contact the moderators and provide feedback.
The answer is that it just takes a lot of people. What if no content could appear on Facebook until it passed a human moderation process?
As the above poster said, this is not profitable which is why they don't do it. Instead they complain about how hard it is to do programmatically and keep promising they will get it working soon.
A well functioning society would censure them. We should say that they're not allowed to operate in this broken way until they solve the problem. Fix first.
Big tech knows this which is why they are suddenly so politically active. They reap billions in profit by dumping the negative externalities onto society. They're extracting that value at a cost to all of us. The only hope they have to keep operating this way is to forestall regulation.
Move fast and break things indeed.
> The answer is that it just takes a lot of people.
The more of those people you hire, the higher the chance that a bad actor will slip through and push malicious things through for a fee. If the scammer has a good enough system, they'll do this one time with one person and then move on to the next one, so now you need to verify that all your verifiers are in fact perfect in their adherence to the rules. Now you need a verification system for your verification system, which will eventually need a verification system^3 for the verification system^2, ad infinitum.
This is simply not true in every domain; other industries manage it. The fact that people think tech is different doesn't mean it necessarily is. It might just mean they want to believe it's different.
At the end of the day, I can't make an ad and put it on a billboard pretending to be JPMorgan Chase. I just can't.