Dead Internet Theory
(kudmitry.com)
406 points by skwee357 17 hours ago
My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and they got outraged by it. I watched part of the video, and even though it felt off, I couldn't immediately tell for sure whether it was fake. Nevertheless, I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and it said the channel uses AI for "some" of its videos to recreate "real" events. I really doubt that; it all looks fake. I'm just worried about how much divisiveness this kind of stuff will create, all so someone can profit off of YouTube ads. It's sad.
If there are ad incentives, assume all content is fake by default.
On the actual open, decentralized internet, which still exists (Mastodon, IRC, Matrix...), bots are rare.
That's not because it's decentralized or open; it's because it doesn't matter. If it were larger or more important, it would get run over by bots in weeks.
Any platform that wants to resist bots needs to:
- tie personas to real or expensive identities
- force people to add an AI flag to AI content
- let readers filter for content not marked as AI
- be absolutely ruthless in permabanning anyone who posts AI content unmarked: one strike and you are dead forever
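A minimal sketch of what that policy could look like, purely to make the rules concrete. Every name here (User, Post, visible_posts, handle_report) is hypothetical, not any real platform's API:

    # Hypothetical moderation model for the four rules above.
    from dataclasses import dataclass

    @dataclass
    class User:
        verified_identity: bool   # rule 1: tied to a real/expensive identity
        banned: bool = False

    @dataclass
    class Post:
        author: User
        body: str
        marked_ai: bool           # rule 2: author-declared AI flag

    def visible_posts(posts, hide_ai=True):
        """Rule 3: reader-side filter that drops AI-marked posts on request,
        and never shows posts from banned authors."""
        return [p for p in posts
                if not p.author.banned and not (hide_ai and p.marked_ai)]

    def handle_report(post, confirmed_unmarked_ai):
        """Rule 4: one confirmed strike for unmarked AI content = permaban."""
        if confirmed_unmarked_ai and not post.marked_ai:
            post.author.banned = True

The hard part, of course, is computing `confirmed_unmarked_ai` reliably, which is exactly the weakness the next comment points at.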
The issue then becomes that marking someone as "posts unmarked AI content" becomes a weapon. No idea how to handle that.
Just twenty minutes ago I got a panic call: someone was getting dozens of messages that their virus scanner wasn't working and they had hundreds of viruses. Blocking Google Chrome from sending messages to the Windows notification bar brought everything on the computer back to normal.
Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.
Maybe I should set up a Pi-hole business...
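For anyone hitting the same scam popups: a minimal sketch of that Chrome fix done programmatically, assuming Chrome's documented DefaultNotificationsSetting enterprise policy and admin rights on Windows. The registry path and values are Chrome's; the script itself is just an illustration, not tech-support advice:

    # Block all Chrome site-notification prompts machine-wide via the
    # documented DefaultNotificationsSetting policy (1=allow, 2=block, 3=ask).
    # Windows-only; requires admin rights.
    import winreg

    key_path = r"SOFTWARE\Policies\Google\Chrome"
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        winreg.SetValueEx(key, "DefaultNotificationsSetting",
                          0, winreg.REG_DWORD, 2)

The same thing can be done per-user through chrome://settings/content/notifications, which is what the fix above amounted to.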
I’m spending way too much time on the RealOrAI subreddits these days. I think it scares me because I get so many wrong, so I keep watching more, hoping to improve my detection skills. I may have to accept that this is just the new reality - never quite knowing the truth.
My favorite theory about those subreddits is that it's the AI companies getting free labeling from (supposed) authentic humans so they can figure out how to best tweak their models to fool more and more people.
Those subreddits label content wrong all the time. Some of the top commenters are trolling (I've seen one cooking video where the most-voted comment is "AI, the sauce stops when it hits the plate"... as thick sauce should do.)
You're training yourself with a very unreliable source of truth.
> Those subreddits label content wrong all the time.
Intentionally, if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.
> You're training yourself with a very unreliable source of truth.
I don't just look at the real-or-AI verdict or accept every consensus blindly. I read the arguments.
If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?
Alternatively, if I watch a video and think it's AI, but a commenter points to a source like YT where the video was posted 5 years ago, or to multiple similar videos/news articles about the weird subject of the video, how is that unreliable?
Before photography, we knew something was truthful because someone trustworthy vouched for it.
Now that photos and videos can be faked, we'll have to go back to the older system.
It was always easy to fake photos too: just stage the scene, or selectively frame what you want. There is no piece of media you can inherently trust.
The construction workers having lunch on the girder in that famous photo were in fact about four feet above a safety platform; it's a masterpiece of framing and cropping. (Ironically the photographer was standing on a girder out over a hundred stories of nothing).
Ah yes the good old days of witch trials and pogroms.
I am no big fan of AI but misinformation is a tale as old as time.
"I may have to accept that this is just the new reality - never quite knowing the truth."
Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)
I’m really hoping that we’re about to see an explosion in critical thinking and skepticism as a response to generative AI.
Any day now… right?
I show my young daughter this stuff and try to model healthy skepticism. Critical-thinking YouTube content like Corridor Crew's paranormal UFO/bigfoot/ghosts/etc. series is great too. Peer pressure might be the deciding factor in what she ultimately chooses to believe, though.
One can hope!
Yeah, one can. But then I see people just accepting the weak Google Search AI summary as plain fact, and my hope fades away.
I think the broader response and re-evaluation is going to take a lot longer. Children of today are growing up in an obviously hostile information environment whereas older folk are trying to re-calibrate in an environment that's changing faster than they are.
If the next generation can weather the slop storm, they may have a chance to re-establish new forms of authentic communication, though probably on a completely different scale and in different forms to the Web and current social media platforms.
What if AI is running RealOrAI to trick us into never quite knowing the truth?
As they say, the demand for racism far outstrips the supply. It's hard to spend all day outraged if you rely on reality to supply enough fodder.
This is not the right thing to take away from this. This isn't about one group of people wanting to be angry. It's about creating engagement (for corporations) and creating division in general (for entities intent on harming liberal societies).
In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.
Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
> Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
It's not built to make people angry per se - it's built to optimise for revenue - and it just so happens that the content which generates the most revenue is content that makes people angry.
People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.
In my view, if the platforms can't tackle this problem, then the platforms should be shut down. Promoting this sort of material should be illegal, and it's not an excuse to say "our business model won't work if we are made responsible for the things we do."
I.e., while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side of publishing - which is being responsible for your actions. If you haven't solved both sides, you don't have a viable business model, in my view.
> In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).
I agree with grandparent and think you have cause and effect backwards: people really do want to be outraged, so Facebook and the like provide rage bait - sometimes through algos tuning themselves to that need, sometimes deliberately.
But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in - only those locked into the Messenger ecosystem.
I don't use Facebook but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it, mark it "uninterested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (nevermind), and when AI makes an appearance it's the narrative kind[1] which I admit I like or old jokes recycled with AI.
The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.
> people really do want to be outraged
No, they do not. Nobody[1] wants to be angry. Nobody wakes up in the morning and thinks to themselves, "today is going to be a good day because I'm going to be angry."
But given the correct input, everyone feels that they must be angry, that it is morally required to be angry. And this anger then requires them to seek out further information about the thing that made them angry. Not because they desire to be angry, but because they feel that there is something happening in the world that is wrong and that they must fight.
[1]: for approximate values of "nobody"
If you think for a bit on what you just wrote, I’m pretty sure you’re agreeing with what they wrote.
You’re literally saying why people want to be angry.
I suppose the subtlety is that people want to be angry if (and only if) reality demands it.
My uneducated feeling is that, in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed.
But that doesn't mean that people want to be angry in general, in the sense that if there's nothing in reality to be angry about then that's even better. But if someone is presented with something to be angry about, then that ship has sailed so the typical reaction is to feel the need to engage.
>in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed
Yes, I think this is exactly it. A reaction that may be reasonable in a personal, real-world context can become extremely problematic in a highly connected context.
It's both that, as an individual, you can be inundated with things you feel a moral obligation to react to, and that, on the other side of the equation, if you say something stupid online, you can suddenly have thousands of people attacking you for it.
Every single action seems reasonable, or even necessary, to each individual person, but because everything is scaled up by all the connections, things immediately escalate.
If people are bored, they’ll definitely seek out things that make them less bored. It’s hard to be less bored than when you’re angry.
There's a difference between wanting to be angry and feeling that anger is the correct response to an outside stimulus.
I don't wake up thinking "today I want to be angry", but if I go outside and see somebody kicking a cat, I feel that anger is the correct response.
The problem is that social media is a cat-kicking machine that drags people into a vicious circle of anger-inducing stimuli. If people think that every day people are kicking cats on the Internet, they feel that they need to do something to stop the cat-kicking; given their agency, that "something" is usually angry responses and attacks, which feeds the machine.
Again, they do not do that because they want to be angry; most people would rather be happy than angry. They do it because they feel that cats are being kicked, and anger is the required moral response.
And if you seek out (and push ‘give me more’ buttons on) cat kicking videos?
At some point, I think it’s important to recognize the difference between revealed preferences and stated preferences. Social media seems adept at exposing revealed preferences.
If people seek out the thing that makes them angry, how can we not say that they want to be angry? Regardless of what words they use.
And for example, I never heard a big Fox News, Rush Limbaugh, or Alex Jones fan say they wanted to be angry or paranoid (to be fair, this was pre-Trump and a while ago), yet every single one of them I saw got angry and paranoid after watching, if you paid any attention at all.
>If people seek out the thing that makes them angry, how can we not say that they want to be angry?
Because their purpose in seeking it out is not to get angry, it's to stop something from happening that they perceive as harmful.
I doubt most people watch Alex Jones because they love being angry. They watch him because they believe a global cabal of evildoers is attacking them. Anger is the logical consequence, not the desired outcome. The desired outcome is that the perceived problem is solved, i.e. that people stop kicking cats.
You may be vastly overestimating average media competence. This is one of those things where I'm glad my relatives are so timid about the digital world.
I hadn't heard that saying.
Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.
Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.
In various forms, with various levels of harm, and with various levels of evidence available.
(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)
Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.
(Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )
Did you just justify generating racist videos as a good thing?
Is a video documenting racist behavior a racist or an anti-racist video? Is a faked video documenting racist behavior (that never happened) a racist or an anti-racist video? Is the act of faking a video documenting racist behavior (that never happened) racist or anti-racist behavior?
A video showing racist behavior is racist and anti-racist at the same time. A racist will be happy watching it, and an anti-racist will forward it to further their anti-racist message.
Faking a racist video of something that never happened is, first of all, faking. Second, it's the same: racist and anti-racist at the same time. Third, it's falsifying the prevalence of occurrence.
If you add a disclaimer to the video - "this video has been AI-generated, but it shows events that happen all across the US daily" - then there's no problem. Nobody is being lied to about anything. The video carries the message; it's not faking anything. But when you pass off a fake video as a real occurrence, you're lying, and it's as simple as that.
Can a lie be told in good faith? I'm afraid that not even philosophy can answer that question. But it's really telling that leftists are sure about the answer!
I don't think so. I was trying to respond to a comment in a way that was diplomatic and constructive. I can see that came out unclear.
Think they did the exact opposite
> Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered
Well yes, that's what he wrote, but that's like saying: stealing can be done for a variety of reasons, including by someone who intends the theft to be discovered? Killing can be done for a variety of reasons, including by someone who intends the killing to be discovered?
I read it as "producing racist videos can sometimes be used in good faith"?
They're saying one example of a reason someone could fake a video is so it would get found out and discredit the position it showed. I read it as them saying that producing the fake video of a cop being racist could have been done to discredit the idea of cops being racist.
There are significant differences between how the information world and the physical world operate.
Creating all kinds of meta-levels of falsity is a real thing, with multiple lines of objective (if nefarious) motivation, in the information arena.
But even physical crimes can have meta-information purposes. Putin, for instance, is fond of instigating crimes in a way that ensures his fingerprints will inevitably be found, because that is an effective form of intimidation and power projection.
I think they’re just saying we should interpret this video in a way that’s consistent with known historical facts. On one hand, it’s not depicting events that are strictly untrue, so we shouldn’t discredit it. On the other hand, since the video itself is literally fake, when we discredit it we shouldn’t accidentally also discredit the events it’s depicting.
Are you saying that if there is one instance of a true event, then fake videos done in a similar way to this true event are rational and needed?
The insinuation that racism in the US is not systemic reeks of ignorance
Edit: please, prove your illiteracy and lack of critical thinking skills in the comments below