Posts Tagged ‘Dead Internet Theory’

I Have Heard The Bots Singing, Each To Each (Reprise)

Wednesday, April 30th, 2025

While no one was looking we passed another one of those “grim milestones” recently: Bot traffic now exceeds human traffic on the Internet.

AI is helping internet bot herders with greater scale, lower costs, and more sophisticated evasion techniques.

Bots on the internet now surpass human activity, with 51% of all internet traffic being automated (bot) traffic. Thirty-seven percent of this is malicious (bad bots), while only 14% are good bots.

Much of the current expansion is fueled by criminal use of AI, which is likely to increase.

Within the bad bots there has been a noticeable growth in simple, but high-volume bot attacks. This again shows the influence of AI, allowing less sophisticated actors to generate new bots and use AI power to launch them. This follows the common trajectory of criminal use of AI: simple at first, as the actors learn how to use their new capability, followed by more sophisticated use as their AI skills evolve. This shows the likely future of the bot threat: advanced bots being produced at the speed and delivery of simple bots. The bad bot threat will likely increase.

Shades of Dead Internet Theory.

I know one (possible) indicator of this increasing bot corruption: Google is now officially useless as a search engine. I switched over to DuckDuckGo as my main search engine quite some time ago, but regularly had to use Google to find things on my own blog. No longer. DuckDuckGo seems to have caught up at the same time that Google started flat out ignoring the very first term in your search string. Examples:

At least this time Google is bringing up relevant information, but why is it ignoring the very first word in the query? Isn’t that bad design? Well, not if your intent is to cram every possible paid ad at the top of your list, or if you’re letting mysterious AI algorithms choose what to present. Google seems to be suffering from both issues.

Speaking of Google letting AI distort results, last year they struck a deal to train their AI models on Reddit posts. Now fast-forward to this week, when redditors discovered that researchers had been testing AI bots that posed as humans to see whether they could fool real users.

Researchers from the University of Zurich have been secretly using the site for an AI-powered experiment in persuasion. Members of r/ChangeMyView, a subreddit that exists to invite alternative perspectives on issues, were recently informed that the experiment had been conducted without the knowledge of moderators.

Snip.

It’s being claimed that more than 1700 comments were posted using a variety of LLMs, including posts mimicking survivors of sexual assault, including rape, posing as a trauma counsellor specialising in abuse, and more. Remarkably, the researchers sidestepped the safeguarding measures of the LLMs by informing the models that Reddit users “have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns”.

And somehow people were upset at that.

Imagine.

So now we have Google training bots on posts created by AI bots to persuade humans with false facts and perspectives.

What could possibly go wrong?

It’s yet more of that fine quality service we’ve come to expect from Google.

Now you know why I had to use the same title again.

I Have Heard The Bots Singing, Each To Each

Saturday, January 4th, 2025

If you look at some of the key ills plaguing America over the last ten years, Facebook has certainly had a hand in perpetuating some of them. Wokeness, censorship, echo chambers, commodification of user information and spam are just some of the evils Facebook (AKA Meta) has helped inflict on the citizenry.

Now Facebook is inflicting AI-generated profiles on its users.

  • Kneon: “Facebook is experimenting with AI generated profiles on Facebook and Instagram.”
  • K: “it’s going to be pretty scary because I mean you already don’t know who you’re dealing with on the internet, and I’ve heard some pretty convincing or seen some pretty convincing video. They actually had uh one of the uh AI video generators doing like an influencer video for Tik Tok and you couldn’t tell it wasn’t a real person.” If you’re doing influencer videos on Tik-Tok, I’m already halfway convinced you’re not a real person…
  • K: “It’s going to make it a lot harder to tell who you’re dealing with on the internet.”
  • Is one of the profiles woke? Of course it is. Geeky Sparkles: “‘Proud black queer mama of two, truth teller.’ It’s a truth teller, but it’s an AI, so it’s already not real but it’s going to tell you the truth. ‘Your realest source for life’s ups and downs.’ Uh, but it’s not real, it’s an AI who doesn’t know about life’s ups and downs, but you know it’s a black queer, it identifies as ‘a black queer mama.'”

  • K: “Their parent company Meta is rolling out a wide array of AI products, including one that helps users create AI characters on Instagram and Facebook.”
  • GS: “Why? Why do you need an AI character?” Why indeed?
  • K: “From someone who worked in marketing for years, and as a business owner, we have had some interesting things happen with Facebook advertising, and I don’t think all your money is going to ads being seen by real people.”
  • GS: “If advertisers are upset, and bots are a problem and bots and fake accounts are a problem because advertisers are getting scammed, they feel like they’re getting scammed and it’s an issue and we have to stop bots, why would flooding the market with generative AI characters and make your own make sense?”
  • K: “Meta hopes to attract a younger audience in a face off with competitors like Tik Tok and Snapchat.” First, why would Meta think younger users are all like “Hey, you know what I love? Fake profiles!” Second, if you’re taking your cues from Tik-Tok and Snapchat, you’ve already lost.
  • GS: “It’s hard to be a Tik tocker and make content all the time, or it’s hard to have enough content to keep going, but these AI can generate it indefinitely, so we’ll just tell people on Tik Tok to buy your shit.”
  • Maybe the goal is to create AI influencers who pimp one company’s product and slam competitors.
  • Or create fake women on OnlyFans to make a mint.
  • In fact, there are already AI influencers earning money.
  • GS: “They’re talking about the Dead Internet Theory. And Dead Internet Theory, for those who don’t know is [basically] more and more accounts and activity on the internet are done by computers and fake people than real people.”
  • K: “Facebook feels dead now. Like end-stage MySpace.”
  • In the wake of these revelations, there are conflicting accounts in different MSM sources as to whether Facebook has shut these accounts down or not.

    Maybe different MSM writers are talking to different AI bots pretending to be “inside sources.” Or maybe the MSM “writers” are bots as well.

    ¯\_(ツ)_/¯

    I’m sure a lot of this is advertiser driven, but I wonder if some of the investment boom in AI is coming from lefty executives watching their systemic preference falsification collapse due to events like Trump’s elections (all of them) and Brexit, and thinking to themselves “Shit, we need to fool the rubes even harder” by using social justice bots to give The Narrative the illusion of popularity.

    Fortunately for us, I think it’s too late for them to pull it off. Maybe Bluesky could finally surpass Twitter/X in user base, if we ignore that 98% of them are bots…

    But still, caution seems to be in order. More than ever, everything you see online should be treated with a degree of skepticism, even if you agree with it.

    Especially if you agree with it…