It was reported that 67% of all internet traffic is bots - and that was back in 2021. Thanks to AI, that figure could apparently hit 90% by the end of the decade. GenAI expert Nina Schick goes as far as estimating that 90% of web content itself will be generated by AI by 2025 - that's just next year!
AI content isn't all bad though. As highlighted in Google's guidelines, it can be really helpful – automated content has been used to benefit people for a long time:
It's important to recognize that not all use of automation, including AI generation, is spam. [...] AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.
However, when AI content isn't fact-checked, or is used shadily, it can quickly clog search engines and websites with mindless content – which is exactly what spammers aim to do.
If the spammers’ aim is simply to generate traffic, then fake news articles could be perfect for this...
This is exactly what I've experienced running Prototypr. In the last few months alone, over 20,000 fake profiles were created and thousands of articles submitted, all with the goal of generating traffic and gaming search engine algorithms. The bots were so relentless that I had to make the entire website invite-only for contributors.
Whilst it's true that there are many reasons to create fake content (e.g. political motives, or spreading disinformation), this post looks at it purely from the perspective of 'generative traffic'. We'll look into:
How AI is being used to game algorithms and trick search engines, bumping up views with fodder content
The underworld economy of spammers using AI, and an overview of why they use it
What we can do to keep the quality of our sites high in the midst of AI spam
AI Noise is Getting Louder!
Spam profiles are not a new problem - we've seen them for years all over Twitter, Dribbble, Facebook... you name it. In fact, back when he was acquiring Twitter, big Elon was skeptical of the number of bots he was buying:
Mr. Musk claimed that Twitter balked at handing over information about spam bots, also known as fake accounts, on the platform.
He repeatedly said he did not believe the company’s public statements that roughly 5 percent of its active users are bots. Twitter intentionally misled the public, he said, and obstructed his efforts to get more information about how it accounts for the figures.
It's clear bots have always been there, but the 'bad actor' bots are becoming much louder, as evidenced by the growing volume of 'AI noise': we're wading through fake interactions, spam comments, and even bots talking to bots.
Fake profiles and content are exponentially easier to generate with AI, and they're not only used to game social media algorithms, but search engine crawlers too.
The internet is filling up with "zombie content" designed to game algorithms and scam humans.
It's becoming a place where bots talk to bots, and search engines crawl a lonely expanse of pages written by artificial intelligence (AI).
The Bot Economy
Where are the bots coming from?
It turns out there's a whole underworld economy dedicated to the sale and purchase of bots, and its sheer scale makes them challenging for platforms to combat. Here's one man exposing the Product Hunt bot economy in particular (one of many niche bot economies out there):
[Tweet embed, February 14, 2023]
John Rush is the founder of MarsX, and often launches new products such as Unicorn Platform on Product Hunt (a tech community). In his thread on X, he shows how he was targeted by shady bot sellers, exposing how widely their dodgy services are used.
Product Hunt has apparently become 'pay to win':
The Currency of the Web
Just like buying and selling votes, the Bot Economy is also well-versed in the currency value of a web link - and this could be detrimental to the quality of the Web. Now that it's super cheap to 'produce bullshit' content (as Erik Hoel puts it), we're in for all sorts of SEO shenanigans as 'mounds of clickbait garbage' are used to generate ad revenue.
Now that generative AI has dropped the cost of producing bullshit to near zero, we see clearly the future of the internet: a garbage dump
The Botlink Economy
This is what I got hit with on Prototypr – bots and spammers were deployed specifically to win 'backlinks' from my site to theirs. As outlined in Mozilla's web principles, links are one of the core pillars of the Web - they're seen as 'the currency of the Web' and a symbol of trust.
Search engines have always used links to determine a website's position on a results page (or influence its PageRank score), with the intention of directing users to the most useful content.
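To see why an inbound link is worth real money to a spammer, here's a toy sketch of the idea behind PageRank. It's a deliberately simplified model - the graph, damping factor, and site names below are illustrative assumptions, not how any real search engine works today:

```typescript
// Toy PageRank sketch: each page's score is redistributed along its
// outgoing links, so every inbound link acts like a small endorsement.
type LinkGraph = Record<string, string[]>; // page -> pages it links to

function pageRank(graph: LinkGraph, iterations = 20, damping = 0.85) {
  const pages = Object.keys(graph);
  const n = pages.length;

  // Start with rank spread evenly across all pages.
  let ranks: Record<string, number> = {};
  for (const p of pages) ranks[p] = 1 / n;

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    for (const p of pages) next[p] = (1 - damping) / n;

    for (const page of pages) {
      const outLinks = graph[page];
      for (const target of outLinks) {
        // Each outgoing link passes a share of this page's rank.
        next[target] += (damping * ranks[page]) / outLinks.length;
      }
    }
    ranks = next;
  }
  return ranks;
}

console.log(pageRank({
  "trusted-blog": ["useful-article", "spam-site"],
  "useful-article": ["trusted-blog"],
  "spam-site": ["spam-site"], // self-link to hoard rank
}));
```

In this toy model, the spam site's score rises purely because a trusted page links to it - which is exactly the 'currency' spammers are chasing.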
Therefore, graduates from the academy of spam do whatever it takes to earn that precious 'currency of the web'. Each outgoing link that directs visitors to their site is effectively a recommendation from you to them. They'll do absolutely anything to get it, except write a decent article.
ChatGPT and its users are currently generating more text than has ever appeared in every physical book ever written, every two weeks.
- Kyle Hill, Generative AI: We Aren't Ready
Links worked well until the commercialisation of the web gave them economic value. Now, with bullshit machines, we'll have AI-generated articles pointing to other generated content - SEO farms on AI steroids.
Bot Juice
It's actually been argued that some platforms don't prioritise fixing bot issues because the extra activity helps boost platform usage figures. When it comes to selling ads, bot juice inflates numbers and activity, making platforms seem more attractive to advertisers. That's something touched on in the earlier section, and it's why Elon wanted to see the number of bot users vs real active users.
Another example of bot juice is when the developers of Call of Duty were accused of sneaking AI bots into multiplayer games to boost activity:
Advertisers aren't blind to it though - this was a good read on the topic:
How to stop spam bots
So what can platforms do to protect themselves from bot activity and maintain high quality standards?
⭐ Verified/Paid Members
Prioritise verified members over the rest. If registration is open to anyone, surfacing verified members' content above everything else keeps it visible over any spam. For example, the first benefit offered by Twitter was 'Rocket to the top of replies'. They also added a verified tab to notifications, which can reduce notification spam.
But 'does a dollar prove that you are Not A Bot'?
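As a rough sketch of what 'rocket to the top of replies' amounts to (the field names and sorting rule here are my own assumptions, not Twitter's actual ranking system):

```typescript
// Hypothetical sketch: float verified members' replies above unverified ones,
// then sort by engagement within each group.
interface Reply {
  author: string;
  verified: boolean;
  likes: number;
}

function rankReplies(replies: Reply[]): Reply[] {
  return [...replies].sort((a, b) => {
    // Verified authors rise above everything else before any other signal.
    if (a.verified !== b.verified) return a.verified ? -1 : 1;
    return b.likes - a.likes;
  });
}

const ranked = rankReplies([
  { author: "bot_4521", verified: false, likes: 40 },
  { author: "real_person", verified: true, likes: 3 },
]);
// => real_person first, despite fewer likes
```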
✅ Curate User Generated Content
Similar to prioritising verified members, you can prioritise user-submitted content that you trust. For example, on Product Hunt, posts start off as 'unfeatured', and the featured section is curated each day so that only quality links are promoted; the rest stay under the 'all' section.
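Product Hunt's exact system isn't public, but the pattern itself is simple to sketch - everything defaults to unfeatured, and only an explicit curator decision promotes it (the names and types below are hypothetical):

```typescript
// Hypothetical sketch of curated UGC: submissions default to 'unfeatured'
// and only appear in the featured feed after a human curator promotes them.
type PostStatus = "unfeatured" | "featured";

interface Post {
  title: string;
  url: string;
  status: PostStatus;
}

function submitPost(title: string, url: string): Post {
  // Anyone can submit, but nothing is promoted automatically.
  return { title, url, status: "unfeatured" };
}

function featurePost(post: Post, curatorApproved: boolean): Post {
  // Promotion requires an explicit human decision, not just an algorithm.
  return curatorApproved ? { ...post, status: "featured" } : post;
}

const featuredFeed = (posts: Post[]) =>
  posts.filter((p) => p.status === "featured"); // the 'all' tab shows everything
```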
💌 Invite Only
Requiring invite codes to register, and giving trusted members codes to share, can help curate a safe community (there's a rough sketch of this pattern after the examples below). Here's an example of how IndieHackers had an invite-only period, then opened up to everyone once the spam problem died down (see the first comment on the post):
Another new community is indiemaker.space - they're invite-only too, and free from spam:
A similar one is WIP.co:
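The mechanism behind all three is roughly the same. Here's a minimal sketch of single-use invite codes - the storage, code format, and function names are assumptions for illustration, not how any of these sites actually implement it:

```typescript
// Hypothetical sketch of invite-only registration: trusted members mint
// single-use codes, and registration fails without a valid, unused one.
import { randomBytes } from "node:crypto";

const invites = new Map<string, { issuedBy: string; used: boolean }>();

function issueInvite(trustedMember: string): string {
  const code = randomBytes(8).toString("hex");
  invites.set(code, { issuedBy: trustedMember, used: false });
  return code;
}

function register(username: string, code: string): boolean {
  const invite = invites.get(code);
  if (!invite || invite.used) return false; // no valid invite, no account
  invite.used = true; // single-use: one code can't seed thousands of bots
  // ...create the account, and record who vouched for it (invite.issuedBy)
  return true;
}

const code = issueInvite("trusted_member");
console.log(register("new_user", code)); // true
console.log(register("bot_account", code)); // false - code already used
```

The nice property is accountability: every account traces back to the member who vouched for it, so a spam wave points straight at the leaked code.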
Can AI stop AI?
Overall, most spam tactics are the same as ever - they existed long before AI content tools became available. With AI, though, the spam economy is thriving.
Have you had any bot issues?
Maybe AI can help to stop AI?
[Tweet embed, February 14, 2023]
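One direction is to score every submission before it goes live. This is purely a sketch - the heuristics below are crude stand-ins for where a trained spam classifier (AI fighting AI) would sit:

```typescript
// Hypothetical sketch: score submissions for spam signals before publishing.
// A trained classifier could replace spamScore() entirely.
function spamScore(text: string): number {
  let score = 0;
  const links = text.match(/https?:\/\/\S+/g) ?? [];
  const words = text.split(/\s+/).filter(Boolean);

  // Link-stuffing: backlink farmers pack posts with outbound links.
  if (words.length > 0 && links.length / words.length > 0.1) score += 0.5;
  // Thin content: very short posts wrapped around a link.
  if (words.length < 50 && links.length > 0) score += 0.3;
  // Keyword repetition: the same phrases hammered for SEO.
  const unique = new Set(words.map((w) => w.toLowerCase()));
  if (words.length > 20 && unique.size / words.length < 0.4) score += 0.2;

  return Math.min(score, 1);
}

// Anything above a threshold goes to a review queue instead of publishing.
const needsReview = (text: string) => spamScore(text) > 0.5;
```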
To finish, here is an awesome video on this topic by Kyle Hill: