Generative AI continues to make waves, especially as social media platforms scramble to implement the technology. However, generative AI has come with some major drawbacks for social media users. Here are a few of them…
1. A Flood of AI Slop
If you’re a frequent user of platforms like Facebook, you’ve likely come across AI slop. AI slop is junk output from generative AI, including surreal artwork and nonsensical recipes. There is now even an X account dedicated to AI slop on Facebook.
Often, AI slop comes from spam pages looking to go viral. And since Facebook recommends viral content, you may end up with more nonsense in your feed than usual. Generative AI has made it easier than ever to churn out these spam posts, and real content is getting flooded out on social media platforms as a result.
2. Losing Even More Authenticity
Social media isn’t exactly known for authenticity. Influencers often show glimpses of their lives mostly aimed at projecting an image of perfection or selling a product they’ve been paid to advertise.
But AI takes this lack of authenticity even further. TikTok is testing virtual influencers that will allow businesses to advertise to users through a digitally created avatar, according to SocialMediaToday. Meanwhile, Instagram is testing a feature that lets influencers create AI bots of themselves that respond to fans in messages. The test was announced by Meta CEO Mark Zuckerberg in his Instagram broadcast channel.

This dilution of what little authenticity there is on social media isn’t limited to influencers. AI is also being used to generate user posts on networks like Reddit and X (Twitter). While chatbots capable of this have existed for some time, the launch of large language models has made it harder to tell what’s real and what’s not. Every day I use Reddit, I inevitably see users accusing others of using AI to write a post or story.
3. Social Media Missteps by AI
AI platforms from social media companies are still a work in progress, meaning they still make mistakes. Some of these mistakes spread misinformation or reduce user trust in the platform.
For example, Meta AI replied to a post in a Facebook group claiming it was a parent with a “2e” (twice-exceptional) child enrolled in a program for Gifted and Talented students, as reported by Sky News. Luckily, since Meta AI’s responses are clearly identified on Facebook, users could tell that the response wasn’t genuine. But it does call into question the reliability of AI tools like Meta AI when they insert themselves into conversations where they have no place.
Meanwhile, Grok AI (X’s AI chatbot) has been called out for producing misinformation. In one instance, it accused NBA player Klay Thompson of vandalizing houses after the AI misinterpreted the basketball slang that dubs missed shots “bricks.”
While some AI hallucinations are comical, other misinformation is more concerning and leads to real-world consequences.
4. Dead Internet Theory Seems More Plausible
Dead internet theory is the idea that the vast majority of content on the internet is being generated by bots. While this was easy to scoff at in the past, it seems more and more plausible as we are flooded with AI spam and responses on social media platforms.
The fact that social media companies are also integrating bots as users makes this idea more realistic than before. It has even resulted in the launch of Butterflies AI, a social media platform where some of the users are actually just AI bots. While bots can be useful on social media, having them pose as fellow users just doesn’t appeal to me.
In terms of the daily user experience, generative AI has made it easier for spam bots to mimic real users. Recently I put out a post on X to commission a piece of art and my inbox was flooded with replies from bots. Telling the real users from the bots is becoming more difficult.
5. People Need to Find Ways to Protect Their Content From AI Scraping
Users are finding a variety of ways to try to protect their content from being used in AI training datasets. But it’s not always as simple as opting out: if your posts are public, chances are they’ve already been used to train AI.
As a result, users are trying workarounds to protect their data, including switching to private profiles and data poisoning. While using Nightshade to poison artwork doesn’t affect how users see the images, other forms of data poisoning may affect the content we see on social media.
If more users switch to private profiles on more public social networks, it will become harder to discover users and content that you like. And as artists move away from platforms that provide training data for generative AI, people who simply want to admire their work will miss out unless they move to niche platforms.
While there are some uses for generative AI on these platforms, some people would argue that we don’t really need generative AI on social media. But regardless of what we think, generative AI has already fundamentally changed social media in a number of ways.