Early this January, while we were all looking the other way, the slop revolution unfolded right under our noses.
The warning bells, of course, had been sounding for over a year. Boomers tricked by AI-generated images of poor African children or disabled veterans. Surreal images like “Shrimp Jesus” going viral on Facebook. Fake “dancing cat” memes racking up millions of views on TikTok.
Slop was clearly already starting to make an impact, but most of our online content was still being made by humans, for humans.
That’s no longer the case.
If you go on Facebook or Instagram Reels today, AI-generated content has gone from anomaly to norm. Recurring characters like LeBron James rack up tens of millions of views when edited into absurd scenarios: milking a cow, jumping out of an airplane, kissing baby Shrek. On the surface, these videos seem funny, but the reality is far more sinister, starting with the fact that they utterly dominate the current social media landscape.
Scrolling through twenty Reels in my Explore feed just now, I counted sixteen that were AI-generated. Admittedly, I’m being served more slop than usual, since I’m researching AI memes and the algorithm thinks I want to see them. But each of those Reels had millions of engagements from real users, which suggests the numbers wouldn’t be far off for plenty of ordinary people.
This is not an accident, but instead reflects a deliberate decision by Meta to start flooding our feeds with more AI-generated content. Just as everyone was distracted by the news about the company ending its fact-checking program, executives quietly announced that they were also actively incentivizing AI accounts on Facebook and Instagram. Like every decision made by social media platforms, the ultimate motivation here was of course to make more profit—but it’s important to ask how they’re profiting.
As meme researcher Aidan Walker points out in his outline of slop capitalism, a main goal is to “crowd out actual human voices on platforms.” Every real creator replaced by an AI creator represents a reduction in how much money the platforms have to give back through influencer rewards programs.
We already know that Spotify has been stuffing its playlists with AI-generated music to avoid paying streaming revenue to artists, and that Google’s “AI Overview” feature is designed to replace actual content providers with summaries that can then incorporate advertisements. The same thing is now happening with entertainment content on social media: human influencers are losing market share to artificial ones.
Beyond immediate profit margins, social media platforms are also deliberately ushering in slop to impose cultural hegemony. Bear in mind that the industry already considers “content” the end goal of social media, rather than the messages or ideas held inside the content. To them, it would be better if there weren’t even a message in the first place: they just want to produce more and more, so that users become passive consumers, entertained through a “culture industry” of constant online spectacle.
By removing human communicators, social media platforms remove the possibility of actual messaging interfering with the production of “content.” Users are left to consume distractions devoid of meaning, surrounded by an ecosystem of bots that create the illusion of social connectivity.
A year ago, I never would’ve written that sentence, because it directly embraces a formerly ludicrous conspiracy theory known as the “dead internet theory.” Today, though, when 80% of my feed is artificial slop, and we have evidence that Meta is experimenting with AI-generated comments, the conspiracy is looking a lot more like reality. Nor are they trying to hide it. In the words of Connor Hayes, Meta’s Vice President of Generative AI,
We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do... They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.
Hayes’ chilling words outline the most concerning feature of slop capitalism: that platforms are leveraging generative AI to replace actual discourse with a simulacrum of discourse. This pseudo-discourse will never have any intellectual substance; rather, it will simply fill up space on your feed, extracting value from your attention.
When coupled with loosened content restrictions, the slop influx also serves to foment divisive ideas. The more I led my algorithm down the AI LeBron James rabbit hole, for example, the more I began to encounter a slew of incredibly racist videos: in one Reel with over 12 million views, a swarm of shirtless Black men sprint toward a KFC and eat fried chicken as the background audio chants “run, n*gger, run.”
In another video using the same audio, posted by a separate account (this one with 19 million views), the shirtless men take part in a slave rebellion and then dine on watermelon; a third video, with 12 million views, has the men running toward an NBA arena, where they stack basketballs and dine on more fried chicken.
Not only do platforms like Meta allow this kind of content, but they actively encourage it. Racist videos still drive user engagement through spectacle, which is kind of their entire business model. In the long run, it even helps the platforms to manufacture racially divisive discourse. The more polarized we are as a society, the harder it is to come together and challenge their hegemony.
Our best way of fighting back? Spend as little time on algorithmic media as possible, strengthen our social ties, and gather information from many different sources, remembering that the platforms are the real enemy.
This past Tuesday, I had a live conversation with Aidan Walker about the algorithmic gaze and the state of “content.” See our full discussion below.
Also, if you like my analyses of algorithmic media, please consider pre-ordering my book “Algospeak,” which examines the new reality of online language :)
I had to skim a few paragraphs that made me feel sick to my stomach. AI is just so dumb. I really wish more people saw and learned from Westworld and were able to just see all this for what it is - a form of control, utterly devoid of soul and meant to keep us stupid.
Are the weird “news” sites that pop up on Google also considered slop? I was a teacher and taught media literacy to my students, and I can’t believe how much just searching has changed in the few years since I left the profession.
This whole thing reminds me of Elsagate. But in 2017, everyone agreed that it was a bad thing for kids. Now that it has been normalized, and companies see the money they can make off it, there's not much regulatory pushback.