Discussion about this post

Matěj Volf

Hi Adam, thank you for this short essay. I have to say, though, I think this is the first time I actually disagree with your conclusions.

You argue that algorithmic measurement only works as long as there is something "objective" in the content, and that platforms are blind to the emotional charge of words like "clavicular" (a word I'm honestly seeing for the first time, though I've had the misfortune of meeting Ballerina Cappuccina and Ohio rizz), making it a blind spot in their commodification of everything, even though they can still serve and monetize it.

My perception of the algorithms has been that, for a long time already, nothing is measured at the objective level anyway; all that matters is screen time and engagement. And these are (or at least I thought so) understood to be driven much more by emotions than by the tangible value of the content. That's why we see polarising content in our feeds. It's also a well-known "feature" of TikTok that it can guess a user's current mood after just a few scrolls, which, I suspect, has nothing to do with objective meaning. Thus, a user feeling "inundated" with a brainrot word is exactly the kind of thing the algorithm is trained to extract from our interaction signals.

This makes me conclude the opposite of what you say. Getting framemogged by clavicularity is not a radical act - it's exactly what the algorithm is supposed to do, and by stripping out any curiosity and intellectual message from the content, we're just losing the last bits of human touch.

However, I would be surprised by such an omission in your work, which deals with social media algorithms all the time, so I suppose you must have considered what I'm saying - and I would be interested to hear why you came to your conclusion anyway.

By the way, a fun (or distressing?) place where I really started seeing this "emotion > content" dynamic for myself is those "friends' likes" bubbles on Instagram. I noticed they really reflect what terms I'm on with different people and how I feel about them at the moment. And this happens even though no "objective" signals about any of this exist in the online world - only my engagement with content where I see their likes, which has increasingly little to do with the subject of the reels.

Tim C. K. Ice

I love this post. I’ve been seeing a lot of (nonsense) discussions about AI essentially revealing the ability of language to generate itself from nothing but itself. These people argue that LLMs are proof that humans do essentially nothing more than unconscious token generation when speaking about things. I think you hit the nail on the head here by pointing to semantic satiation as evidence that language itself contains no meaning outside of what we read into it. AI is only able to consistently produce text we interpret as meaningful because it was, and is, trained on meaningful and CURRENT data. If we downloaded all the current LLMs right now, locked them away for a thousand-plus years, and then pulled them out and tried to ask them a question, our use of language would have changed enough to render our question invalid to them, their response nonsensical to us, or likely both. The idea that language is literally self-propagating outside of a human context is baffling to me, but some people take it seriously. I’ll stop ranting. Great piece!

(PS - wouldn’t calling it “semantic cessation” make more sense than “satiation”? I said it that way for the longest time, and when I learned the actual phrase I was only more confused.)

