brainrot, buddhism, and radical meaning
I’m a child in kindergarten, repeating the word “eraser” over and over again. Eraser eraser eraser eraser eraser. The word sounds so silly! Eraser eraser eraser eraser. I’m forgetting what it meant in the first place.
I’m sitting in a Buddhist temple, listening to monks chant the Heart Sutra. Gate gate pāragate pārasaṃgate bodhi svāhā. Gate gate pāragate pārasaṃgate bodhi svāhā. Gate gate pāragate pārasaṃgate bodhi svāhā. The words technically have a meaning, but I find myself carried away by the rhythm instead. The mantra washes over me, connecting me to the present moment.
I’m scrolling on Twitter, seeing the same words show up in every post. Clavicular. Jestermaxxing. Framemogging. Clavicular. Jestermaxxing. Framemogging. Clavicular. Jestermaxxing. Framemogging. These terms also have a definition, but in practice they’re only funny because they’re funny.
Incel brainrot might not be the path to enlightenment, but there is an important connection between these examples. Any time we repeat a word too much, we become desensitized to its meaning. This phenomenon, called semantic satiation, causes us to attend to form over content. All that matters is how we experience an utterance.
As dictionary definitions dissolve, we make meaning for ourselves. The word “eraser” becomes more about how the consonants feel in my mouth than about the object it names. The Heart Sutra is a ritual inducing a meditative mindset. “Clavicular” is an ironic critique of the social media ecosystem.
The funny thing about algorithms quantifying and categorizing every aspect of society is that they can only do this on a legible, “objective” level. Measurements only work when we agree there’s something to measure, like the shared definition of a word.
Semantic satiation, meanwhile, is subjective. There’s no way to assign value to your individual sensation. Nobody can pinpoint whether you experience a word as beautiful or funny or sacred. “Brainrot,” as an aesthetic of nonsensical repetition, therefore subverts the algorithm. It creates an emotional meta-meaning that will always be absent from a vector space.
That which cannot be quantified cannot be commodified. Language is only profitable when we believe that meaning can be “captured” into words. A social media platform might be able to make money off the keyword “Clavicular,” but it is blind to the raw feeling of being inundated with the term. And yet there’s still a message happening on that embodied level.
If anything, it’s more authentic to feel meaning as a subjective, emotional thing. The kind of “objective meaning” we identify through definitions and embeddings actually emerges from our all-encompassing sensory context, expressed collectively through our interactions over time.
The Heart Sutra teaches that “form is emptiness, emptiness is form.” There is no fixed interpretation of language, but it is precisely in its unfixedness that language reveals its meaning. The beauty of semantic satiation is that it destroys the “containers” of denotation. Instead of using words to connect to something else, we connect to the words themselves—revealing that it was all form, and none of it.
What I’m consuming
This Harper’s article on “agentic” Silicon Valley culture by Sam Kriss
Harry Miller’s tutorials on walking, standing, and making shit happen
Tlön, Uqbar, Orbis Tertius by Jorge Luis Borges
Where I’m speaking
Austin, TX - March 13 presentation for SXSW
Columbus, OH - March 27 lecture for the OSU Linguistics Department


Hi Adam, thank you for this short essay. I have to say, though, I think this is the first time I actually disagree with your conclusions.
You argue that algorithmic measurement only works as long as there is something "objective" in the content, and that platforms are blind to the emotional message of words like "clavicular" (a word I'm honestly seeing for the first time, though I've had the misfortune of meeting Ballerina Cappuccina and Ohio rizz). Even though they can still serve and monetize such words, this makes them a blind spot in the platforms' commodification of everything.
My perception of the algorithms has been that, for a long time already, nothing is measured on the objective level anyway; all that matters is screen time and engagement. And these are (at least I thought) understood to be driven much more by emotions than by the tangible value of the content. That's why we see polarising content on our feeds. It's also a well-known "feature" of TikTok that it can guess a user's current mood after just a few scrolls, which, I suspect, has nothing to do with objective meaning. Thus, a user feeling "inundated" with a brainrot word is exactly the kind of signal the algorithm is trained to extract from our interactions.
This leads me to the opposite conclusion. Getting framemogged by clavicularity is not a radical act - it's exactly what the algorithm is designed to do, and we're losing the last bits of human touch by stripping any curiosity and intellectual message from the content.
That said, I would be surprised by such an omission in your work, which deals with social media algorithms all the time, so I suppose you must have considered this already - I'd be interested to hear how you came to your conclusion.
By the way, a fun (or distressing?) place where I really started seeing this "emotion > content" dynamic for myself is the "friends' likes" bubbles on Instagram. I noticed they really reflect what terms I'm on with different people and how I feel about them at the moment. And this happens even though no "objective" signals about this exist in the online world - only my engagement with content where I see their likes, which has increasingly little to do with the subject of the reels.
I love this post. I’ve been seeing a lot of (nonsense) discussions about AI essentially revealing the ability of language to generate itself from nothing but itself. These people argue that LLMs are proof humans do nothing more than unconscious token generation when speaking about things. I think you hit the nail on the head here by pointing to semantic satiation as evidence that language itself contains no meaning outside of what we read into it. AI is only able to consistently produce text we interpret as meaningful because it was, and is, trained on meaningful and CURRENT data. If we downloaded all the current LLMs right now, locked them away for a thousand-plus years, and then pulled them out and tried to ask them a question, our use of language would have changed enough to render our question invalid to them, their response nonsensical to us, or likely both. The idea that language is literally self-propagating outside of a human context is baffling to me, but some people take it seriously. I’ll stop ranting. Great piece!
(PS - wouldn’t calling it “semantic cessation” make more sense than “satiation”? I said it incorrectly like that for the longest time, and when I learned the actual phrase I was only more confused.)