I’m not on TikTok. I never have been, and at first I chalked this up to the fact that I crossed the threshold into “too old for new social media” sometime around the advent of Snapchat. But the more I heard about TikTok and its particular charms, the more I dug in my heels as a sort of half-assed protest maneuver.
I’ve talked a lot about ~*~the algorithm~*~ in this space. I dislike being “handled” in any capacity, but the idea that there’s a system observing me, collecting data, and then shaping what I see based on its interpretation thereof is intensely uncomfortable. And I realize that TikTok isn’t the only platform that does this, but it’s the first to really drive home the level of surgical fine-tuning that a recommendation engine can achieve to compel attention and engagement.
It reminds me too much of the ways that computerized gambling can drive addiction (cw for suicide). That Atlantic article uses some outdated language and concepts around addiction that we know now are inaccurate, but its investigation into the exploitative nature of gambling machine algorithms stuck with me. It fundamentally reframed my understanding of recommendation engines, digital advertising, data privacy… a lot of stuff. (I’ve got a LOT of feelings about sports media’s recent embrace of betting that ties in with this, but that’s not a today thing.)
TikTok has zeroed in on the science of The Algorithm in a way that a popular social media platform hadn’t done before, and other platforms like Instagram and YouTube scrambled to catch up. Perhaps one of the most visible effects of all of this is a particular self-censoring vocabulary that the New York Times called ‘algospeak’ (so, you know, take that terminology with a grain of salt).
On the assumption that TikTok’s algorithm would punish their videos for including “harmful” content, creators started using alternate spellings, character substitutions, and euphemisms to get their messages across—like “seggs” for “sex,” “le$bian” instead of “lesbian,” and perhaps the most notable example, “unalive” for “kill.” TikTok claims that this isn’t how its content moderation works, but creators insist that their viewing numbers tell a different story.
Lately, I’ve been seeing this kind of algospeak spread to the video platforms I do use. Just this morning, I spotted a YouTube comment that talked about “jérking off.” I assume this is a natural dissemination by people who primarily use TikTok and just type this way out of habit in all social media spaces, but it does make me uncomfortable to see—even the Times acknowledges that it’s pretty dystopian.
In the same way that people in every generation have gotten their underpants in a twist about language change, you see a lot of frustration directed at Gen Z and Gen Alpha for using these euphemisms—either for using them outside of TikTok where they belong, or for being so cowardly as to use them at all. Inevitably, one of the first comparisons to crop up is Newspeak, from George Orwell’s 1984. (Newspeak accusations also get leveled at the “PC police” whenever language changes around such “woke” topics as identity and human rights.)
Newspeak is the language of Oceania, the fictional totalitarian supercountry at the core of Orwell’s novel. It’s a constructed language, both in a real-world sense and within the fiction. It’s designed to be simplistic and restrictive, to limit the ability to communicate abstractions and tamp down critical thinking. The argument goes that by circumscribing communication in this way, the state might curb citizens’ ability to think in complex ways about their politics, their rights, and their beliefs. Without these complex ideas, there’s less risk of unrest and sedition.
Orwell elaborates on the theme in his essay “Politics and the English Language”:
Orthodoxy, of whatever colour, seems to demand a lifeless, imitative style. The political dialects to be found in pamphlets, leading articles, manifestos, White Papers and the speeches of Under-Secretaries do, of course, vary from party to party, but they are all alike in that one almost never finds in them a fresh, vivid, home-made turn of speech. When one watches some tired hack on the platform mechanically repeating the familiar phrases… one often has a curious feeling that one is not watching a live human being but some kind of dummy… A speaker who uses that kind of phraseology has gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself. If the speech he is making is one that he is accustomed to make over and over again, he may be almost unconscious of what he is saying, as one is when one utters the responses in church. And this reduced state of consciousness, if not indispensable, is at any rate favourable to political conformity.
Orwell argues that the overused, canned phrases he saw as dominating the writing of his time are designed to placate readers about truths of war that are otherwise indefensible. Orwell wrote this essay in 1946, just after World War II. During the war, his wife had been a member of the Censorship Department of Britain’s Ministry of Information, and Orwell himself had been an enthusiastic government propagandist with the BBC. He knew what he was talking about, is what I’m saying, and it’s clear that the experience had a profound effect on his understanding of language.
“Politics and the English Language” is an important read, but stretches of it are also… a bit silly. Orwell begins the essay by talking about the “decadence” of the English language, positioning the problems he currently sees with it as a “decline”—one more example of the age-old “kids these days” refrain. He also gets very specific about how to communicate political realities “correctly” in English, so as not to confuse the issue with overblown prose or wash meanings out through oversimplification.
Needless to say, Orwell would probably have a lot of opinions about algospeak. But for those who are tempted to compare algospeak to Newspeak, I think there’s an important distinction to draw: namely, that Newspeak is a top-down imposition by a restrictive government. It’s a normative system, telling its people precisely what they can say.
Algospeak, on the other hand, is rebellious. It’s an attempt to circumvent rules (perceived or real) about what people on TikTok can’t say; the forbidden things get said anyway, just in disguise. And in that sense, the frustration/anger/cringe/whatever that gets directed at Gens Z and Alpha is misdirected. Algospeak is uncomfortable not because it makes the kids sound dumb, but because it points directly at the dystopian restrictions imposed by social media platforms.
I also think that algospeak highlights a fundamental flaw in the way we discuss this kind of language restriction: linguistic relativity, sometimes called the Sapir-Whorf hypothesis. Linguistic relativity is the idea that the structure of a language shapes the way its speakers understand the world, and even how they think. One of the classic studies on the question looked at whether people perceived colors differently based on how their language classified colors, but you can also see the concept in action in, for example, the movie Arrival, where learning an alien language rewires Amy Adams’ brain in impossible ways.
Linguistic determinism, the “strong” form of linguistic relativity, was in vogue in Britain right around the time that Orwell was writing for the government. Some linguists at the time held that the characteristics of your native language fully determine your thought processes—so you can see where Orwell might have got it from, and why he might have been so emphatic about a “correct” use of English. Under this theory, muddying language literally muddies the very ability to think—which would be pretty devastating to a democratic political system that relies on the wisdom of its citizens.
It’s a tidy and compelling theory, which is probably why it’s stuck with English speakers long enough to get us all hot and bothered about the way kids write on TikTok. The trouble is that it’s… kind of bullshit? Linguists have been digging into this idea for a century at this point, and the evidence—as it is with so many things—is mixed. Language absolutely does influence cognition in measurable ways, but the effects are flexible and far from universal. Certainly not strong enough to, for example, stamp out the concept of political revolution, or erase the idea of queerness.
So while it’s pretty dark to see people on the internet trying to sneak mentions of jérking off past the possibly-imaginary censors, it’s unlikely that we’re all going to forget it exists anytime soon. But that’s not to say that censorship won’t have a chilling effect for Gen Alpha when they’re deciding whether to bring up difficult topics that deserve to be talked about, like sexuality, mental health, and trauma. And that restrictive environment is something that should be top of mind as our society tackles social media’s role, particularly considering that young people have fewer and fewer forums for open discussion. [No, this is not a “FrEe SpEeCh” argument, get your dirty Elon Musk fingers off my writing.]
The kids are finding ways to talk, though, even in the face of hurdles. And language is a persistent thing, like mint plants, or glitter—once it’s out, there’s no getting rid of it. So while moderation in online spaces is always going to be important, it’s probably just as important to focus on creating healthy communities and disseminating good information. You know, all the bits that are harder than a word filter.
Phew! Long one this week. If you like my ramblings, please pass them along to more people you think would like them! And if you feel so inspired, chuck me a few dollars, so that I can exchange them for the goods and services that keep my ramblings possible.
Have a great week, y’all, and I’ll see you next time.
I disagree; I don't think that algospeak is a form of rebellion, it is a form of submission. They submit to the content guidelines by "circumventing" them, except they haven't circumvented them at all. Orwell used Newspeak terms like "uncold" for "warm" and "unperson" for people who were disappeared by the government. These constructions strip away the connotations we have attached to the original words and reduce the range of thought. The term "unalive" will never have the same impact as "kill" or "murder" or "die" because "unalive" doesn't carry any inherited, codified negative connotations that we have been socialised to understand. The only thing the term "unalive" does is numb the true, socially conditioned meaning of the word that extends deeper than the dictionary definition.
This applies especially to "pew-pew" instead of "gun." The sentence "He shot him with a pew-pew" is outright silly and means almost nothing compared to "He shot him with a gun." And this isn't confined to TikTok; I am seeing more and more of this newspeak trickling into other media, and it will only increase. I listened to a podcast the other day about the Black Dahlia case, and the podcaster said "sewer-slide" instead of "suicide." Imagine your grandmother committing suicide and devastating your family, only to hear podcasters say she committed "sewer-slide." How unbelievably reductive and insensitive.