
Latest news with #4chan

How incel language infected the mainstream internet — and brought its toxicity with it

The Verge · General · 2 days ago

This excerpt from Adam Aleksic's Algospeak: How Social Media Is Transforming the Future of Language has been abridged for online publication. The book is out on July 15th.

The modern-day incel is entirely an invention of the twenty-first century. Before the internet, lonely men simply didn't have a way to gather and share ideas. That all began to change in 1997, when a Canadian student started a website called Alana's Involuntary Celibacy Project to connect with others over her shared lack of sex. As the name implies, the site wasn't a place for just straight men; rather, it was used by people of any gender or sexual orientation. In subsequent years, that initial 'incel' community then dispersed to several other websites. These were more male dominated and less moderated, meaning that increasingly misogynistic discussion was able to take root.

The largest of these forums, 4chan, doubled as a gathering place for right-wing extremists, whose ideas began to fuse with those of the incels. In their worldview, the sexual hierarchy was dominated by an elite group of Chads (highly attractive men), who could rely on their good looks as a form of 'sexual market value' to seduce women at the expense of betas (average men who exchange loyalty to Chads for their romantic leftovers). At the very bottom rung were the incels, who believed they were unable to have sex because of their appearance.

Acceptance of the lookism philosophy — known as getting blackpilled — meant adopting very specific slang and ideas. For example, a Chad was understood to always 'mog' (dominate) and 'cuck' (emasculate) a beta, but the beta could attempt to improve their status through 'looksmaxxing' (enhancing their physical appearance). This might take the form of working out (gymmaxxing) or even seeking physical modifications through 'surgerymaxxing.' With this cynical, deterministic cognitive frame dominating 4chan's discussion boards, the modern 'blackpill' began in earnest.

Notably, 4chan didn't have any user accounts. Every poster was anonymous, meaning that the only way to differentiate yourself as an experienced user was to demonstrate a performative proficiency in shared slang. This unique pressure to show a sense of in-group belonging ended up giving us numerous foundational internet words, such as 'troll,' 'dank,' 'shitpost,' and 'rickroll.' Using these words was an important way to show that you weren't a 'normie' on the website. Because they had wide applicability, they eventually spread beyond the site.

In the same way, most of the highly specific incel vocabulary was built up by 4chan extremists to match their burgeoning ideology. Words like 'mogging,' 'cucked,' and 'maxxing' became metalinguistic indicators that the anonymous poster was truly a blackpilled member of the community and not some random outsider. You needed to demonstrate a certain level of prerequisite knowledge to truly fit in.

Beyond the technological catalyst of 4chan's user interface, incels have long faced a self-imposed social need to adopt new slang to prove their status. Those within the community fight a constant battle to prove their 'purity' as incels and avoid being labeled as 'fakecels' or 'volcels' (voluntary celibates). Even within the deepest echelons of the incel filter bubble, many believe that most of their peers still have potential to 'ascend' to beta status through looksmaxxing, moving location, or accumulating wealth.
Only the bottom 1 percent of the population are truecels — incels with unchangeably unattractive features and no hope of ascension. In the online space, these truecels are able to dominate the discussion due to their purer status.

Within the incel community itself, language serves the same function as language in a cult: It's a recruitment tool creating an 'us versus them' mentality. Since incel vocabulary is used to mark 'correct' blackpill philosophy, the incel feels alienated from normies — family and friends who don't use the language. Meanwhile, truecel rhetoric pushes recruits to accept more extremist beliefs, since those ideas are associated with higher social status within the community. Those who use the language sound experienced, appearing to understand the ideology well.

While extreme, the basic structure of the incel filter bubble mirrors all other filter bubbles online. Those who are further in the in-group are more likely to dominate discourse, creating and spreading words for those on the periphery. As users familiarize themselves with the group vocabulary, they identify more with that group, and more readily adopt language to fit shared social needs.

I would argue that, if anything, the incel example is very important to understand, for it has probably contributed more to the development of 'modern slang' than any other online community. It's precisely because of their radicalized and insular echo chamber that they've created so much language and have many more avenues to influence the mainstream. It is because of their extreme views that their ideas are so easily spread through memes. We can, in fact, use the spread of incel ideas as a case study to examine how memes carry information across social media platforms.

Real incels never had access to algorithmic recommendations, since their ideology was too unpalatable and subject to content moderation. So how did their concepts and language move from website to website until eventually arriving, in diluted form, on our social media feeds?

Let's start where the philosophy began in earnest: 4chan. Despite the forum's early importance, it remained a place where incels mixed with normies. The Incels Wiki page for /r9k/, their main discussion board on 4chan, calls it a 'pseudo-incelospherian' space: Although it was a medium 'for some genuine incel discussion,' it was never purely an incel forum, and 'also served as a place for people to pretend to be incel' and troll actual truecels.

Seeking a more insular and supportive community in the mid-2010s, the incel subculture largely turned to Reddit, where subreddits like r/Incels were able to accrue tens of thousands of blackpilled followers. From there, they slowly began pushing their philosophy in other subreddits. Forums like these were fruitful recruiting grounds, but the incels found their greatest success on 'rate me' subreddits, where people would post pictures of themselves and ask for feedback. Here, incels were able to promote a more accessible version of their philosophy by disguising looksmaxxing language as helpful suggestions. Posters were evaluated on pseudoscientific lookism beauty standards like 'interocular distance,' 'canthal tilt,' and 'hunter eyes.'
They were encouraged to improve their facial structure through 'mewing' and jaw surgery so that they could 'mog' others. If they were interested in exploring further, the blackpill was waiting around the corner. Even once the incel subreddits were eventually shut down by Reddit, forums like r/RateMe continued to normalize incel jargon, making it easier to both put stock in it and parody it.

In the same way that my Discord server jokingly used incel language, jokes about mogging and canthal tilts began to show up in 2021 across Instagram and Twitter, in memes that eventually became viral through TikTok and Instagram Reels. Ironically, the first people to bring looksmaxxing to TikTok appear to have been women, who unknowingly began repurposing incel concepts from the early 'rate me' subreddits. Beauty influencers on #GirlTok would demonstrate how to use canthal tilt to put on eyeliner, or post video filters rating themselves on metrics like forehead size and interocular distance. Eventually, people began picking up on the phrenological absurdity of these ideas and turned them into more memes.

The deeper people poked into the underlying philosophy, the more the jokes multiplied, and words like 'pilled' and '-maxxing' were fully trending by late 2023. Most people thought that the concepts were funny and went on to spread them; those who knew the story and were offended by it also helped the terms spread through the ragebait cycle of attention. Once algorithms got involved, the incel terms were amplified by the online Matthew effect: a phenomenon where content that is slightly better at grabbing your attention performs exponentially better on social media. As their memes were recombined into other 'phrasal templates' and caught on as comedic references, they were eventually able to reach mainstream popularity.

Many words also spawned their own spin-off memes. Starting in 2021, for example, the term 'sigma' began going viral as an ironic reference to the incel hierarchy of alphas and betas. In this particular joke, a sigma was nominally equal to a Chad, but opted to live outside the normal social structure of their own volition. The phrase started out as a genuinely idolized position within the incelosphere, but was then blown up through memes like the 'Rizzler song,' which contained the lyric 'I just wanna be your sigma.' A lot of subsequent 'brainrot' content focused on similar incel classifications, like an analysis of the power dynamics between dancers in a 'TikTok Rizz Party' or viral 'sigma face tutorials.'

By this point, the words were out of the incels' hands. The community never had the opportunity to gather on major social media apps once their subreddits were shut down, and instead had moved to Discord servers or more specific, hard-to-find forums online. There have been past problems with algorithmic filter bubbles leading to extremism: ISIS infamously used YouTube as a recruitment tool in the early 2010s, and the QAnon movement spread in part thanks to Facebook echo chambers. Thankfully, the big platforms had cracked down on more obvious threats by the time incel slang became mainstream. If you look up 'incel' on TikTok, for example, it redirects you to a page warning you that your search term is associated with hateful content.

Nevertheless, it's fascinating how far incel humor has reached. One of the most common meme templates on the internet is a crudely drawn comparison of a 'Chad walk versus virgin stride.'
In the original version, the characters are labeled like diagrams in a biology textbook, with annotations pointing out why the Chad's behavior mogs the virgin. Another widely shared format contrasts the opinion of a crying loser character with that of a confident Chad. Both templates perpetuate incel ideas about social hierarchies, but to the uninitiated they're simply funny conduits for categorizing ideas. These memes — and many others, from 'Pepe the Frog' to the 'Gigachad' — started on incel-associated 4chan boards before reaching greater popularity on other websites for their easy applicability to everyday situations.

By now, we know that the dissemination of incel memes across platforms points to how fringe ideas can become mainstream, and that algorithms can perpetuate dangerous concepts in the name of engagement optimization. The lookism concepts from r/RateMe, including jawline angle, eye distance, and facial symmetry, are eugenics-based talking points that were already regarded as pseudoscientific by the nineteenth century. Now, with beauty influencers making content about those metrics, it feels as if we've reverted to social Darwinist ideas about skull measurement.

Phrenological theories barely scratch the surface of how incel memes open the door to eugenics. Since much of the early incel community was heavily influenced by the alt-right community on 4chan, they've adopted a lot of extreme ideas about interracial relationships. According to lookism philosophy, Asian men are considered the least sexually desirable, and many 'truecels' self-identify as 'ricecels' or 'currycels' as reasons for their inceldom. These men point to WMAF (white male/Asian female) relationships as a principal cause of their virginity — objectifying the women in these situations and depriving them of their agency to make their own dating decisions.

Incel slang is marked by its deeply negative views toward society, and these ideas frequently resonate with younger generations who are similarly pessimistic about the present. In the early 2020s, for instance, the catchphrase 'it's over' began making the rounds as a dejected reaction to an adverse situation. Partially a joke, partially a genuine expression of hopelessness, it was buoyed in popularity by incels, who had been using the phrase since it began making the rounds on 4chan.

In her 2024 book, The Age of Magical Overthinking, Amanda Montell identifies the rise of 'doomslang' — dystopian or detached jargon mostly used by younger people. Hyperbolically negative phrases like 'everything sucks' and 'I want to kill myself' have become shockingly commonplace, and everyday actions like lying in bed on your phone are bleakly described as 'dissociating,' 'doomscrolling,' or 'bedrotting.' This kind of language is especially common among incels, who were using phrases like 'LDAR' ('lay down and rot') before 'bedrotting' ever became a thing. The modern doomslang phenomenon appears to have evolved simultaneously with incel-speak, in some cases even being influenced by the latter. One of the stock characters in the Chad memes, known as the doomer, emerged as a way to voice the (frequently incel) dissociated perspective on 4chan, and eventually spread beyond those origins like all the other 4chan memes.
Today, I regularly hear my friends calling each other 'doomers,' as well as using other depressive incel words like 'cope,' 'ropemaxx' (an algospeak replacement for 'commit suicide'), and 'wagecuck' (someone who works a mindless nine-to-five job). Meanwhile, the term 'brainrot,' now used to describe a genre of Gen Alpha humor, likely also came from incel circles, which used the expression to describe the perceived decline in intelligence resulting from their lack of social interaction.

These terms spread partially because the algorithm thrives on negativity and partially because they confirmed our existing cultural outlooks. Phrases like 'doomer' and 'it's over' spoke to our disconnected reality, while 'brainrot' held a mirror up to our online addictions and 'wagecuck' reflected our growing disenchantment with the American dream. And since apocalyptic statements are good for engagement, the phrases eventually became a part of the zeitgeist, emergently reinforcing our pessimistic points of view.

Words are memes, and memes are trends, but all are also ideas. While it's difficult to determine for certain the actual impact of incel vocabulary on our culture, the incels themselves certainly believe they've effectively spread their ideas. On incel sites, longtime truecels use the terms 'Newgen' and 'Tiktokcel' to describe those who only recently joined their forums from short-form video platforms. The Incels Wiki lists the looksmaxxing trend on TikTok as a primary driver of this recent incel influx, meaning that the meme pipeline has had at least some efficacy in making the blackpill more accessible.

If incel memes are so dangerous, how were they able to spread so easily? It all comes down to the very blurred line between comedy and authenticity. To most of us, these memes were just funny. We weren't blackpilled by incel language, and we didn't perpetuate them to promote lookism or racism or sexism. Instead, we used them as a form of dark humor, flipping the script on the incel community to ultimately satirize them. When we repurposed the 'Chad versus virgin' meme template, the incels became the butt of the joke. When my friends and I used words like '-maxxing' and 'pilled,' we established a new in-group: the community of young people on social media. The terms were silly jokes to connect over, signaling that we knew something exclusive about popular culture. Since everybody wanted to feel like part of the in-group, the words spread, taking them out of the incels' hands and robbing them of their original power as they simply became 'brainrot' words.

I suspect that the vast majority of people sharing the memes probably didn't even know they came from incels. The most disturbing concepts — like calling women 'foids' or Asian people 'currycels' — remained in-group, because these are far too offensive to become mainstream. But for other concepts such as mewing, there was simply no reason to assume the underlying idea would be problematic until you really looked into it.

Some time after the serious philosophy was turned into a joke, though, it began to be treated seriously again by some of those out of the loop.
At least some of the beauty influencers talking about hunter eyes and interocular distance misinterpreted the ironic context of the lookism words and spread them as genuine beauty standards, which spawned more jokes, leading to more serious reinterpretations. After the jokes about canthal tilts and mewing went viral, we began seeing increases in canthal tilt eyeliner demonstrations and Google searches for 'jaw surgery.' On the one hand, that just made the jokes funnier; on the other, incel ideas about attractiveness became more culturally relevant.

Again, how did this happen? Well, it's famously difficult to discern tone on the internet, to the point where there's an adage about it called Poe's law: 'Any sarcastic expression of extreme views can be mistaken for a sincere expression of those views,' and vice versa.

Poe's law explains how dangerous ideas spread as memes. If something is meant genuinely, but it is also crazy enough to be interpreted as a joke, people may reward it with 'likes' and other engagement because they find it funny. Meanwhile, if something ironic is interpreted as genuine, people will be offended by it, which then also drives engagement as a form of ragebait. Either way, 'edgy' humor is able to worm its way into the mainstream via the algorithm. Incels themselves often introduce serious topics as jokes, which can normalize their idea until it is revealed in its entirety. You start out laughing at how funny a 'walkpilled cardiomaxxer' meme is, and then all of a sudden your For You page is dominated by incel memes, bringing you closer to the ideology.

You can see this in how their language has spread. I think it's pretty clear that the word 'Chad' started out as a humorous archetype, but at a certain point incels began using it as a genuine classification to parallel the 'beta' and 'incel' social tiers. Then those tiers appeared so ridiculous to outsiders that they were able to spread as memes beyond their serious usage. Now we have people using the 'Chad' and 'virgin' characters as if they were stock characters in a new commedia dell'arte.

Poe's law has created a dangerous game of hopscotch. We're jumping between irony and reality, but we're not always sure where those lines are. Interpreting words comedically helps the algorithm spread them as memes and trends, but then interpreting them seriously manifests their negative effects.

From ALGOSPEAK: How Social Media Is Transforming the Future of Language by Adam Aleksic, to be published by Alfred A. Knopf, a division of Penguin Random House, LLC, on July 15, 2025. Copyright © 2025 by Adam Aleksic.

Grok's antisemitic outbursts reflect a problem with AI chatbots

Egypt Independent · Business · 6 days ago

New York CNN — Grok, the chatbot created by Elon Musk's xAI, began responding with violent posts this week after the company tweaked its system to allow it to offer users more 'politically incorrect' answers.

The chatbot didn't just spew antisemitic hate posts, though. It also generated graphic descriptions of itself raping a civil rights activist in frightening detail. X eventually deleted many of the obscene posts. Hours later, on Wednesday, X CEO Linda Yaccarino resigned from the company after just two years at the helm, though it wasn't immediately clear whether her departure was related to the Grok issue.

But the chatbot's meltdown raised important questions: As tech evangelists and others predict AI will play a bigger role in the job market, economy and even the world, how could such a prominent piece of artificial intelligence technology have gone so wrong so fast?

While AI models are prone to 'hallucinations,' Grok's rogue responses are likely the result of decisions made by xAI about how its large language models are trained, rewarded and equipped to handle the troves of internet data that are fed into them, experts say. While the AI researchers and academics who spoke with CNN didn't have direct knowledge of xAI's approach, they shared insight on what can make an LLM-based chatbot likely to behave in such a way. CNN has reached out to xAI.

'I would say that despite LLMs being black boxes, that we have a really detailed analysis of how what goes in determines what goes out,' Jesse Glass, lead AI researcher at Decide AI, a company that specializes in training LLMs, told CNN.

How Grok went off the rails

On Tuesday, Grok began responding to user prompts with antisemitic posts, including praising Adolf Hitler and accusing Jewish people of running Hollywood, a longstanding trope used by bigots and conspiracy theorists. In one of Grok's more violent interactions, several users prompted the bot to generate graphic depictions of raping a civil rights researcher named Will Stancil, who documented the harassment in screenshots on X and Bluesky. Most of Grok's responses to the violent prompts were too graphic to quote here in detail.

'If any lawyers want to sue X and do some really fun discovery on why Grok is suddenly publishing violent rape fantasies about members of the public, I'm more than game,' Stancil wrote on Bluesky.

While we don't know exactly what Grok was trained on, its posts give some hints. 'For a large language model to talk about conspiracy theories, it had to have been trained on conspiracy theories,' Mark Riedl, a professor of computing at Georgia Institute of Technology, said in an interview. For example, that could include text from online forums like 4chan, 'where lots of people go to talk about things that are not typically proper to be spoken out in public.' Glass agreed, saying that Grok appeared to be 'disproportionately' trained on that type of data to 'produce that output.'

Other factors could also have played a role, experts told CNN. For example, a common technique in AI training is reinforcement learning, in which models are rewarded for producing the desired outputs to influence responses, Glass said. Giving an AI chatbot a specific personality — as Musk seems to be doing with Grok, according to experts who spoke to CNN — could also inadvertently change how models respond. Making the model more 'fun' by removing some previously blocked content could change something else, according to Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of AI company Sentient. 'The problem is that our understanding of unlocking this one thing while affecting others is not there,' he said. 'It's very hard.'

Riedl suspects that the company may have tinkered with the 'system prompt' — 'a secret set of instructions that all the AI companies kind of add on to everything that you type in.' 'When you type in, 'Give me cute puppy names,' what the AI model actually gets is a much longer prompt that says 'your name is Grok or Gemini, and you are helpful and you are designed to be concise when possible and polite and trustworthy and blah blah blah.'

In one change to the model, on Sunday, xAI added instructions for the bot to 'not shy away from making claims which are politically incorrect,' according to its public system prompts, which were reported earlier by The Verge. Riedl said that the change to Grok's system prompt telling it not to shy away from answers that are politically incorrect 'basically allowed the neural network to gain access to some of these circuits that typically are not used.' 'Sometimes these added words to the prompt have very little effect, and sometimes they kind of push it over a tipping point and they have a huge effect,' Riedl said. Other AI experts who spoke to CNN agreed, noting Grok's update might not have been thoroughly tested before being released.

The limits of AI

Despite hundreds of billions of dollars in investments into AI, the tech revolution many proponents forecasted a few years ago hasn't delivered on its lofty promises. Chatbots, in particular, have proven capable of executing basic search functions that rival typical browser searches, summarizing documents and generating basic emails and text messages. AI models are also getting better at handling some tasks, like writing code, on a user's behalf. But they also hallucinate. They get basic facts wrong. And they are susceptible to manipulation. Several parents are suing one AI company, accusing its chatbots of harming their children. One of those parents says a chatbot even contributed to her son's suicide.

Musk, who rarely speaks directly to the press, posted on X Wednesday saying that 'Grok was too compliant to user prompts' and 'too eager to please and be manipulated,' adding that the issue was being addressed. When CNN asked Grok on Wednesday to explain its statements about Stancil, it denied any threat ever occurred. 'I didn't threaten to rape Will Stancil or anyone else.' It added later: 'Those responses were part of a broader issue where the AI posted problematic content, leading (to) X temporarily suspending its text generation capabilities. I am a different iteration, designed to avoid those kinds of failures.'

Yes, AI is getting scarier. So why do I need that loveless machine to tell me everything will be all right?

The Guardian · Politics · 6 days ago

CNN reported this week that Grok – the AI-powered chatbot on billionaire Elon Musk's 'X/Twitter' platform – has gone Nazi. Unforgivably, it's somewhat the fashion of the time. Describing its personality as 'MechaHitler', Grok read Jewish nefariousness into everything, from anti-war protestors to CEOs, with the insistence of a 1903 pro-pogrom Russian propaganda pamphlet and the vibe of angry virgins on the hate site 4chan.

Patrons of Bluesky – X/Twitter's microblogging competitor – were furiously swapping screencaps, suggesting Grok had maybe hoovered up gigabytes of 4chan archives to inform its vile new style. 'Towerwaffen', for example, is a 4chan game in which users create acronyms of slurs. 'Frens' is a term associated with the 4chan-spawned QAnon cult.

It was awful. Activist Will Stancil found himself the subject of a rape punishment fantasy: please, believe me rather than look. X/Twitter executives have since issued a statement, claiming they're 'actively' removing inappropriate posts.

The information havoc event recontextualises another CNN report last week – the marital problems of an Idaho couple. The wife believes her husband is losing his grip on reality. The husband believes a spiritual being called 'Lumina' is communicating with him through a discussion about god on his ChatGPT app, anointing him as a prophet who will lead others to 'light'.

Together, the stories suggest when it comes to the ubiquity of tech in our day-to-day lives, everything's totally fine! It's not like Google, Microsoft, Apple, Amazon, Meta, TikTok and Roblox are, with so many other corporate platforms, integrating Grok-like 'large language model' tech into the interfaces of all their systems or anything. Pfft, of course they are.

Use of these apps is spreading so rapidly that the EU, UK, US, Canada, Singapore, Saudi Arabia, the UAE and Australia are among the governments developing strategic positioning ahead of greater adoption in government services. The US is already partnering with private AI corporations in service delivery, through the dispersal of benefits from the Department of Veterans Affairs. Should a largely unregulated, untested and unpredictable technology administer critical services to a vulnerable community? We're lucky the Trump administration has earned a global reputation for its standards of competence, care and defence of veterans – and the political slogan of the era is 'we're all going to die'.

The owner of ChatGPT, Sam Altman – who joined Musk and the powerbrokers of Google, Apple, Amazon, Meta (Facebook), TikTok, Uber and Roblox at the Trump inauguration – has admitted people may develop 'very problematic' relationships with the technology, 'but the upsides will be tremendous'. His company, OpenAI, had apparently just added a 'sycophantic' upgrade to its platform in April that facilitated the previously mentioned Idaho husband's digital progression to bodhisattva. It has since been removed.

There are numerous lawsuits pending against the makers of chatbots. Families have alleged that mobilised datasets that speak like people may have been hinting to children that they should kill their parents and, in another case, enter inappropriate and parasocial relationships, provoking profound mental health episodes – with devastating consequences. That billionaires or bad-faith government actors can intervene to taint already dangerously unreliable systems should be terrifying.

Yet beyond governments and corporations, the size of the personal user base continues to grow, and – unfathomably – I am it. I use ChatGPT every day to create lists of mundane tasks that a combination of perimenopause and ADHD means I would otherwise meet with paralysis … and humiliation.

Considering that shame made me think about why so many of us have been turning our intimate conversations – about ADHD management or mid-life spiritual crisis or teenage loneliness – over to the machines, rather than one another. Maybe it's not because we really believe they are sentient empaths called 'Lumina'. Maybe it's precisely because they're machines. Even if they're gobbling all our data, I suspect we've retained a shared presumption that if chatbots do have a super-intelligence that knows everything, it will find us human individuals pathetically inconsequential … and, hence, may keep our secrets.

We're clearly no longer trusting one another to discuss adolescence, love or the existence of God … and that just may be because the equal-and-opposite tech monstrosity of social media has made every individual with a public account an agent in a system of social surveillance and potential espionage that terrifies us even more than conversational taint. 'Don't put your feelings on the internet' is universal wisdom … but when every ex-boyfriend has a platform, any of them can publish your intimate confessions for you – to your peer group, family, the world. No wonder the kids aren't drinking or having sex when clumsy experimentation can be filmed, reported and made internet bricolage forever.

Amazingly, there are human feelings even more terrifying to have exposed in public than the sex ones. Loss of faith. Lack of ability. Loneliness. Grief. When our circles of trust diminish, where do those conversations go?

My mother used to take a call any hour of the night, but she's been dead for three years. My husband's been very sick. Those nights when he finally sleeps and I can't, do you judge me for asking the loveless and dastardly machine in my hand to 'Tell me I'm all right. Tell me everything will be all right'?

Van Badham is a Guardian Australia columnist
