
Algospeak: The New Language of the Internet

Jul 12, 2024


As social media platforms continually refine their content moderation systems, a new linguistic phenomenon has emerged. Known as "Algospeak," this collection of code words and euphemisms is used by online creators, as well as by drug dealers, child predators, and other criminal actors, to circumvent the algorithms that determine which content is visible or flagged. These coded terms can appear in posts, videos, comments, and group names, allowing users to evade detection.

 

The Evolution of Algospeak

Algospeak has developed in response to the increasingly sophisticated content moderation algorithms deployed by major social media platforms, online marketplaces, and other services. These algorithms are designed to filter out inappropriate content, enhance user safety, and maintain a brand-friendly environment. While they have successfully identified and removed vast amounts of restricted or harmful content, they have also inadvertently encouraged users to find creative ways to discuss sensitive topics without triggering automated moderation. To avoid demonetization, shadow banning, detection of criminal activity, or outright content removal, users have turned to Algospeak.

 

Implications of Algospeak

For content creators, Algospeak presents a complex challenge. While it enables the discussion of critical topics that might otherwise be suppressed, it also risks creating misunderstandings and diluting the impact of essential information. Euphemisms and coded language can obscure the gravity of specific issues, potentially hindering public awareness and meaningful discourse.

 

For social media platforms, Algospeak complicates the balance between safeguarding users and upholding free expression. As algorithms adapt to recognize new forms of Algospeak, creators will continue to innovate, perpetuating a cycle of evasion and detection.

 

Algospeak and Criminal Intent

While Algospeak can be used by influencers to avoid demonetization or to redirect followers to other platforms, it also has much darker implications. Child predators employ coded language to discuss and share illicit material without detection, while drug dealers and arms traffickers use similar tactics to discreetly sell narcotics and weapons. Other criminal actors exploit Algospeak to coordinate illegal activities, making it challenging for platforms and law enforcement to intervene and protect vulnerable users. This pervasive misuse underscores the need for robust detection and moderation strategies.

 

Users rely on typos, emojis, and code words to slip Algospeak past content moderation controls and law enforcement; here are some examples.

Codewords

  • Weapons: hole punch, gat, piece (firearms), clips (ammunition), shiv (knife)

  • Narcotics: Black Paint (heroin), Baby Powder (cocaine), Donkey (ketamine)

  • Child Exploitation: CP (child porn), chicken soup ("caldo de pollo", Spanish for child porn), chickenhawks (pedophiles)

Emojis

  • Weapons: 🔫🗡💣

  • Narcotics: ❄️🐉💎

  • Child Exploitation: 🧀🍕

Typos

  • Weapons: G*ns (guns), Bl@de (blade), Am0 (ammo)

  • Narcotics: C0ca1ne (cocaine), M3th (meth), P!lls (pills)

  • Child Exploitation: Ch!ld pr0n (child porn), Cheese Pizza (initials for child porn), K1dd!e (kiddie)
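Because many of these variants are simple character substitutions, even a lightweight normalization pass can recover the underlying terms before keyword matching. The sketch below is a minimal illustration, not a production filter: the leetspeak map, emoji hints, and watchlist are invented placeholders rather than any platform's actual rules.

```python
import re
import unicodedata

# Hypothetical, illustrative lookup tables; a real deployment would maintain
# much larger, continuously updated lists.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "@": "a", "$": "s", "!": "i", "*": ""})
EMOJI_HINTS = {"🔫": "weapons", "🗡": "weapons", "💣": "weapons",
               "❄️": "narcotics", "💎": "narcotics",
               "🍕": "exploitation", "🧀": "exploitation"}
WATCHLIST = ("gun", "ammo", "blade", "cocaine", "meth", "pills",
             "child porn", "kiddie")

def normalize(text: str) -> str:
    """Fold Unicode, undo leetspeak substitutions, and collapse whitespace."""
    folded = unicodedata.normalize("NFKC", text).lower()
    deleeted = folded.translate(LEET_MAP)
    return re.sub(r"\s+", " ", deleeted).strip()

def flag_post(text: str) -> list[str]:
    """Return the reasons (if any) a post should be routed for review."""
    reasons = []
    normalized = normalize(text)
    for term in WATCHLIST:
        if term in normalized:
            reasons.append(f"keyword:{term}")
    for emoji, category in EMOJI_HINTS.items():
        if emoji in text:
            reasons.append(f"emoji:{category}")
    return reasons

print(flag_post("Selling C0ca1ne and M3th, DM me ❄️"))
# ['keyword:cocaine', 'keyword:meth', 'emoji:narcotics']
```

A filter like this only catches the crudest obfuscations; coded phrases such as "cheese pizza" are innocuous on their own, which is why the contextual, LLM-based, and human-review measures discussed below are needed as well.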
Measures to Detect and De-platform Users Misusing Algospeak

As Algospeak continues to evolve, platforms face an ongoing challenge in identifying and mitigating its misuse. Here are three measures that could help platforms more effectively detect and de-platform users who exploit Algospeak for harmful purposes; a brief illustrative sketch of each approach follows the list:

  • Enhanced Contextual Analysis: Platforms can invest in advanced natural language processing (NLP) technologies that go beyond keyword detection to understand the context in which words and phrases are used. By analyzing the surrounding text, user behavior, and engagement patterns, these systems can better discern when Algospeak is being used to bypass moderation. For instance, algorithms could be trained to recognize when benign terms are used in suspicious contexts or when communication patterns align with known evasive tactics.

  • Utilizing Large Language Models (LLMs): LLMs can be employed to extract keywords, understand context, and recognize patterns in large datasets. This enables platforms to efficiently detect and adapt to evolving Algospeak tactics, thereby enhancing the effectiveness of content moderation systems.

  • Human-AI Hybrid Moderation Teams: Combining automated systems with human moderators can enhance the detection of Algospeak. While algorithms can quickly scan vast amounts of data, human moderators bring context-sensitive judgment and cultural understanding to the table. Training moderators to recognize the nuances of Algospeak and providing them with the tools to flag suspicious content can bridge the gap between automated detection and nuanced interpretation. This hybrid approach ensures that moderation efforts are both scalable and sophisticated.
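To illustrate the first measure, here is a minimal sketch of contextual scoring that weighs a coded-term hit against surrounding text and simple behavioral signals. Every term list, signal, weight, and threshold here is an invented placeholder for illustration, not a description of any platform's actual model.

```python
from dataclasses import dataclass

# Illustrative placeholder lists; a real system would learn these signals
# from labeled data rather than hard-coding them.
CODED_TERMS = {"cheese pizza", "chicken soup", "baby powder", "black paint"}
GROOMING_CONTEXT = {"dm me", "trade", "private", "how old", "don't tell"}
COMMERCE_CONTEXT = {"menu", "price list", "cashapp", "shipping", "telegram"}

@dataclass
class PostContext:
    text: str
    account_age_days: int
    prior_flags: int
    posts_last_hour: int

def risk_score(post: PostContext) -> float:
    """Combine term hits with contextual and behavioral signals (illustrative weights)."""
    text = post.text.lower()
    score = 0.0
    if any(term in text for term in CODED_TERMS):
        score += 0.4                  # a coded term alone is only weak evidence
        if any(cue in text for cue in GROOMING_CONTEXT | COMMERCE_CONTEXT):
            score += 0.3              # coded term plus suspicious context is much stronger
    if post.account_age_days < 30:
        score += 0.1
    if post.prior_flags > 0:
        score += 0.1
    if post.posts_last_hour > 20:
        score += 0.1                  # spam-like posting cadence
    return min(score, 1.0)

post = PostContext("fresh baby powder, full menu on telegram, dm me",
                   account_age_days=5, prior_flags=2, posts_last_hour=30)
print(round(risk_score(post), 2))  # 1.0 for this contrived example
```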
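For the second measure, the sketch below shows one way an LLM could be asked to decode suspected Algospeak and return a structured verdict. The call_llm function is a stand-in for whichever model API a platform uses (it is not a real client), and the prompt wording and JSON schema are assumptions for illustration.

```python
import json

PROMPT_TEMPLATE = """You are a content-moderation assistant.
Analyze the post below for Algospeak (coded words, emojis, or deliberate
misspellings used to evade moderation). Respond with JSON only, using the keys:
"is_evasive" (true/false), "decoded_terms" (list of strings),
"category" (one of "weapons", "narcotics", "child_exploitation", "none").

Post: {post}
"""

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM client call; replace with your provider's SDK."""
    raise NotImplementedError("wire this to an actual model endpoint")

def analyze_post(post: str) -> dict:
    """Ask the model for a structured verdict and fail safe if the output is malformed."""
    raw = call_llm(PROMPT_TEMPLATE.format(post=post))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Unparseable output: escalate to human review rather than auto-acting.
        return {"is_evasive": None, "decoded_terms": [], "category": "review"}

# Example (once call_llm is wired up), the model might return something like:
# analyze_post("Selling black paint 🐉, menu on telegram")
# -> {"is_evasive": true, "decoded_terms": ["heroin"], "category": "narcotics"}
```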
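For the third measure, this sketch routes posts by model confidence: clear-cut cases are handled automatically, while the ambiguous middle band, where Algospeak usually lives, goes to a human review queue. The thresholds and queue are illustrative placeholders.

```python
from collections import deque

# Illustrative thresholds; in practice these would be tuned against
# measured false-positive and false-negative rates.
AUTO_REMOVE_THRESHOLD = 0.9
HUMAN_REVIEW_THRESHOLD = 0.5

human_review_queue: deque[tuple[str, float]] = deque()

def route(post_id: str, score: float) -> str:
    """Decide what happens to a post given an automated risk score in [0, 1]."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"          # high-confidence violation: act immediately
    if score >= HUMAN_REVIEW_THRESHOLD:
        human_review_queue.append((post_id, score))
        return "human_review"         # ambiguous: needs cultural and contextual judgment
    return "allow"                    # low risk: leave the post up

print(route("post-123", 0.95))  # auto_remove
print(route("post-456", 0.62))  # human_review
print(route("post-789", 0.10))  # allow
```

Decisions made by reviewers on the queued posts can then be fed back as labeled data, which is how a hybrid loop like this could keep pace as new Algospeak terms emerge.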

 

The Future of Algospeak

The interplay between Algospeak and content moderation is expected to become increasingly intricate. Fostering greater transparency and dialogue among government agencies, platform operators, and content creators could help strike a balance that supports both safety and freedom of expression.

