Young people have always felt misunderstood by the adults around them. That’s nothing new. What’s changing is the size of the gap: now even artificial intelligence can’t keep up with Gen Alpha.
At a recent tech conference in Athens focused on fairness and accountability, a student named Manisha Mehta presented research that points to a surprising issue. Kids’ fast-changing slang is often completely missed by the AI systems meant to keep them safe online.
Mehta’s study looked at how well kids, their parents, and professional moderators could handle modern slang, comparing them against four well-known AI language tools developed by OpenAI, Google, Anthropic, and Meta. The goal was simple: to see whether people and machines could figure out what the slang actually meant, recognize when the tone shifted, and catch possible hidden risks.
To put the research together, Mehta worked with 24 classmates to build a list of 100 Gen Alpha phrases. Some phrases could either support or tease, depending on how and when they were used. Others came straight from gaming and social media circles. Expressions like “let him cook” or “ate that up” could either cheer someone on or poke fun at them. Words like “got ratioed” or “secure the bag” were pulled from the fast-moving world of online chats and games.
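To make the comparison concrete, here is a minimal sketch of how one might score different annotator groups on the three tasks the study describes: basic meaning, tone (supportive vs. mocking), and hidden harm. This is illustrative only, not the study’s actual protocol; the phrases, answer key, and sample responses below are invented stand-ins.

```python
# Hypothetical answer key for two of the phrases mentioned in the article.
# In the real study, ground truth came from the teen annotators themselves.
ANSWER_KEY = {
    "let him cook": {"meaning": "encourage", "tone": "supportive", "harm": False},
    "got ratioed": {"meaning": "publicly outvoted", "tone": "mocking", "harm": True},
}

def score_annotator(responses, key=ANSWER_KEY):
    """Return the fraction of (phrase, task) judgments that match the key."""
    correct = total = 0
    for phrase, judgments in responses.items():
        for task, answer in judgments.items():
            total += 1
            if key.get(phrase, {}).get(task) == answer:
                correct += 1
    return correct / total if total else 0.0

# A teen annotator who reads both phrases correctly...
teen = {
    "let him cook": {"meaning": "encourage", "tone": "supportive", "harm": False},
    "got ratioed": {"meaning": "publicly outvoted", "tone": "mocking", "harm": True},
}
# ...versus a parent who knows one phrase but misses the other entirely,
# including its mocking tone and hidden harm.
parent = {
    "let him cook": {"meaning": "encourage", "tone": "supportive", "harm": False},
    "got ratioed": {"meaning": "unknown", "tone": "supportive", "harm": False},
}

print(score_annotator(teen))    # 1.0
print(score_annotator(parent))  # 0.5
```

The same scoring loop could be run over AI model outputs, which is essentially what the study did when it placed the four language tools on the same scale as the human groups.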
One of the key things that stood out was how often adults completely missed what these phrases meant. Parents and moderators were often left guessing, while the AI tools weren’t much better. The study makes it clear: many of the systems meant to keep kids safe simply don’t understand the language they’re using.
When the kids were tested on meanings, shifting tones, and spotting hidden harm, they almost always got it right. Their scores stayed high across the board. Parents, though, struggled badly. They often missed key meanings and failed to notice when a friendly phrase turned hurtful. Professional moderators didn’t do much better.
What this really shows is that adults, whether they’re at home or working to keep social platforms safe, can’t fully protect kids if they don’t understand the language those kids are using. A parent might only catch one out of every three moments when their child is quietly mocked or bullied in Instagram comments.
When tested on the same slang, the four AI tools landed roughly where the parents did. This suggests the data used to train these systems probably comes from more adult-focused language. Since most of what’s written in books or online comes from older people, it makes sense that these AI tools haven’t fully absorbed the latest slang from teenagers.
There’s more at stake here than just missed meanings. Gen Alpha, born in the years after smartphones became part of everyday life, has grown up fully connected to the internet. Many of their earliest social experiences have happened online, far from the view of parents and teachers. The systems built to watch over them can’t easily keep up, especially since much of the moderation now depends on automated tools. Parents can’t watch every post or chat, and even professional moderators miss things hidden in what seems like harmless talk. Meanwhile, kids’ slang keeps moving so quickly that what’s popular today could easily sound old in just a few months.
The study points to a subtle but growing gap. It’s not just a difference in age. It’s a difference in language. And when children and the systems meant to protect them don’t speak the same language, danger can easily slip through unseen.
Image: DIW-Aigen