Something curious is happening on university podiums and conference stages. The words academics choose while speaking have started to shift, not in the usual, gradual way languages evolve, but suddenly, and with a very specific signature.
Researchers in Germany have found that phrases strongly tied to ChatGPT's writing style are popping up more often in academic speech. And the timing isn't subtle: the change began right after ChatGPT was released in late 2022.
A Jump, Not a Drift
Normally, speech habits shift slowly, blending across years and generations. Here, that’s not what happened.
Using transcriptions from nearly 280,000 academic videos posted online, mostly lectures, presentations, and public talks, the researchers tracked a jump in the use of certain words. Terms like “delve,” “realm,” and “meticulous” started appearing much more frequently, all within the first year and a half after ChatGPT became widely available.
These weren’t common before. And the rise didn’t match earlier years, ruling out random fluctuation or some separate cultural trend.
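The core measurement here is simple in principle: count how often the flagged words appear per unit of speech, before and after late 2022, and check whether the rate jumps. The study's actual corpus, word list, and statistics are not reproduced here, but the idea can be sketched in a few lines of Python (the transcripts and word set below are invented for illustration):

```python
# Illustrative sketch only: mimics the idea of comparing how often
# "AI-flavored" words occur in transcripts from two time periods.
# The real study used ~280,000 video transcripts and its own word list.
from collections import Counter

TARGET_WORDS = {"delve", "realm", "meticulous"}  # examples named in the article

def relative_frequency(transcripts, targets=TARGET_WORDS):
    """Occurrences of target words per 10,000 tokens across all transcripts."""
    counts = Counter()
    total = 0
    for text in transcripts:
        tokens = text.lower().split()
        total += len(tokens)
        # strip surrounding punctuation so "delve," counts as "delve"
        counts.update(t.strip('.,;:"?!') for t in tokens)
    hits = sum(counts[w] for w in targets)
    return 10_000 * hits / total if total else 0.0

# Toy pre/post samples; a real analysis would bin transcripts by upload date.
before = ["the study covers a broad area of topics", "we examine the data carefully"]
after = ["we delve into the realm of meticulous analysis", "let us delve deeper"]

print(relative_frequency(before))  # baseline rate
print(relative_frequency(after))   # higher rate, the kind of jump the study reports
```

A sudden jump in this rate, concentrated right at the ChatGPT release date and absent in earlier years, is what distinguishes the finding from ordinary gradual drift.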
The AI Vocabulary Leak
The researchers didn’t stop at noticing word frequency. They dug into what makes these words special. Turns out, they were already known to show up often in ChatGPT-edited documents. So the researchers tried to find out: could exposure to AI-edited text be spilling over into speech?
The pattern held: the more strongly a word was associated with ChatGPT, the faster it started turning up in human speech. Less distinctive words didn’t show the same trend.
Scripts Don’t Fully Explain It
One possible explanation is simple: people are reading from slides or scripted notes that were edited by ChatGPT. That might account for the formal language.
But a closer look showed that many of these word choices appeared in unscripted speech as well. A manual check of videos featuring one of the standout terms, “delve,” showed that in more than half the clips, speakers weren’t reading aloud. They were speaking off the cuff, which means the phrasing came from their own active vocabulary.
So it seems the influence of AI doesn’t end with written drafts. These phrases are starting to live in people’s natural speech.
A Feedback Loop in the Making?
What this shows is subtle, but important. AI isn't just imitating how we write. We’re starting to echo how it writes too.
It’s a loop. We feed language into AI systems, then read and use their outputs, and over time, that shapes how we talk. If this continues, future AIs will train on speech and writing that already carry AI’s earlier influence, recycling patterns back into themselves.
There are concerns here. Language isn’t just a technical tool. It’s tied to culture, identity, even creativity. If we begin sounding too much like the same machine, the range and richness of human expression might start to narrow.
A Trend with Limits, For Now
It’s worth noting that not every AI-favored word is catching on in speech. Words like “underscore” and “groundbreaking,” for example, show up plenty in edited writing, but didn’t see much of a boost in spoken use. The trend is clear, but it hasn’t swept across the board.
The study only looked at academic videos, where people are more likely to use formal language. It’s possible that in more casual or personal settings, these effects won’t be as strong. But then again, tools like ChatGPT are creeping into emails, blog posts, news writing, class discussions, and casual prep work too. The boundary isn’t rigid.
Where This Might Lead
The idea that AI could become a cultural influence, not just a tool, is no longer theoretical. It’s already happening, at least in one part of society. What’s less clear is how far this could go.
If everyone starts using the same AI-suggested words, we might lose more than just variety. We might start thinking in narrower ways, guided by how these tools frame ideas. That’s not a problem with the technology itself; it’s a question of how much space we leave for surprise, nuance, and human difference.
Whether this is a passing phase or a deeper shift depends on what we do next. But one thing is clear: we’re not just teaching machines to sound like us anymore. They’re starting to teach us how to talk.
While the study didn’t measure emotional tone or vocabulary richness directly, it opens the door to broader concerns. As AI-favored phrasing becomes more common, there’s a risk it may edge out the less polished, more expressive language that adds texture to human communication. The smooth, formal voice AI tends to produce often lacks the quirks, metaphors, or local color that give speech its cultural depth. This isn’t without precedent: past technologies like the telegraph or Twitter shaped how we speak, trading nuance for efficiency or structure. But when machines that mimic us begin influencing the very language we use, what we gain in clarity might come at the cost of spontaneity and emotional range.
This post was created/edited using GenAI tools.