AI is changing more than your writing — it may be shaping your worldview

By USC Dornsife News


Use of ChatGPT, Claude and other large language models, or LLMs — what most people call “AI” — has surged since ChatGPT debuted publicly in 2022. Hundreds of millions of people now use these tools weekly, according to recent estimates.

Users might assume these tools are just helping them organize their thoughts, but recent research suggests they may be doing something more subtle and more powerful — influencing how we all think, speak and even understand the world.

In a recent opinion piece, researchers at the USC Dornsife College of Letters, Arts and Sciences investigated how artificial intelligence systems like ChatGPT could be nudging people toward similar ways of communicating and reasoning, a process the researchers call “cultural homogenization.”

“AI isn’t just reflecting culture anymore,” said lead author Yalda Daryani, a PhD student in social psychology at USC Dornsife. “It’s actively shaping it. It’s deciding what sounds polite, what sounds clear, even what counts as a good answer.”

So the researchers set out to understand how large language models like ChatGPT, Anthropic’s Claude and Google’s Gemini might influence human culture on a global scale, and how policies could address the broader effects these LLMs might have.

A pattern emerges with AI use

The researchers — under the guidance of Morteza Dehghani, professor of psychology and computer science at USC Dornsife and head of the Morality and Language Lab — reviewed a wide range of recent studies across psychology, computer science and linguistics to understand how LLMs perform across different cultures and how people respond when using AI in real-world tasks such as writing or decision-making.

They found a consistent pattern: AI systems tend to reflect and reinforce a narrow slice of human experience.

A central finding of the research is that these systems often align with what the researchers describe as “WHELM” perspectives — Western, high-income, educated, liberal and male. In other words, they reflect the values and communication styles most common in English-language online data.

“When you ask AI for advice, you’re not getting a neutral answer,” Daryani said. “You’re getting the perspective of a very specific group of people, even if it doesn’t say that explicitly.”

This pattern appears in how AI handles moral questions. The research showed that AI systems tend to favor values such as individual freedom and fairness, while placing less emphasis on ideas like tradition, authority and community, which are more central in many non-Western cultures.

AI’s impact extends to subtle social interactions

The influence goes beyond values. It also affects how people communicate: individuals and cultures have distinctive habits of tone, word choice and phrasing, and those are precisely the differences that fade when everyone drafts through the same tools.

“When millions of people use AI to draft messages, those differences start to disappear,” Daryani said. “Over time, we may all start sounding very alike.”

Even when users ask questions in other languages, the models often return examples tied to American or European culture — such as U.S. holidays or English-language films — while offering less detailed or more stereotypical descriptions of non-Western traditions.

Dehghani says this pattern creates a kind of feedback loop. “The more we rely on these systems, the more their outputs become part of our shared knowledge, and then that same material gets used to train the next generation of AI. So the cycle reinforces itself.”

That loop, the researchers warn, could gradually narrow the range of ideas, traditions and communication styles that people are exposed to and pass on over time.
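The dynamic Dehghani describes is easy to see in miniature. The toy simulation below is not from the study; it assumes, purely for illustration, five communication “styles” in the training data and a model that over-produces whichever styles are already common. Each generation then trains on the previous generation’s output, and the diversity of styles, measured as Shannon entropy, steadily shrinks toward a single dominant voice.

```python
import math
import random

random.seed(0)

def entropy_bits(counts):
    """Shannon entropy (in bits) of a style distribution given as raw counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

# Hypothetical starting data: five communication styles, one of them
# (think: English-language, Western web text) slightly over-represented.
styles = {"a": 1.2, "b": 1.0, "c": 1.0, "d": 1.0, "e": 1.0}
BIAS = 1.3  # assumed: the model over-produces already-common styles

for gen in range(8):
    # The "model" generates a corpus, favoring common styles a little
    # more than the data it was trained on (the homogenizing pressure).
    names = list(styles)
    weights = [w ** BIAS for w in styles.values()]
    corpus = random.choices(names, weights=weights, k=50_000)
    # The next model generation is then trained on that synthetic corpus.
    styles = {s: corpus.count(s) for s in names}
    print(f"generation {gen}: diversity = {entropy_bits(styles):.3f} bits")
```

Maximum diversity for five styles is about 2.32 bits; run the loop for enough generations and the number falls toward zero, which is exactly the cycle the researchers warn about.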

Why does that matter? Because cultural diversity isn’t just about language or customs, the researchers say. It shapes how people think, solve problems and make decisions. A wide range of perspectives can lead to better solutions and more creative ideas. If that diversity shrinks, the researchers argue, society could lose important ways of understanding the world.

How to build a better AI

Importantly, the team does not suggest that AI is inherently harmful. LLMs can make writing easier, improve access to information and help people communicate more clearly. The concern, the researchers say, is what happens when a small number of systems begin to influence billions of interactions every day.

“Once the system is trained on a narrow set of data, it’s very hard to undo that,” Daryani said.

To address the issue, the team outlines a three-part approach based on their study findings, beginning with the data used to train models. Most AI systems learn from English-language content drawn heavily from Western sources. The researchers say developers should include more material from different languages, regions and cultural traditions to capture cultural knowledge that might otherwise be systematically underrepresented.
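As a rough sketch of what that first step could look like in practice (none of this is from the paper: the bucket names, sizes and the equal-share cap are all invented for illustration), a training pipeline might rebalance how much each language or region contributes rather than sampling in proportion to a skewed corpus:

```python
# Hypothetical corpus buckets; sizes invented to mimic the skew the
# researchers describe, with English-language Western sources dominating.
corpus_sizes = {
    "english_western": 80_000,
    "spanish":          9_000,
    "mandarin":         6_000,
    "hindi":            3_000,
    "swahili":          1_000,
}

def rebalanced_sample(sizes, budget):
    """Give every bucket an equal share of the training budget,
    letting small buckets contribute everything they have and
    redistributing their unused share to the larger ones."""
    remaining, plan = dict(sizes), {}
    while remaining:
        share = budget // len(remaining)
        # Buckets smaller than an equal share are taken whole.
        small = {b: n for b, n in remaining.items() if n <= share}
        if not small:
            plan.update({b: share for b in remaining})
            break
        plan.update(small)
        budget -= sum(small.values())
        remaining = {b: n for b, n in remaining.items() if n > share}
    return plan

print(rebalanced_sample(corpus_sizes, budget=25_000))
```

The point of the cap is that the dominant bucket no longer drowns out the rest: with the hypothetical numbers above, English-language Western text drops from about 80% of the raw corpus to under a third of the training sample.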

During the later training stages that refine and evaluate LLMs, the researchers suggest incorporating culturally diverse examples. They also recommend consulting experts, including psychologists, anthropologists, linguists and policymakers, who work in collaboration with diverse cultural communities to ensure responses reflect different social norms and values.

They then recommend changing how training results are judged. Tech companies do employ reviewers from a variety of countries during this step, but those reviewers are trained to apply standardized Western evaluation criteria. Instead, the researchers argue, reviewers should judge answers against multiple cultural standards rather than a single benchmark.
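A loose illustration of that last recommendation (the rubric names and scores below are hypothetical; the paper prescribes no specific scoring scheme): keeping per-community ratings separate, instead of collapsing them into one averaged judgment, shows where an answer fails a particular standard.

```python
# Hypothetical ratings (1-5) for one model answer, each from a reviewer
# applying their own community's norms instead of a single standardized
# Western rubric. Rubric names are invented for illustration.
ratings = {
    "directness (US reviewers)": 4,
    "politeness (Japanese reviewers)": 2,
    "formality (German reviewers)": 3,
}

# A single averaged score hides the disagreement...
mean_score = sum(ratings.values()) / len(ratings)

# ...while the full profile flags the standard the answer fails, so it
# can be addressed rather than averaged away.
failing = {rubric: score for rubric, score in ratings.items() if score < 3}
print(f"mean: {mean_score:.2f}")
print(f"below standard: {failing}")
```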

Taken together, these changes could help AI systems recognize that there is no one “correct” way to communicate or reason, preserving a broader range of human perspectives as the technology continues to evolve.

For Daryani, the stakes are clear: “Languages, traditions, ways of thinking — once they disappear, we can’t get them back. The question isn’t whether this is difficult to fix. It’s whether we can afford not to.”

About the study

Zhivar Sourati, a PhD student at the USC Viterbi School of Engineering, was a co-author of the report, published in Policy Insights from the Behavioral and Brain Sciences.

Originally published by USC Dornsife College of Letters, Arts and Sciences News. Republished here with permission.

Reviewed by Irfan Ahmad.
