Researchers Examine How AI Interprets Human Personality Using Language and Psychological Models

Researchers at the University of Barcelona have been studying how artificial intelligence models can infer personality traits from the words people write. What sets their work apart is the focus not only on teaching AI to make predictions, but on why the models make the choices they do. They wanted to open up the reasoning of these systems to see whether it follows genuine psychological logic or merely latches onto superficial patterns that carry little real meaning.

The study examined two widely used language models, BERT and RoBERTa, which processed written texts to identify markers of personality. The researchers worked with two different frameworks. One followed the Big Five personality model, which measures openness, conscientiousness, extraversion, agreeableness, and emotional stability (the inverse of neuroticism). The other used the Myers-Briggs Type Indicator (MBTI), which sorts people into types based on how they perceive, decide, and interact.
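The two frameworks differ in shape: the Big Five scores separate traits, while MBTI combines four binary axes into a type code. A minimal sketch of that structure (labels only; this is not the study's actual labeling scheme):

```python
# Big Five (OCEAN): each trait is scored independently, and is
# typically binarized (high/low) for text classification.
BIG_FIVE = [
    "openness",
    "conscientiousness",
    "extraversion",
    "agreeableness",
    "emotional stability",  # the inverse of neuroticism
]

# MBTI: four binary axes combine into 16 types such as "INTJ".
MBTI_AXES = {
    "attitude":   ("I", "E"),  # introversion / extraversion
    "perceiving": ("N", "S"),  # intuition / sensing
    "judging":    ("T", "F"),  # thinking / feeling
    "lifestyle":  ("J", "P"),  # judging / perceiving
}

def mbti_type(choices):
    """Combine one letter per axis into a four-letter type code."""
    return "".join(choices[axis] for axis in MBTI_AXES)

print(mbti_type({"attitude": "I", "perceiving": "N",
                 "judging": "T", "lifestyle": "J"}))  # prints "INTJ"
```

This difference matters later: a Big Five classifier predicts five scores per text, whereas an MBTI classifier predicts one of sixteen discrete types, a label people often state about themselves verbatim.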

For their analysis, the team used two text corpora. One consisted of essays people wrote on any topic they chose. The other came from posts on an online forum where personality was a frequent topic of discussion. Both corpora had already been annotated with personality labels. But the main goal was not simply to test whether the models could guess the correct trait; it was to examine closely how the models reached their decisions.

The researchers applied an attribution method that tracks which words influence the models' predictions, letting them see how individual words pull a decision toward a particular personality trait. Some words, such as those connected to social life or emotions, played a clear and consistent role in guiding the models' judgments. But there were also tricky cases: a negative-sounding word like "hate" could, depending on how it was used in the sentence, actually be linked to kindness.
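The idea behind word-level attribution can be shown with a deliberately tiny sketch. The scorer below is a toy keyword model, not the transformer attribution technique the study used; it only illustrates the concept that a word's influence can be measured as the change in the prediction when that word is removed (occlusion):

```python
# Toy "extraversion" scorer: a weighted keyword count standing in
# for a real classifier. All words and weights here are invented
# for illustration.
WEIGHTS = {"party": 1.0, "friends": 0.8, "alone": -0.9, "quiet": -0.6}

def extraversion_score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def word_attributions(words):
    """Occlusion: a word's influence = score drop when it is removed."""
    full = extraversion_score(words)
    return {
        w: full - extraversion_score(words[:i] + words[i + 1:])
        for i, w in enumerate(words)
    }

text = "i love a quiet night alone with friends".split()
for word, influence in word_attributions(text).items():
    print(f"{word:>8}: {influence:+.2f}")
```

Real attribution methods for transformers (such as gradient-based or perturbation-based explainers) are far more sophisticated, but they answer the same question: which tokens moved the prediction, and in which direction.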

One key finding was that the Myers-Briggs dataset had a serious flaw. Many people in those online discussions already knew their own personality type and mentioned it directly. As a result, the AI often learned to spot those explicit references rather than deeper patterns in language. When the researchers removed words that pointed clearly to Myers-Briggs types, the models' performance dropped sharply, showing they had been relying on surface clues. The Big Five models, by contrast, proved more reliable: the patterns they found made more sense from a psychological point of view, even though their accuracy was not perfect.
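This kind of shortcut learning is easy to demonstrate in miniature. The sketch below uses invented forum posts and a hypothetical "classifier" that does nothing but search for an explicit type code; masking those codes (the same kind of ablation the researchers performed) collapses its accuracy:

```python
import re

# Invented forum posts in which authors name their own MBTI type,
# i.e. the label leaks into the text itself.
POSTS = [
    ("as an INTJ i plan everything in advance", "INTJ"),
    ("typical ENFP here, i get bored without novelty", "ENFP"),
    ("INTJ checking in, strategy games are my thing", "INTJ"),
]

# Matches four-letter MBTI type codes such as INTJ or ENFP.
TYPE_RE = re.compile(r"\b[IE][NS][TF][JP]\b")

def shortcut_predict(text):
    """A shortcut 'classifier': just find the type code in the text."""
    m = TYPE_RE.search(text.upper())
    return m.group(0) if m else "UNKNOWN"

def accuracy(posts, mask=False):
    correct = 0
    for text, label in posts:
        if mask:  # ablation: strip explicit type mentions
            text = TYPE_RE.sub("", text.upper())
        correct += shortcut_predict(text) == label
    return correct / len(posts)

print(accuracy(POSTS))             # 1.0 with the label leak intact
print(accuracy(POSTS, mask=True))  # 0.0 once type codes are masked
```

A model that had learned genuine linguistic signals would degrade only modestly under this ablation; a collapse like the one above is the signature of a shortcut, which is essentially what the researchers observed with the MBTI data.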

This study shows that AI can uncover subtle signs of personality in everyday language that might go unnoticed in traditional tests. These AI methods could make personality assessment feel more natural and less intrusive, especially when dealing with large groups of people. The researchers also believe these techniques could be useful in clinical psychology, education, hiring processes, and even in building virtual assistants that interact in more human-like ways.

Still, the team does not expect AI to fully replace traditional personality tests. Instead, they see these methods working together, each adding something different to the picture. By combining language analysis with other data, like digital habits or behavioral patterns, psychologists may get a more complete understanding of personality.

The researchers want to expand their work by testing texts from different languages and cultural settings to see if the patterns remain the same. They are also interested in exploring how these AI models could help track changes in emotional states or attitudes over time. They hope to combine text analysis with other signals like voice or facial expressions to build a more detailed view of how people express themselves.

Their study emphasizes the importance of making AI systems transparent. Understanding how these models reach their decisions is essential, especially if they are going to be used in real-world situations where accuracy and fairness matter. Building systems that people can trust requires not just good results but also clear explanations of how those results are achieved.


