A new series of behavioral studies sheds light on a growing question in the age of artificial intelligence: can machines truly offer the kind of emotional support people value from other humans? Across nine experiments involving more than 6,000 participants, researchers tested how people reacted to responses they believed came from either a person or an AI system, even when the words were exactly the same. The research is a preprint and has not yet undergone peer review.
The findings reveal a consistent pattern. When people thought they were hearing from another person, they felt more understood, more supported, and more emotionally connected, even though the messages had been generated by an AI model. The human label alone carried weight.
Participants who were told the response came from a person rated it higher for emotional richness, sincerity, and overall helpfulness. They also reported fewer negative feelings after reading the reply and were more inclined to continue the interaction. The gap was especially wide when the responses emphasized emotional resonance or care rather than just cognitive understanding.
This preference held steady across various setups. Whether the conversation was brief or included multiple message exchanges, whether the AI was branded or open-source, and whether participants waited seconds or minutes for a response, the results repeated themselves. People felt emotionally closer to replies labeled as human, and more distant when the same replies were labeled as machine-generated.
To dig deeper, the researchers tested which ingredients of empathy mattered most. Empathy, in their framework, included three parts: recognizing emotions, feeling alongside the other, and showing care. AI could mimic all three, but the perception of shared feeling and concern remained tightly linked to human authorship. When messages focused only on understanding someone's feelings (the cognitive side), the difference between human and AI perception narrowed. But once emotional sharing or concern entered the equation, people again leaned strongly toward the human side.
Another part of the study tested how far people would go to preserve that sense of connection. When given the option, many participants chose to wait hours, days, or even weeks to receive a message they believed was from a human rather than accept an instant AI reply. A separate group was even willing to wait just to have their story read by a human, with no promise of a reply.
Those who opted to wait said they believed a human would better understand them, care more, and help ease loneliness. Those who picked AI, on the other hand, said they valued speed and convenience, or felt unsure about opening up to a stranger. The decision to wait was not about timing but about trust, and the symbolic weight of being heard by a real person.
Even the suspicion that a human-sounding response might have been machine-assisted chipped away at its emotional impact. When people suspected a human had help from AI, they rated the message as less supportive and less genuine. In contrast, those who believed a machine-written reply had been reviewed by a human felt more warmth toward it. The perceived human touch mattered more than the actual source.
Despite these results, AI was not rejected outright. Most participants still rated AI-generated messages as quite empathic, and some preferred them under certain conditions. Fast, thoughtful replies without judgment or fatigue offered clear benefits, especially for people who wanted to avoid uncomfortable exposure or just needed acknowledgment quickly.
What the research suggests, though, is that humans place unique value on emotional labor. The effort, attention, and shared vulnerability that go into human interaction still mean something, even in digital form. When support feels effortless or synthetic, the emotional return tends to shrink. And no matter how skillful an algorithm becomes, people seem to sense whether care is real or merely simulated.
As AI becomes more embedded in healthcare, education, and emotional support services, the study raises critical questions. Machines can scale care, provide structured empathy, and deliver instant responses in ways humans cannot. But when it comes to making people feel seen, heard, and held emotionally, the balance still tips toward people.
The challenge ahead isn’t about choosing AI or humans. It’s about knowing when the moment calls for real presence, not just appropriate language. And for now, emotional presence remains a human gift that machines have yet to master.
Notes: Image by Igor Omilaev on Unsplash. This post was edited/created using GenAI tools.