Inside the fast-moving world of artificial intelligence, one of Silicon Valley’s most influential voices is pushing back against a rising trend: machines marketed as emotional companions.
Reid Hoffman, who helped launch LinkedIn and has since become a leading investor in AI startups, voiced serious concern this week over tech’s growing attempt to position chatbots as human-like friends. His warning came during a conversation on the Possible podcast, where he challenged both the concept and the consequences of calling AI systems “friends.”
While several companies have begun embedding emotionally responsive bots into popular platforms, Hoffman argued that labeling them as friends distorts what friendship actually means — and chips away at how people understand real relationships.
His remarks coincide with a broader push by Meta, led by CEO Mark Zuckerberg, to integrate conversational AI into products like Instagram, WhatsApp, Facebook, and even wearable devices like Ray-Ban smart glasses. Zuckerberg recently framed AI companions as part of a solution to rising social isolation in the U.S., pointing to data showing that many Americans report having very few close friendships.
Still, Hoffman pushed back, saying the language surrounding these tools matters more than companies are willing to admit. For him, the issue isn’t whether a chatbot can be emotionally engaging — it’s whether pretending it’s a peer misleads people about what connection truly involves.
In his view, a real friendship carries mutual responsibility and depth. A chatbot, no matter how advanced, lacks the ability to offer support in both directions. Hoffman emphasized that when one side in a relationship cannot grow, reflect, or hold the other accountable, calling it “friendship” undermines the human experience.
Rather than eliminating the use of AI in personal spaces, Hoffman favors clear communication and design boundaries. He pointed to more cautious implementations, such as the Pi assistant from Inflection AI, a startup Hoffman himself co-founded, which identifies itself as a “companion,” not a friend, and actively encourages users to build stronger ties with the people in their lives.
As AI firms race to build tools that feel emotionally intelligent, Hoffman called for industry-wide norms and increased transparency around how these technologies are described and deployed. He urged companies, markets, and governments to step in before social definitions become permanently blurred.
The stakes, in his view, go beyond marketing language. He believes that by confusing synthetic interactions with real relationships, society risks weakening the emotional fabric that supports personal growth and connection.
Hoffman is not alone in voicing that concern. Earlier this month, OpenAI CEO Sam Altman addressed a similar issue during Senate testimony. Asked whether he’d be comfortable with his own child forming a deep attachment to an AI chatbot, Altman said no — arguing that while adults may seek comfort from machines, children need clearer boundaries and stronger safeguards.
As AI tools become more lifelike and persistent, industry leaders are facing new pressure to draw ethical lines. For Hoffman, drawing that line at friendship is essential — not just for clarity, but for protecting what it means to live among and rely on real people.