Social media has long been blamed for amplifying anger and hostility, but a new line of research suggests that the real source of toxicity may not be our opponents at all. It often begins with the people we already agree with.
A recent study from the University of Haifa found that users who see rude or intolerant posts from their own political side are far more likely to mirror that behavior than they are after encountering similar hostility from the opposing side.
Toxicity That Feels Familiar
The researchers analyzed millions of posts shared on X (formerly Twitter) in Israel during 2023, a year marked by deep political division. They wanted to understand how toxic behavior spreads and whether it moves differently inside or across political groups. Their focus wasn’t only on how users express hostility, but also on the kind of toxicity they adopt. They separated two dimensions of harmful speech: impolite style, which covers rude tone or foul language, and intolerant substance, which involves messages that demean social or political groups.
Across more than seven million tweets, one clear pattern emerged. People were significantly more likely to post toxic messages after seeing such behavior from their own side. This “ingroup contagion” proved stronger than any reaction to insults from opponents. When users saw hostility from the other side, they often responded defensively but not as intensely. The strongest predictor of new toxic posts was exposure to toxicity that came wrapped in familiarity.
The Pull of Belonging
The finding reflects a deeper social mechanism. People online do not only communicate as individuals; they perform as representatives of their group. When members of the same political community use harsh language, others interpret that tone as part of the group’s identity. To fit in, they copy it. It’s a form of social mirroring shaped by loyalty, not simply by outrage.
On platforms built around likes, replies, and visibility, such behavior brings quick social rewards. Users who echo the style of their peers can gain approval and attention. Over time, that cycle of validation turns hostility into habit. The researchers call this dynamic a “contagion,” not because people lose control, but because social media design makes imitation effortless.
Where Algorithms Meet Identity
What makes this process powerful is how platforms amplify identity signals. Algorithms that prioritize engagement naturally favor emotional and confrontational content. As posts from one’s own side fill the feed, the distinction between passionate support and open hostility blurs. Even moderate users may start matching the tone they see most often.
Interestingly, the study found that echo chambers (online spaces filled only with like-minded users) were not the worst environments for contagion. People surrounded by uniform opinions already hold firm views and feel less need to prove their loyalty. Toxicity spread faster in mixed networks, where users are exposed to both allies and opponents. The friction of that diversity appeared to intensify imitation within groups.
From Rudeness to Intolerance
The research also revealed a subtle but worrying shift. Exposure to mild forms of impoliteness, such as sarcasm or insults, sometimes led users to post not just rude comments but openly intolerant ones. In other words, small breaches of civility could snowball into expressions that reject or devalue other social groups. What begins as casual frustration can evolve into language that undermines democratic norms of respect and inclusion.
Breaking the Loop
While the study focused on political communication in Israel, the patterns it uncovered apply broadly. Across digital platforms, people tend to model their tone on the behavior of those they identify with. That human impulse is what keeps communities coherent, and it is also what makes them vulnerable to turning sour.
Understanding this dynamic shifts some of the blame away from algorithms alone. Social media’s design does encourage hostility, but much of the toxicity that circulates online grows out of ordinary acts of imitation. Each time users echo the anger of their peers, they reinforce the idea that aggression is part of belonging.
The next time a heated post from a familiar account flashes across the screen, it may help to pause before responding in kind. What feels like standing with one’s side might simply be feeding the very cycle that keeps social media meaner than it needs to be.
Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.
