Ofcom is back in the spotlight, this time asking tech platforms to go further in limiting how harmful content spreads online. A consultation that opened this week lays out the regulator's next move under the UK's Online Safety Act, and it's not just about blocking posts; it's about how they spread.
The plan includes new checks to slow or stop the viral distribution of illegal material. Platforms might need to build tools that catch content before it gains traction, not after. Some of this would rely on AI: the same systems that fuel engagement could be turned around to look for fraud, self-harm, or criminal material.
There's more. Livestreaming is under scrutiny again, especially where children are involved. Ofcom wants platforms to stop strangers from sending virtual gifts to children or from recording their streams for later viewing. Some livestreams may even need real-time flagging tools if the broadcast puts someone at risk.
These rules wouldn't hit every site the same way. Larger platforms, the ones with the most reach and risk, would likely face tougher demands, such as proactively scanning for abusive material or signs of grooming.
Changes have started already. YouTube plans to raise its livestreaming age limit to 16. TikTok did something similar back in 2022 after people noticed kids going live from refugee camps and asking for donations.
Still, not everyone’s convinced this goes far enough. Some campaigners say these proposals look more like patches than reform. There’s a growing call for a stronger legal push that forces platforms to design safer systems from the beginning, not fix issues after harm occurs.
The consultation runs until 20 October. Ofcom says it wants to hear from companies, law groups, charities, and regular users. What comes out of this will shape how the UK deals with digital safety — not just next year, but long-term.
Notes: Image: DIW-Aigen. This post was edited/created using GenAI tools.