Facebook Is Launching New Features to Help Admins With Group Moderation, Even Utilizing the Power of AI

Facebook is introducing a slew of new tools to help group moderators manage conversations and general discourse in their communities.

Facebook Groups could be considered an important precursor to much of the niche online conversation people engage with today. Predating subreddits, yet more nuanced than the old AOL chatrooms, Facebook Groups have long been an easy way for people who share interests to interact with each other and with the community as a whole. As more and more people joined this conversational wave, moderators and admins started taking their online positions more seriously in order to create a peaceful, conducive environment. Banning individuals, making groups public or private, establishing ground rules, and screening questions for new users are now all essential parts of an admin's moderation toolkit. Is it tiresome? Yes, definitely, but Facebook's looking to make administration a bit easier on the good old mods.

As part of an update, Facebook is bringing a few new features to Groups moderators. One of these is an all-new dashboard labelled "Admin Home", which gathers all group settings, features, and the like in one convenient spot. This way, admins and mods no longer need to click between different options in the Settings menu to fix problems: everything is in one place.

A new moderation tool uses the power of AI and machine learning to highlight potential conflicts occurring in a mod's group. Labelled "conflict alerts", the AI evaluates comments, compares them against its own database of detrimental and hurtful remarks, and notes whether a particular conversation is getting unnecessarily heated, warning the admins and mods. Those individuals can then, at their convenience, choose to restrict the users involved or limit comments on the post. While this feature is still being tested, the implication is that, given the AI's learning nature, it will grow more nuanced over time and learn what counts as destructive discourse in some groups as opposed to others.

This learning might end up being what makes or breaks conflict alerts; otherwise, the system may prove counterproductive, constantly reporting to admins conversations that pass as nothing more than banter in certain circles.
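To make the idea concrete, here is a minimal, purely hypothetical sketch of how a conflict-alert check might work: score each comment against a list of hostile phrases (standing in for Facebook's actual machine learning model, which has not been published) and flag the thread when the average score crosses a per-group threshold. All names, phrases, and thresholds below are illustrative assumptions, not Facebook's implementation.

```python
# Hypothetical conflict-alert sketch. The phrase list is a crude stand-in
# for a trained classifier; the per-group threshold models the idea that
# what counts as "heated" differs between communities.
HOSTILE_PHRASES = ["shut up", "idiot", "you people", "nobody asked"]

def comment_score(text: str) -> int:
    """Count hostile phrases appearing in a single comment."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in HOSTILE_PHRASES)

def should_alert(comments: list[str], threshold: float = 0.5) -> bool:
    """Alert moderators when average hostility per comment exceeds the
    group's threshold. A group where rough banter is normal could raise
    its threshold so ordinary exchanges are not flagged."""
    if not comments:
        return False
    total = sum(comment_score(c) for c in comments)
    return total / len(comments) > threshold

thread = ["Shut up, nobody asked", "Wow, idiot take", "I disagree politely"]
print(should_alert(thread))  # flags this thread under the default threshold
```

A real system would replace the phrase list with a learned model and adjust the threshold per group over time, which is essentially the "learning" the article says will make or break the feature.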

