Twitter Unveils Its Latest Transparency Report, Harkening Back To The Company's Trials And Tribulations Throughout Early To Mid 2020

Twitter has published the latest installment in its series of transparency reports, looking back at how the app dealt with everything tossed its way from January to June of 2020.

This installment is the 17th in Twitter's series of reports, which are geared toward giving users an honest look at how the application is run and managed. An Insights rundown, featured on the company's blog, breaks down the efforts developers are making to keep the platform a safe and friendly place for everyone. The timing is apt, because 2020 gave them a lot of fodder to work with.

First things first, as has more or less become customary when discussing a tech company's impact on its userbase: how did Twitter handle the COVID-19 pandemic? The company launched a campaign against misinformation, enacting policy changes for the platform in March of that year. Developers themselves issued strikes to 4,658 accounts, while algorithms and extensive machine learning dealt with over 4.5 million accounts participating in spamming and the like.

The company also doubled down on spam in general, with a reported 54% increase in anti-spam enforcement observed throughout the period. This, combined with a 16% increase in spam reports, is attributed to the revamped measures taken during the pandemic.

Twitter's anti-terrorism policies also made themselves felt, albeit to a much smaller extent than the COVID misinformation ones. Accounts banned for breaking those regulations rose by 5%, and 94% of those accounts were identified by Twitter's own detection efforts. Twitter has also collaborated with the Global Internet Forum to Counter Terrorism, leveraging its database in attempts to identify online extremists and the like.

The platform also cracked down on child sexual exploitation (CSE), with a reported 68% increase in enforcement under that policy. Material pertaining to CSE was then reported to the National Center for Missing and Exploited Children in an effort to help the highly unfortunate victims of such heinous actions.

Let's do a quick rundown of other notable entries. Content promoting self-harm and suicidal tendencies fell by a reported 49%, while harassment and unsolicited abuse on the platform dove by 34%. In an unfortunate bit of news, DMCA copyright strikes became 15% more frequent on the platform. Restrictions on private information sharing rose by 68%, a troubling statistic since Twitter didn't clarify what that increase means, or what qualifies as private information sharing. Non-consensual nudity (NCN) was also countered harshly, with a reported 58% decrease in such content observed. It is a sickening pity, however, that NCN and CSE are still prevalent enough on the platform to warrant inclusion in the blog post.

In other news, the platform received 42,220 legal demands from across 53 countries to remove content from specified accounts, which numbered 85,375 in total. 96% of all requests originated from five countries: Russia, Turkey, Japan, South Korea, and India. It seems Asia is very particular about what's allowed online and, more specifically, what isn't.

Twitter, in a series of closing remarks, noted that the pandemic affected its policy implementations throughout 2020. However, with the development team having finally adjusted to working under social distancing restrictions and the new lay of the land, improvements are bound to come. The company further pledges to make these biannual reports even more transparent in an effort to maintain userbase satisfaction. For future reference, Twitter has even announced a multi-year initiative aimed at bringing more consistency to these reports, although details on what that entails were not elaborated on.
