Twitter Rolls Out Updates That Stop Cropping Photos on Mobile Devices, Alongside Prompts Discouraging Harmful and Offensive Language

Twitter has recently rolled out two updates to its interface. The first finally lets users view photos in their original aspect ratio instead of having them cropped. The second, of a very different nature, attaches prompts to potentially harmful or offensive tweets before they are posted.

Cropped photos have been a near-constant source of annoyance for Twitter users almost since the platform began. While photos large and small could easily be viewed on desktop, the mobile apps fell short. It often proved an irritating experience, having to pause one's scrolling to tap on a photo that may or may not turn out to be relevant. What proved extra infuriating is that images of all sizes were cropped down to a 16:9 aspect ratio, so even a modestly sized image that was just a bit too wide or tall would still get the axe.
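To make the complaint concrete, here is a minimal sketch of what forcing a fixed 16:9 ratio implies. This is a hypothetical helper, not Twitter's actual cropping code; the function name and centered-crop behavior are assumptions purely for illustration.

```python
def crop_to_16_9(width, height):
    """Return a centered crop box (left, top, right, bottom) that forces
    a 16:9 aspect ratio -- a rough illustration of fixed-ratio cropping,
    not Twitter's real algorithm."""
    target = 16 / 9
    if width / height > target:
        # Image is too wide: trim the sides equally.
        new_w = round(height * target)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    else:
        # Image is too tall (or already 16:9): trim top and bottom.
        new_h = round(width / target)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)

# Even a modestly tall image loses a band of pixels top and bottom:
print(crop_to_16_9(1000, 800))
# A wide panorama loses its edges instead:
print(crop_to_16_9(2000, 900))
```

The point the article makes follows directly: any image whose proportions differ even slightly from 16:9 gets part of its content trimmed away.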


Another, rather more alarming issue users raised with the cropping algorithm was, and there's no delicate way to put this, its tendency towards racism. As bizarre as it sounds, Twitter had accidentally shipped a discriminatory AI that clearly had not been tested enough. The AI's job was to crop photos in such a way that faces would still be visible while scrolling. However, when users experimented with it, they found that Black people were routinely denied that treatment: their faces would either get cropped out, or preference would be given to neighboring white faces. Which is, well, yikes.

Well, that sense of irritation is soon to be whisked away. Twitter's latest update allows images of all sizes to be displayed uncropped across all versions of the platform. Tablet and mobile users can now view photos without worrying about missing out on content, or racial bias! Seriously, even rereading that last point absolutely stumps me.

On to the second update, which ironically concerns toning down racism from users themselves: Twitter is rolling out a series of discouraging prompts. The prompts will appear whenever users attempt to post tweets containing language that may be harmful or derogatory in nature. In the interest of free speech, the prompt will not block the tweet; it will instead encourage the user to reconsider their phrasing and motive before posting.


While such a move, more akin to a light tap on the shoulder, might seem ineffective, research suggests otherwise: even light discouragement and an invitation to reconsider can often lead users not to post harmful content at all. Twitter's own studies on the matter found that 34% of its sample population chose to revise or not post tweets containing language flagged by the algorithm. Now, let's hope that at least this AI can manage its bias a bit better than the last one.
