New American Legislation Could Hold Social Media Apps Responsible For Spreading Harm Through AI

The growing presence of AI across every corner of the tech world is a reality most of us cannot ignore. But a new report sheds light on how recently introduced legislation in the US could spell trouble for social media apps.

The news comes as a Republican senator and a Democratic senator introduced a new bill that would hold social media apps liable for harmful content produced through AI technology.

As explained on Hawley's webpage, such a law would strip away the immunity that currently shields top social media apps when they use AI tools to generate content. This means we would see greater protection from content created through AI for the sole purpose of causing harm.

In case you were not already aware, the industry is full of deepfakes, and their numbers have grown thanks to AI. These fabricated images and videos try to pass themselves off as real, or as depicting someone they are not. Moreover, their popularity continues to explode as we speak.

Ordinary individuals who did nothing wrong, and never uttered the words attributed to them, could be dragged into consequences that destroy their world. Under this bill, that would no longer go unchecked, as the platforms spreading such content could be held accountable.

The immunity that apps have enjoyed under Section 230 paved the way for many of them to put out destructive and biased content with no checks and balances in place. As long as views and engagement were being amplified, everything was going well for the app, and it was least bothered about anything else.

Hence, we can see why this new push toward stricter regulation is being talked about, as senators try to get ahead of the game on AI trends. There seems to be a huge risk of misinformation and deepfakes spreading across such platforms.

But the bill is still a little confusing to some of us, as the concept of liability is not spelled out clearly. When users produce images through the DALL-E image generator and then post them on other platforms like Twitter, would Twitter be held accountable if something went wrong? Or would the creators of the end product be the ones responsible? After all, the picture was produced by them.

Other than that, the details need to clearly identify which tools these apps would be covered for. As we speak, platforms are running trials with generative AI, and users are producing content across various apps too.

If the law ties liability to distribution, every platform would need to be very transparent about how content spreads in order to address the matter. And when it comes to creating posts, it might also hold platforms back from developing their own AI features.

Right now, it appears the bill will have an awfully difficult path to acceptance, considering how much is at stake and how fast generative AI is growing. A lot needs to be taken into consideration too.

But whatever the case may be, one thing is for sure. Such news makes us realize that the government is definitely concerned about AI and is ramping up efforts to reduce future risks.

In this regard, expect to see a lot more AI regulation as we move forward, along with new approaches to how content gets managed in the future.

