Top Tech Giants Including OpenAI, Microsoft, And Google Pledge To Follow White House’s New AI Principles

The world of generative AI can seem to have no checks and balances, with top tech firms falling well short on regulation.

However, new reports confirm that leading companies in the tech sector, including OpenAI, Microsoft, and Google, are pledging to follow the new AI principles outlined by the White House.

The news comes after the Biden administration stressed the importance of stronger safety guidelines to ensure proper AI regulation, hoping that industry leaders would play an active role in making sure innovative AI genuinely helps users rather than harms them.

All of these firms have agreed, of their own accord, to follow the rules laid out in the new voluntary agreement, while Congress prepares legislation for formal AI regulation.

For now, the goal is to ensure responsible use: officials hope AI can deliver more benefit and less harm across the board without compromising user safety.

Earlier this year, the Vice President met with the heads of these tech giants to explain how crucial it was that they make responsible decisions for the community, stressing the responsibility resting on their shoulders and the expectation that all AI products be both safe and secure to use.

Then, in June, President Biden met with the CEOs again to reiterate the same message: user privacy and security should be everyone's top priority.

As for the new principles themselves, they cover measures tied to security and social well-being. The list contains a total of eight commitments worth mentioning.

For starters, companies commit to letting experts test their models for harmful behavior before release. Next comes investment in cybersecurity, followed by encouraging third parties to discover and report vulnerabilities.

The list also calls for flagging societal risks such as bias and misuse, putting more focus on research into AI-related risks, and sharing trust and safety information with responsible organizations.

Other top recommendations in the framework include adding watermarks to AI-generated content so users can tell it apart from human-made material, and using the most capable AI models to tackle major societal problems. A simple sketch of the watermarking idea follows below.
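
To give a rough sense of what a visible watermark on AI-generated content could look like in practice, here is a minimal Python sketch using the Pillow imaging library. It is purely illustrative and assumes hypothetical file paths and a hypothetical function name; the commitments themselves do not prescribe any particular technique, and real proposals (such as cryptographic provenance metadata or statistical watermarks embedded in model outputs) are considerably more robust.

# Hypothetical sketch: stamp a semi-transparent "AI-generated" label
# onto an image with Pillow. Illustrative only; not any company's
# actual watermarking scheme.
from PIL import Image, ImageDraw

def label_ai_image(path_in: str, path_out: str, text: str = "AI-generated") -> None:
    # Add an alpha channel so the label can be semi-transparent.
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Place the label near the bottom-left corner at roughly 60% opacity.
    draw.text((10, img.height - 24), text, fill=(255, 255, 255, 160))
    # Flatten back to RGB and save the labeled copy.
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

# Usage (hypothetical paths):
# label_ai_image("generated.png", "generated_labeled.png")

A visible label like this is easy to crop out, which is exactly why much of the discussion points toward more tamper-resistant approaches.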

It is clear that a lot of work remains, and there is broad agreement on how difficult it is for officials to stay up to date with developments in the AI world.

Along the same lines, a few bills on AI regulation have already been introduced, including proposals that would prevent firms from using liability protections to shield themselves from responsibility for dangerous AI-generated content.

