OpenAI Implements New Strategies to Improve Safety of Its Generative AI Systems

The makers of ChatGPT want the world to know that user safety and security are top priorities for its AI systems.

The company elaborated on this front and also confirmed that training of its next AI model has begun, putting a lot of speculation to rest.

Early this morning, OpenAI announced that its board of directors has formed a new Safety and Security Committee, which will evaluate how the firm's safety procedures factor into product development.

The news was shared in a blog post that offered more details, including that the tech giant's CEO would sit on the committee. Other members include directors from the company's board and employees it considers central to the firm, such as the chief scientist and its head of security. A host of leading outside experts will also be retained to serve as consultants to the tech giant.

The blog post stated that the committee's first task is to spend three months evaluating the company's existing processes and safeguards. When that period is up, the committee will share its recommendations with the full board. After the board completes its review, the AI giant will publicly share which of the adopted recommendations are consistent with user safety and security.

News of the new committee comes in the wake of the firm's decision to disband its superalignment team at the start of this month.

That team was tasked with ensuring humans remain in control and that AI cannot supersede them, as the fear of smarter-than-human AI posing an existential threat is very real.
As one might expect, its disbandment left many wondering what steps the company was taking to prevent such an outcome.

Now, we're seeing those fears addressed with this new committee coming into play.

Meanwhile, news of the new frontier model's training is not being taken lightly either. The company vowed that the project would deliver a host of new capabilities in pursuit of its AGI goals, all while ensuring safety and security remain the top priority.

At the start of this month, the tech giant launched the latest version of its GPT-4 model, called GPT-4o. It's certainly more realistic in its approach, featuring lifelike voices and interactions that make users feel they're talking to something very similar to another human rather than a machine. And the best bit is that it's free for all.

