OpenAI Accused Of Remaining Silent On Major Security Breach Where Hackers Accessed Internal Messaging Systems

The New York Times has published a report revealing that the maker of ChatGPT suffered a major security breach last year.

The report describes how OpenAI chose to keep the matter quiet to avoid negative publicity. It also sheds light on how the company's internal messaging systems were hacked and sensitive information was stolen.

Management informed employees of the breach but instructed them to stay silent and not disclose it to the general public or to law enforcement agencies.

The hacker reportedly stole data from discussions on online forums used by OpenAI employees. Those conversations were highly confidential and concerned new technologies due to launch soon.

Fortunately, the company has confirmed that the hackers were unable to break into the systems where its GPT models are trained, which would have been a far more serious problem for the firm.

Meanwhile, two sources revealed that employees were already concerned about similar attacks being carried out from places like China, seemingly aimed at stealing AI technology. They warned that the issue could escalate into a serious national security problem.

However, the response some employees received was troubling: the company appeared not to treat security as a top priority, leading many to question its motives.

One former technical manager sent a memo to the firm's board arguing that the organization was not doing enough to prevent such incidents. That could cost the company in the future, as the theft of confidential secrets or ideas by foreign threat actors carries serious consequences that OpenAI would have to bear.

News of the breach, and the division it caused among employees, illustrates the kind of issues the firm deals with on a regular basis.

The former manager, Leopold Aschenbrenner, shared more details in a recent podcast appearance, describing the situation as alarming.

OpenAI terminated his contract after accusing him of leaking information outside the organization, but he argues this was not the case and that his dismissal was politically motivated.

OpenAI's past turmoil, from disagreements over superalignment to Sam Altman being briefly ousted from his own firm by the board, suggests there may be more happening behind the scenes than most of us realize.

Other reports have shown leading AI researchers leaving the firm because they feel the board does not prioritize safety and security. With AI potentially posing risks to the world and little being done to address them, those researchers have questioned the company's end objective.

AI is advancing rapidly, and its capabilities are growing with it. That could have a significant impact on the future, even if experts believe the threat is limited for now.

Image: DIW-Aigen
