Tech giant OpenAI is in the hot seat again after an Austrian privacy advocacy group accused the company of violating the GDPR.
The complaint concerns its popular ChatGPT tool, which allegedly generates false information about real people.
The non-profit lodged its complaint with the Austrian data protection authority yesterday, arguing that OpenAI is not doing enough to stop ChatGPT from spreading misinformation about individuals.
As the group notes, hallucination, in which AI models confidently produce fabricated material, is a well-known problem, and one we keep seeing as AI adoption picks up pace.
Chatbots have repeatedly failed to comply with EU rules when processing information about people, which is worrisome for obvious reasons.
The group argues that systems unable to produce accurate information about individuals must face strict transparency checks, because publishing false data about people is a serious offense that warrants immediate attention.
These legal requirements are central to how the company must operate, and the complaint highlights a recent case in which OpenAI refused to fulfill a user's request to erase false data the tool produced about them, including an incorrectly stated date of birth.
The company simply replied that it was unable to correct the error, a response that left many wondering what comes next.
The group is now asking the authority to investigate how OpenAI ensures the accuracy of the personal data its models produce. If the company cannot, it could face serious fines as a penalty.
Under the GDPR, personal data must be accurate, and users must have full access to any data about themselves held in a company's databases. If that is not happening, regulators can impose significant fines to force compliance.
The GDPR also requires companies to keep data accurate, to disclose where it came from, and to honor requests to correct or erase false data. Failure to do so can bring penalties of up to 4% of yearly turnover.
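For context on the numbers involved: the GDPR's fine cap for this tier of violation (Article 83(5)) is the higher of EUR 20 million or 4% of total worldwide annual turnover. A minimal sketch of that calculation, with the turnover figure purely hypothetical:

```python
# GDPR Article 83(5): the maximum fine is the HIGHER of EUR 20 million
# or 4% of the firm's total worldwide annual turnover.
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine (EUR) for a given yearly turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Hypothetical example: a firm with EUR 2 billion in yearly turnover.
print(gdpr_fine_cap(2_000_000_000))  # 80000000.0, i.e. EUR 80 million
```

For smaller firms the flat EUR 20 million floor dominates, which is why the "4%" figure mostly matters for large companies like OpenAI.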
For its part, OpenAI says it cannot disclose which sources produced the information ChatGPT gives out. The company also admits it is struggling to correct errors in model output, which it describes as an area of ongoing research.
It says it is committed to complying with all applicable regulations, including the GDPR, to ensure a safe experience for its users. But a steady stream of cases like this one is not a good look for obvious reasons.
Clearly, this is not the first time we have heard complaints on this front: the company has repeatedly been accused of falling short of the GDPR's transparency rules. With such cases mounting, we can bet the organization will be on its toes to keep misinformation to a bare minimum.
Read next: Meta Announces More Staff Layoffs, This Time In Oversight Board To Better Streamline Work