EU’s Landmark AI Act Reaches Provisional Agreement, Hailed As A Global Benchmark For AI Governance

The EU’s landmark AI Act has just cleared its first round of serious discussions and negotiations, with lawmakers reaching a provisional agreement on the legislation.

The law is being hailed as a global benchmark for other nations on AI regulation, something many feel is sorely needed right now. And rightly so, considering the amount of debate surrounding the technology.

Lawmakers gathered in Brussels this past week, arguing that the AI Act would serve as an example for others to follow and possibly inspire very similar laws elsewhere.

As per the latest press release on the subject, negotiators agreed on obligations for high-impact general-purpose AI (GPAI) systems, requiring them to meet a range of parameters and benchmarks designed to ensure AI is safe for users in the region.

This entails evaluations to assess risk, serious incident reporting, adversarial testing, and more. The goal is simple: users need to be protected at all times, and the technology needs to remain useful rather than so overpowering that it endangers humanity.

Another major element of this landmark law is that transparency is treated as the top priority. Providers will have to publish detailed summaries of the content used to train their AI models, a level of disclosure that big AI firms like OpenAI have repeatedly refused to provide for ChatGPT.

Another element gives citizens the right to file complaints about AI systems and to receive explanations of decisions made by high-risk systems that affect their rights.

The press announcement did not go into detail about how the law will work in practice or what the exact parameters will be. It did, however, include notes on the framework and on the fines firms will face if they break the regulations.

The fines are variable, depending on which part of the law is violated and the size of the organization in question. They range from 35 million euros or 7% of global annual turnover down to 7.5 million euros or 1.5% of turnover.

The agreement also bans certain AI applications outright, such as untargeted scraping of facial images from CCTV footage, biometric categorization of people based on uniquely sensitive characteristics like race, political opinions, gender, and ethnicity, emotion recognition in the workplace and in educational settings, and social scoring across the board.

Also prohibited are systems that manipulate human behavior to circumvent users’ free will or exploit their vulnerabilities. The rules come with safeguards and narrow exemptions, for instance allowing law enforcement to use biometric identification or recordings when searching for evidence in serious cases.

A final deal is expected very soon, before the year ends. Implementation of the AI law, however, is not expected until 2025, and even that is an early estimate according to experts.

The initial draft of the landmark AI Act was first made public in 2021. Even then it was described as a big deal, setting out what would count as AI and which rules would govern the technology across member states.

But it is worth remembering that this is only a provisional agreement. More rounds of negotiation and deliberation lie ahead, and further changes could still come, adding further delay.

