Under its AI Act, the European Union has confirmed that regulators can now ban AI systems deemed to pose an unacceptable level of risk.
February 2 is the first compliance deadline on this front, bringing into effect a comprehensive framework for what is and is not acceptable. The law entered into force on August 1, and its schedule of compliance deadlines is now taking center stage.
The Act is designed to cover a wide range of use cases in which the technology could interact with people, whether through a consumer application or in a physical environment.
The bloc's approach defines four risk levels. Minimal risk carries no regulatory oversight. Limited risk, which covers applications such as customer service chatbots, comes with light-touch oversight. High risk applies to AI used for serious, life-affecting matters such as healthcare, and these systems face strict regulatory oversight. Finally, there is a category for unacceptable risk: applications in this tier are not regulated but banned outright.
Several examples of unacceptable behavior were provided. These include social scoring, manipulating people's decisions, exploiting user vulnerabilities, and predicting criminal behavior based on a person's appearance.
AI that uses biometrics to infer an individual's characteristics, such as gender, and AI that collects real-time biometric data in public places for law enforcement purposes are likewise prohibited. The same goes for AI that infers people's emotions in certain public settings and for systems that scrape images from the internet or from security cameras to build facial recognition databases.
Any company found using AI for the prohibited applications above will face fines, regardless of where it is headquartered. Penalties can reach €35 million (roughly $36 million) or 7% of annual revenue, whichever is greater. As of now, no date for when fines will begin to be enforced has been shared.
However, all companies need to be compliant by February 2, and the next major deadline arrives in August of this year, when the competent authorities will be identified and the enforcement provisions will come into effect.
Some see the February 2 deadline as something of a formality. More than 100 companies signed a voluntary pledge to begin applying the AI Act's principles ahead of its application. Signatories such as Amazon, OpenAI, and Google committed to identifying AI systems in their portfolios likely to be classified as high risk. Notably, several others, including Mistral, Apple, and Meta, declined to join the pact.
So what does this mean? According to experts, the companies that didn't sign don't engage in the prohibited practices anyway, and they are already mindful of which use cases are banned.
The AI Act will not operate in isolation in the region. It will interact with existing legislation such as GDPR, and the question on many people's minds is how the different laws will fit together, since each comes with its own set of requirements.
Image: DIW-Aigen
Read next: OpenAI Announces New AI Agent Designed to Assist People Carry Out In-Depth and Complex Research