38% Of AI Governance Tools In Use Are Ineffective Or Problematic, New Study Finds

AI governance, or the lack of it, has been a trending topic of debate ever since the rise of generative AI took center stage.

But what if we told you that more than a third of the tools serving this process have been deemed faulty or of little use?

Thanks to a new study by the World Privacy Forum, some very interesting findings have been published in this regard. The research comprised a comprehensive review of 18 leading governance tools used to evaluate AI systems, and 38% of them were found to include faulty fixes.

The fact that offerings designed to evaluate AI systems for fairness and effectiveness are faulty themselves is truly eye-opening. It makes one wonder what has been going on for so long, and how it went unnoticed until now.

Some of the concerns have to do with poor quality-assurance practices and serious glitches in the tools themselves. Others are deemed unfit because they are used outside the specific purpose they were designed for.

Additionally, a large share of these tools was launched by organizations one would never second-guess. After all, they are not only known for their reputation in the tech industry but are also the same companies producing the AI systems that such tools are designed to evaluate.

Common names on the list include Google, IBM, and Microsoft, among others.

One leading example is IBM's well-known AI Fairness 360 tool, which the US Government Accountability Office has highlighted as useful for gauging ethical principles and factors like transparency, safety, fairness, and accountability. However, as per the study, the tool has drawn growing criticism over its effectiveness in doing the job correctly.

The head of the World Privacy Forum added that AI governance offerings continue to limp along. A major reason is that no established protocol is in place to ensure the right quality is maintained or assured when the tools are in use. And when there is no quality control in place, the issues just multiply.

One of the major concerns is that many tools come with no instructions on how to use them effectively, or even what their actual intended use is. Similarly, no information is provided about the context they are designed for, and no conflict-of-interest disclosures were found either. What is even more concerning for the researchers is that a tool designed for one context was being used in others as well, raising alarm because no standard was being followed.

All this time, the public has been misled into assuming the tools are doing their jobs right and keeping their safety in check. But in reality, it is far from that, as confirmed by the study.

And tools that generate such a false sense of security are certainly not winning any appreciation now that the study has gone public.

Ever since the EU's AI Act was passed, followed by the launch of the AI Executive Order in the US, more countries have been trying to follow in their footsteps and adopt similar toolsets for the same intended purpose.

With alarming research of this kind on the rise, so many flaws in the system are being highlighted, with abundant room for improvement, especially as the new year 2024 dawns upon us all.

The OECD continues to serve as the global gatekeeper for keeping such governance tools in check, and it has already admitted there is an issue and that things need to improve.

Image: DIW-AIgen
