Meta’s New AI-Powered Fact Checker Is Designed To Verify Entries On Wikipedia

Wikipedia is definitely one of the best sources of information online. But what if we told you that it is also prone to errors that often go unnoticed?

Thankfully, Meta plans to tackle such issues with a new fact-checking tool powered by AI.

If you go down memory lane, Wikipedia was blasted after one US teenager wrote some 27,000 entries on the Scots-language edition despite barely speaking the language. That episode made it clear that while the platform is awfully helpful, it can harbor some really big mistakes too.

Yes, it’s not perfect, but Meta’s attempt at improving Wikipedia is being welcomed by many who are tired of false and misleading edits. While some of those edits are made out of spite, others are simply honest mistakes from people with good intentions.

To help curb such incidents, the Wikimedia Foundation is partnering with tech giant Meta on a project that deserves a special mention. Think of it as two organizations with one common interest: improving citations so the masses can benefit. It sounds pretty convincing to us. What do you think?

Some of the biggest errors arise in the footnotes, where editors make continuous changes with no one to verify or cross-check them. And we can hardly blame them, because the encyclopedia is growing by nearly 17,000 articles each month.

As a result, many citations are incomplete, inaccurate, or missing altogether.

With Meta stepping into the picture, we hear about a new AI-powered tool that automatically scans citations, checks them for accuracy, and recommends alternatives whenever it comes across a passage that has been sourced inaccurately.

Remember, Wikipedia does have its own human editors who try their best to get accurate information out there. But those editors rely solely on common sense and professional experience to get the job done right.

With the introduction of AI, however, submissions are passed through natural language understanding (NLU) models, whose primary goal is to comprehend how the words and concepts in a passage relate to one another.

For reference, Meta says its Sphere database indexes a staggering 134 million web pages, serving as an open index of knowledge. Every time the checker runs, each claim is verified against the sources cited for it.
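Meta hasn’t published the pipeline’s internals here, but conceptually the verification step boils down to retrieving the candidate passage that best supports a claim. Below is a minimal, purely illustrative sketch: it stands in for the real NLU models with simple bag-of-words cosine similarity, and the tiny `sources` list stands in for the Sphere index (all names and data are hypothetical):

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercased word counts; a crude stand-in for real NLU embeddings.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_source(claim, candidate_sources):
    # Rank candidate passages (a stand-in for querying the Sphere index)
    # by similarity to the claim and return the top match with its score.
    claim_vec = tokenize(claim)
    return max((cosine(claim_vec, tokenize(s)), s) for s in candidate_sources)

claim = "Wikipedia grows by nearly 17,000 new articles every month."
sources = [
    "The site adds close to 17,000 articles per month, straining reviewers.",
    "Meta reported quarterly earnings above analyst expectations.",
]
score, match = best_source(claim, sources)
```

In practice the real system uses learned dense representations rather than word overlap, so it can match a claim to a source even when they share few exact words; the ranking step, however, is the same idea.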

To illustrate what its AI tool is capable of, Meta recently published an example of how such a task would be performed and how it could benefit us all.

The example involved an incomplete citation, which the system detected and then fixed by searching the Sphere database for a better alternative. With a stronger reference linked, the results looked very promising.

In the future, Meta hopes to use the tool to curb misinformation more broadly, so that more online information can earn a reputation for trustworthiness. In the meantime, it hopes the tool will help make Wikipedia a more reliable place to find information.

At the same time, Meta hopes the technology will give Wikipedia’s editors the chance to rectify information and produce correct footnotes more efficiently.

Kudos to Meta for taking this step, and to Wikipedia for joining in on the collaborative effort.
