The GLTR (Giant Language Model Test Room) system has been developed to detect whether a piece of text was generated by a language model. With a growing number of AI-based tools designed to produce fake news and spun content, this software helps identify such material: according to its creators, GLTR can raise the accuracy of spotting machine-generated text to around 72%. The insight behind it is that language models tend to pick highly probable words at each step, so while the grammar and sentence structure of generated text look fine, the word choices are unusually predictable (and can produce contextually mismatched phrases). When such content is scanned by GLTR, each word is highlighted according to how predictable the model finds it, and machine-generated text shows a telltale pattern of mostly green and yellow. Text properly written by a human is less predictable, because vocabulary and ways of expression differ from person to person.
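The idea above can be sketched in a few lines of code: rank each observed word by how high it sits in a model's predicted next-word list, then bucket the rank into GLTR-style colors. This is only a toy illustration using a tiny bigram counter as the "language model"; GLTR itself queries a large pretrained model (the function names and the bigram stand-in here are assumptions, not GLTR's actual code).

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Count, for each word, how often each next word follows it."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def color_tokens(model, text_tokens):
    """Assign each word a GLTR-style color by its rank in the model's
    predicted next-word list for the preceding word."""
    colors = []
    for prev, tok in zip(text_tokens, text_tokens[1:]):
        ranking = [w for w, _ in model[prev].most_common()]
        # Words the model never predicts get a huge rank (unpredictable).
        rank = ranking.index(tok) if tok in ranking else 10**6
        if rank < 10:          # top 10 most likely -> green
            colors.append("green")
        elif rank < 100:       # top 100 -> yellow
            colors.append("yellow")
        elif rank < 1000:      # top 1000 -> red
            colors.append("red")
        else:                  # anything rarer -> purple
            colors.append("purple")
    return colors
```

A text whose words are all high-ranked comes out green and yellow (the machine-generated signature), while surprising word choices show up as red and purple.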
- Also read: Using Algorithms to Guard Against Deepfakes
Genuine human-written content, by contrast, produces a mix of purple, red, yellow and green highlights, which shows that the word choices are unique and meaningful. With the help of this system, users will be able to flag not only machine-generated web content but also plagiarized academic writing. Another feature that could prove groundbreaking in the field of social media marketing and branding is that GLTR will also help identify fake and spun tweets that spread misinformation.

Read next: Why Machine Learning is Going to Explode and How You Can Prepare for it