GPT Detection Tools Discriminate Against Non-Native English Speakers, New Study Finds

GPT detection tools are all the rage in an era where AI reigns supreme.

However, a new study is pulling back the curtain on how these tools are inherently biased against people who are not native English speakers.

The study further explains how this bias could end up hurting both the educational and future work prospects of such individuals.

Moreover, tests of several AI detectors show that they frequently mislabel English text written by non-native speakers as AI-generated. That is especially troubling because these tools are marketed as having near-perfect accuracy rates, a claim the findings call into serious question.

The findings come from researchers at Stanford University, who set out to examine how some of the world's most popular AI detection tools actually perform.

To do so, they took 91 English essays written by non-native speakers and ran them through the detectors.

These essays came from the well-known TOEFL exam. To many people's surprise, the detectors incorrectly flagged more than half of them as being produced by AI software.


The results were compared against essays written by eighth-grade students at American schools. Here, the same AI detection tools correctly labeled more than 90% of the essays as human-written.

As the researchers put it, GPT detectors show clear bias against writers who are not native English speakers, and that is a huge deal for obvious reasons.

Most detectors rely heavily on a measure called perplexity, which gauges how predictable a text is, in other words, how easily a language model can guess what comes next in a sentence. Text with low perplexity is judged more likely to have been written by AI.
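As a rough illustration of the idea (not the actual code of any real detector), here is a minimal sketch of perplexity using a toy unigram word model with add-one smoothing. The corpus and sample sentences are made up for demonstration; the point is simply that text built from common, predictable words scores lower perplexity than text full of unusual words, which is exactly the property that can penalize writers with a smaller vocabulary.

```python
import math
from collections import Counter

def unigram_perplexity(text: str, corpus: str) -> float:
    """Perplexity of `text` under a unigram model trained on `corpus`.

    Lower perplexity means the text is more predictable, which is the
    signal perplexity-based detectors treat as evidence of AI writing.
    """
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for unseen words
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        # add-one smoothing so unseen words don't zero out the probability
        p = (counts[w] + 1) / (total + vocab)
        log_prob += math.log(p)
    # perplexity = exp of the average negative log-probability per word
    return math.exp(-log_prob / len(words))

corpus = "the cat sat on the mat the dog sat on the rug"
simple = "the cat sat on the mat"      # built from frequent corpus words
varied = "a peculiar feline reclined"  # unseen, less predictable words

print(unigram_perplexity(simple, corpus) < unigram_perplexity(varied, corpus))
```

Real detectors use large neural language models rather than unigram counts, but the comparison works the same way: the more formulaic the prose, the lower the score.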

The researchers added that it is troubling these programs penalize such writers for no good reason, and that something needs to be done. Non-native writers are easy targets because they tend to draw on a more limited range of words and expressions than native English speakers, which makes their prose more predictable.

Another point worth mentioning is that the research showed such detectors can be tricked simply by prompting the software to produce more literary language. When ChatGPT was asked to rewrite the very essays the detectors had flagged, all of them were then classified as human-written. How shocking is that?

This in turn leads to another valid point: such outcomes could actually push non-native English speakers toward using GPT tools for their written content, a paradoxical result if there ever was one.
