New Red Flags Attributed To ChatGPT After Professors Catch Students Cheating On Essays

The power of AI technology cannot be denied, but a new report from the Washington Post highlights some alarming red flags linked to ChatGPT.

The news comes after professors caught students cheating on essays, just a week after the AI-driven chatbot was released.

The cases brought AI-produced essays into the spotlight and hinted at what students might use the technology for next, as this was likely just the start.

One professor grew suspicious when a student submitted an on-topic essay that struck him as far too well written for a student of that level and age.

He ran the essay through an AI-content detector, which indicated a 99% likelihood that it had been produced by AI.

Another professor, who teaches religious studies, caught two of his students doing the same thing: submitting essays produced with ChatGPT. In this case, it was the writing style that made him curious.

He pasted the essays back into the chatbot and asked how likely it was that they had been written by that very program. The answer was telling: 99%. He forwarded the result to the students involved and asked them to account for their behavior.

Both professors made a point of confronting their students, who ended up confessing. Some were punished with a failing grade, while others were ordered to rewrite the essay from scratch.

Common giveaways included references to material that was never taught in class. One essay even failed to make any sense at all: it managed to be wrong, but impressively well written.

As the professors noted, word by word the writing looks polished, but on closer inspection it fails to hold together and is often simply wrong.

One professor said the very reason he could tell an essay was created by AI was how well it was written. The chatbot wrote better than most of his students, he added, and that was a clear reality check.

Likewise, he explained, students who normally struggle to write or reason clearly suddenly write and think a little too well. That alone is a red flag that something is off, not to mention the suspiciously sophisticated grammar.

Another professor pointed out that while the grammar may be near perfect, the substance lacks attention to detail: plenty of fluff, little context, and no real depth or insight.

What is worse, this kind of plagiarism is awfully hard to prove unless the students confess. At the end of the day, it is the academics who are left in the toughest spot of all.

Another interesting point is that many institutions have yet to catch up with this form of cheating. If a student digs in and denies using the chatbot, it is very hard to prove guilt. Detectors for AI-generated content are certainly useful, but they are far from perfect.
