Is ChatGPT Superior To Student Writing? The Answer Might Surprise You

Spring is drawing to a close and summer is nearly upon us. That means students are busy with end-of-term examinations, and teachers are working extra hard to keep ChatGPT use to a minimum.

Meanwhile, new research has compared student writing with output from the popular AI tool to see whether the technology surpasses humans in quality. Interestingly, the answer is not quite what many might have expected.

ChatGPT usage has hit a new high, but this new study says it is still not an ideal replacement for human writing. Students continue to outperform the AI, especially when it comes to essay writing.

ChatGPT created a lot of anxiety among teachers, many of whom feared students would stop using their own minds and creativity on writing assignments. Most of these models can produce coherent, grammatically correct, and factually accurate material, which makes cheating easier and undermines the development of students' writing and critical thinking skills. The fact that much of this work largely goes undetected is even more worrisome, and writers have gone so far as to use these tools in research papers for over two years now.

Now, thanks to a group of UK researchers at the University of East Anglia, we know that despite the tool's popularity, it still has a long way to go to match the quality of work produced by actual students.

The study, titled "Does ChatGPT Write like Students?", was published in the journal Written Communication. More than 145 essays written by humans were compared with essays produced by ChatGPT. The findings showed that student essays are much richer in the variety and quantity of engagement features: they offer more interaction with the reader and carry a persuasive quality that the technology lacks.

The authors also concluded that ChatGPT-produced essays contained fewer engagement markers and therefore offered limited interaction with the reader; in short, they lacked a crucial ingredient: personality. Essays written by actual students used a range of engagement strategies that made them far more compelling to read, including direct appeals to the reader, rhetorical questions, and personal asides. The result was clearer, more persuasive communication.

Essays produced by ChatGPT, by contrast, were linguistically fluent but impersonal, which means readers are less likely to enjoy, engage with, or relate to them. The study's authors hope the research will help educators recognize that it is not hard to tell whether students used ChatGPT, and that human writing can readily be distinguished from machine-generated text.

The main obstacle today is the lack of reliable tools for detecting text produced by generative AI. While many websites offer AI-detection checks, they are not always accurate and fail to flag AI-produced text in every instance.

So the next time you pick up an essay assignment and think you can get away with handing it off to AI, remember that AI-produced material is not that hard to spot.


Image: DIW-Aigen
