Study Links Frequent AI Chatbot Use to Lower Scores in Programming Course

A new study from the University of Tartu suggests that computer science students who turned most often to AI chatbots during a core programming course tended to achieve lower grades. While many found the tools useful for quick support, the research points to a risk: frequent reliance may hinder skill development.

How the study was set up

The research involved 231 students enrolled in an object-oriented programming course built around Java. The class followed a flipped model: each week included lecture videos, online quizzes, homework tasks, and in-person seminars. Two major tests and a final exam set the benchmarks for performance, alongside continuous coursework.

Students were invited in the eighth week to complete a detailed survey about their experiences. About 72 percent of the class responded, allowing the researchers to link survey answers directly with grades. Among them, 68 percent were men and 32 percent women, a gender distribution broadly typical of the discipline.

Who used chatbots, and who didn’t

Nearly 80 percent of respondents said they had tried an AI assistant at least once during the course. Most used the tools only occasionally, though about half of those users engaged with them more regularly. Just 3.9 percent of the overall class reported weekly use, indicating that heavy reliance was rare.

The 20 percent who avoided chatbots gave varied reasons. Some pointed to clear course instructions and said they had no need for extra help. Others preferred traditional approaches such as peer support or official documentation. One student explained that “googling problems often gives a clearer and more accurate answer,” while another admitted they simply enjoyed “solving things with my own head whenever possible.”

How students used AI tools

Among users, the most common applications were debugging, understanding example code, and checking assignment solutions. Students also turned to chatbots for more unusual tasks, including translating working Python code into Java, breaking down instructions that were unclear in a second language, or generating data for group projects. A smaller group used the tools like private tutors, discussing concepts step by step before starting their own work.

Speed and constant availability were the strongest attractions. One student described the assistant as “like a private teacher who answers immediately,” while another valued the freedom to “ask dumb questions without feeling embarrassed.” Students said these features helped them resolve errors faster than searching online or waiting for staff guidance.

Yet frustrations were equally common. Many noted that the assistants sometimes “made something up instead of admitting it didn’t know,” while others grew irritated when solutions included advanced topics not yet taught in the course. Several complained that the tools often rewrote their code unnecessarily rather than simply pointing out problems.

Performance link and statistical findings

The most striking result was the relationship between frequency of chatbot use and exam performance. Spearman's rank correlation analysis showed a moderate negative link with the first programming test (ρ = –0.315) and weaker negative links with the second test, the final exam, and overall course points. In contrast, there was no measurable connection between grades and how helpful students said they found the tools.

This suggests that frequent users were often the ones who struggled more. As the authors noted, the pattern could mean that weaker students turned to AI more often, or that reliance itself limited learning.
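To illustrate the statistic the authors used, the sketch below computes Spearman's rank correlation from its standard definition (the Pearson correlation of the ranks, with ties sharing an average rank). The data is invented for demonstration only; it is not the study's data, and the function name `spearman` is a hypothetical helper, not the authors' code.

```python
# Minimal Spearman rank correlation, implemented from the standard
# definition. NOTE: the example data below is invented, not from the study.

def _ranks(values):
    """Return 1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j across a run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho for two equal-length, non-constant sequences."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)  # assumes neither input is constant

# Hypothetical data: chatbot use per week vs. test score.
# A perfectly monotone decreasing relationship yields rho = -1.0.
print(spearman([1, 2, 3, 4, 5], [90, 81, 75, 62, 50]))  # -1.0
```

A value of –0.315, as reported for the first test, indicates a moderate monotone tendency rather than a strict rule: higher-frequency users tended toward lower scores, with plenty of individual exceptions.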

Shifts in study habits

Frequent users reported mixed effects on their learning behaviors. Many felt they struggled less with homework and were motivated to attempt more tasks. But they also admitted they explored fewer solution paths and sought less help from teaching assistants. As one student observed, “the more I use AI, the less I think by myself.”

Notably, most students rejected the idea that chatbots prevented them from engaging with course materials, though some hinted at the risk of over-dependence.

Broader lessons

The study highlights both the promise and pitfalls of AI support in education. Students valued chatbots as fast, judgment-free helpers, particularly for debugging. Yet overuse appeared tied to weaker performance, raising questions about when and how these tools should be integrated.

The authors cautioned that the findings reflect a single course at one institution and rely on self-reported data. They suggest future work should include multiple universities and combine surveys with direct usage logs to paint a fuller picture.

For educators, the results point to the need for structure. Chatbots can enhance learning when used as supplements, but unchecked reliance may limit the development of problem-solving skills. Integrating AI into course design, rather than leaving students to navigate the tools independently, may offer a path that balances efficiency with deeper learning.

