Work-shy Students, Outsourced Thinking: GenAI in Education

While public bodies see an urgent need to integrate generative Artificial Intelligence tools into schools' curricula, teachers are already noticing a sharp decline in performance and creativity.

Nacho Kamenov & Humans in the Loop / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

I spoke with university professors from different fields at that time of year when students sit their final exams and teachers are drowning in ChatGPT-generated incongruities.

“I’ve never had to fail so many people at once for referencing non-existent articles, mixing up authors’ names, or citing digital identifiers that don’t match the original source,” says a philosophy teacher. “Students writing essays on the EU AI Act even cited texts I had written myself, attributing them to entirely different organizations. I would click on the hyperlinks only to find out they didn’t even lead to the original text,” adds the same professor, based in Catalonia, who happens to work for a digital rights organization.

Less ambitious students cannot even be bothered to check the AI-generated, copy-pasted text, a political science teacher noticed: “One of my students left the classic disclaimer at the bottom of the text: ‘ChatGPT can make mistakes. Consider checking important information.’” (I don’t want to play the devil's advocate, but this even happens in the media, when ChatGPT's typical suggestions appear unedited at the bottom of an article.)

AI-generated essays and test answers have become a problem to be addressed in the education sector. In Uruguay, professors at the University of the Republic’s psychology faculty requested the annulment of a series of exams after claiming some students had used AI – they finished the test in barely three minutes and obtained top grades. In China, tech giants such as Tencent and DeepSeek disabled some services to prevent students from cheating during university entrance exams. A recent investigation in the UK detected almost 7,000 proven cases of cheating using AI tools between 2023 and 2024.

The absence of accountability

Teachers' biggest frustration is the lack of accountability, as they feel left alone by public bodies and regulators. Should teachers respond to AI-powered cheating individually or collectively? The latter is not feasible without the universities' approval or support.

Approaches to dealing with the situation vary, since there is no binding regulation in place. Estonia, one of the top performers in global education rankings, announced it will provide students and teachers with their own AI accounts and integrate AI tools into a program called “AI Leap.” In the United States, Ohio State University’s provost talked about making students “bilingual”: “fluent in both their major field of study and the application of AI in that area.”

There’s another problem: Generative AI is convenient for students and saves them time, but it may be dulling their motivation to think for themselves. One teacher told me how her students had been unable to design a restaurant logo after first resorting to ChatGPT for ideas. When asked to draw their own concepts by hand, hardly any of the sketches differed from the initial idea provided by the generative model.

Many studies on GenAI in education focus on measuring efficiency and performance. Recent studies from Chinese and Australian universities suggest that higher education students may feel more efficient when using the tools, but they also become dependent on them. Some of these studies' findings should be taken with a grain of salt, as they rely heavily on surveys rather than more objective data. However, even the tools' developers acknowledge their negative impact on critical thinking.

Time invested in a task and the accuracy of the output are fairly easy to quantify, psychologist Ujué Agudo, who specializes in human-technology interaction, explained to me. Variables such as creativity, reviewing skills, or the degree to which students delegate the work are harder to measure empirically. Yet these appear to be the most affected.


This is an excerpt from the Automated Society newsletter, a bi-weekly round-up of news on automated decision-making in Europe. Subscribe here.

Naiara Bellio (she/her)

Reporter

Photo: Studio Monbijou, CC BY 4.0

Naiara Bellio covers privacy, automated decision-making systems, and digital rights. Before joining AlgorithmWatch, she coordinated the technology section of the Maldita.es foundation, addressing disinformation related to people's digital lives and leading international research on surveillance and data protection. She also worked for Agencia EFE in Madrid and Argentina and for elDiario.es. She has collaborated with organizations such as Fair Trials and AlgoRace in researching the use of algorithmic systems by public administrations.