
How TU/e lecturers deal with their students’ AI use
Students are increasingly using AI to support their studies. It can help them learn or practice for exams, but some students also use AI to generate answers to entire assignments. Four TU/e lecturers explain how they detect AI use and what they allow, discourage, or encourage in their courses.
“Write the observations in a human tone, as if a student wrote them.” Some students add sentences like this to the prompts they give an AI bot, so that the output appears to be written by them.
Data science lecturer Natalia Sidorova from Mathematics & Computer Science knows it happened in her course, because this prompt appeared in the comments of one student’s submitted assignment.
“That was unbelievable. The student had apparently not even read the generated text.” Such cases are reported to the Examination Committee and may lead to sanctions.
Since this year, students in her course no longer receive a numerical grade for the assignment, but a fail, pass, or good. A fail means failing the entire course; a good earns a bonus on top of the exam grade.
According to Sidorova, these adjustments were necessary because of the rapid pace at which the technology is evolving. At the start of the 2024 academic year, for example, she tested one of her assignments by asking ChatGPT to solve it. The bot failed, but within a few months that had changed. “In December, it could solve quite a large part of it.”
According to Sidorova, it is difficult for lecturers to determine exactly how students used AI for an assignment. For some tasks, however, she doesn’t object to students using the tool, such as language correction, finding errors in code, or brainstorming with it as a sparring partner.
Learning module
To develop a strategy for AI in education and help students use AI responsibly, Sidorova launched the education innovation project AIDE within the TU/e DRIVE program. “We first started by looking at where we actually stand.”
Initial results suggest that many students use AI as a tool rather than a shortcut, and that they are aware of the tools’ shortcomings. In some groups there appears to be a correlation between AI use and poorer exam results, “but we still need to analyze that further.” Sidorova ultimately hopes to develop a learning module on responsible AI use based on her findings.
Joep Frens and Janet Huang from Industrial Design also see an increase in AI use, both in their own courses and in the cases that come before the examination board, of which they are both members.
Since the release of ChatGPT (then based on GPT-3.5) in late 2022, Frens has seen his students’ writing improve dramatically. “It went to perfect overnight.” AI use can also often be recognized through citations referring to sources that don’t exist.
According to him, suppressing AI use is not an option; he considers it impossible. “As long as it’s possible to generate something, people will do it.” He would rather educate students to use the technology responsibly. Huang shares this view. In some cases she doesn’t mind students using AI, as long as they are honest about it. “They have to be transparent about how they use it.”
What is and isn’t allowed, and how big the ‘risk’ is that students will generate entire reports or assignments, varies from course to course. In a course they teach together with their colleague Kristina Andersen, the risk of students misusing AI is very small.
Resistant to AI
The course, Digital Craftsmanship, uses the Annotated Portfolio assessment method developed by Andersen. Students must visually document and describe every step in the design and making process. According to Frens, Andersen developed the method long before AI became widespread, but it has turned out to be naturally resistant to AI.
“You can’t replicate the design process—you actually have to go through it to say anything meaningful about it,” he explains. The method forces students to reflect on their choices, including how they do or do not use AI.
If they do use it, it’s important that students don’t outsource their learning process to AI, Huang emphasizes. “We want to make sure the diploma goes to the student, not to an AI agent,” Frens adds.
Bert Sadowski, professor at Industrial Engineering & Innovation Sciences (IE&IS), has had a very different experience from the lecturers above. In his course Economic policy in practice: social cost benefit analysis, he sees that students still lack many AI-related skills and use it very little.
As recently as last year, students could have simply put his assignment into ChatGPT to arrive at answers, “but no one did. It didn’t occur to them,” says Sadowski. “I found it disappointing—I thought they were much further along.”
Syllabus check
Bert Sadowski leads an IE&IS task force that focuses on AI in education and is working on guidelines. As part of this, he also developed a chatbot concept: a prompt that lets lecturers in the department check how AI-proof their syllabi are.
“The bot goes through the syllabus and says: this is an assignment to be done at home; the probability that students will use AI for this is 80 to 90 percent, so don’t use it.” The prompt then provides advice on what type of assignment would be more suitable.
Sadowski tried to roll out the syllabus check across the university, but due to limitations of Microsoft Copilot and issues with access and permissions, that has not yet been possible.
Still, he didn’t want to take any risks this year, judging the likelihood that students would use AI too high. They now have to complete the assignment in Excel while he watches.
Sadowski has structured his course in several layers, gradually integrating AI. In the first phase, students must do the Excel calculations without AI. After that they receive an introduction to AI, where they learn about passive and active prompting. They may then optionally use AI, and at the end of the course there is an assignment in which AI use is mandatory.
In that final phase, students must create a website. “I asked the students whether they had used vibe coding (coding with the help of AI) for this—no one had.” Once again, he expected a different result.
According to Sadowski, it is very important that students master AI skills in addition to the course content, because they will need them in the future. They must learn to use the technology critically and responsibly, he says.
Passive prompts
He also notes that students often still use AI in a very superficial way. “They mostly use passive prompts.” By that he means prompts such as ‘make a summary’ or ‘give suggestions’. Instead, students should use active prompts, he argues, in which they provide clear criteria and ask the AI to reason based on substantive concepts.
“That gives you much better results and more control over the output.” He sees active prompting as a modern academic skill, just as important as writing or data analysis.
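To give a hypothetical illustration (not an example from his course): where a passive prompt says ‘summarize this report’, an active prompt might say ‘list the three largest cost items in this analysis, check whether costs and benefits are discounted consistently, and explain your reasoning step by step’.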
In Sadowski’s view, it is not only students who still have a lot to learn about AI; teachers do as well. Through a survey he is involved in, the university is currently trying to find out the level of AI literacy among staff members.
Sadowski would like to see an AI competence center at the university. Whether that will happen and in what form remains unclear. Carlo van de Weijer of EAISI is currently exploring what role the AI institute could potentially play in this.
This article was translated using AI-assisted tools and reviewed by an editor.