Artificial Intelligence and Assessment (at the UT)
This article has been prepared for the benefit of T&A course participants after questions about this topic (Vlas, CELT, 25-6-2024). This page focuses mainly on the more 'problematic' side of Artificial Intelligence (AI) when it comes to reliably assessing knowledge and skills. AI certainly also offers many positive opportunities and can be beneficial for teaching, learning, and assessment. For more about the use of AI in all kinds of educational situations, for examples, and to read what CELT is working on, [click here].
AI and assessment
How can we make sure that students really show their personal understanding and abilities if we assess students under conditions that are not fully controlled and supervised? If, for instance, we ask students to write an essay in their own time, at home, how can we make sure it is their own work and not created by an AI tool? If we allow them to use AI, to what extent? For what purposes?
Some assessment methods are more sensitive to the unauthorized use, and the associated risks, of AI than others. CELT/TELT has created a model (see on the right) to illustrate this. By using different types of assessment within a course, you can make the final assessment of someone's knowledge and skills more reliable. However, this is only feasible if the workload for students and teachers and the organizational aspects allow it.
A further option is to redesign the assessment method to make it more resilient against the use of large language models, or even to rethink the learning objectives.
The UT perspective on the use of AI
The University of Twente's perspective is that the use and impact of this technology will only increase. Detection measures will not be the answer to the change in education over time. Executive Board UT: "We must embrace AI technology carefully and strengthen the human factor in education to adapt and deal with the technology responsibly and ethically."
Nevertheless, the use of AI currently poses some problems for the reliable assessment of students' knowledge and skills. For this purpose, a central document was drawn up to specify rules and expectations: Use of AI in Education at the University of Twente.docx (utwente.nl). For every assignment or group of assignments, it should be clear whether students are allowed to use AI and, if so, under what restrictions. If AI tools are allowed, their use must be acknowledged in an appendix listing all tools that were used during the work. Students who use AI without the explicit consent of the instructor and without acknowledgment of the tool in an appendix should be considered to have committed academic misconduct.
The Student Charter, which applies to all students, provides a general description of what is considered academic misconduct or fraud; this also encompasses plagiarism and free-riding. Article 6.7 paragraph 1 of the Student Charter states that it is considered cheating/fraud if "during a test or exam, the student uses (any form of) assistance, resources or devices (electronic or technological) other than the ones whose use the examiner or supervisor has permitted prior to the start of the study unit and/or exam or test, or whose use the student knew or ought to have known was not permitted". Generative Artificial Intelligence programs or applications are considered "assistance, resources or devices", so prior permission from the examiner or supervisor is needed for their use. Article 6.7 paragraph 4 about plagiarism also applies. This implies that the use of Artificial Intelligence requires correct referencing.
If an examiner detects fraud (unauthorized use of AI), he or she should report this to the programme's Examination Board. The Examination Board decides whether cheating/fraud has verifiably occurred and what the consequences will be for the student. The Examination Board of the educational programme drafts Rules & Regulations that specify what measures will be taken in cases of (suspected) cheating/fraud.
How to deal with this situation as a teacher?
What can be ways to deal with the challenges posed by AI? Some suggestions:
- Communicate clear rules for the assessment about what is and is not allowed regarding the use of AI. Articulate your expectations and your position as a teacher, and explain the reasons behind the decisions about AI use.
- Encourage and educate students on responsible and ethical use. Emphasize the significance of academic integrity. Inform the students about the consequences of cheating and plagiarism.
- Make students aware that AI-generated content may be unreliable and should be verified. References, for instance, might be inaccurate or fabricated.
- If AI is allowed, fully or to some extent, teach students how to properly attribute its use.
- AI detection software exists, but in general it has its limits and is not very reliable. There are also privacy, confidentiality, and other issues involved when students' or others' work is entered into a system for checking. Check what is and isn't allowed and trustworthy for this purpose.
- Be aware of signs that AI has been used. You can try out one of the generative AI tools on your own assignment to know what to expect. Be alert to long and/or complex sentence structures, terminology different from what was taught, and other language characteristics you would not readily expect from students. Inconsistency in style or tense may indicate that AI was used for part of the work.
Reconsider the assignment setup
- Create assignments that make it impossible or less likely for students to use AI, or that make its use easily detectable. For instance, use image-based data such as graphs and charts, which are harder to feed into a prompt. Let students use sources that cannot be found on the Internet, for instance by consulting experts in the field or using self-generated data. Offer (authentic, recent, complex) cases that make it difficult to use AI, or that will easily show when it has been used. Ask for higher-order skills, like analysis, critical thinking, creativity, etc.
- Create assignments that explicitly ask for personal ideas, own examples, and reflections. Use guiding questions to make sure that students relate their reflections to, for instance, recent activities or personal experiences.
- Make oral assessments or (poster) presentations part of the evaluation process. Let students explain their thought processes, demonstrate understanding, and defend their work individually or as a group before the assessors and/or peers and the public.
- Ask for other kinds of products than text alone, such as concept maps, infographics, a video or audio presentation, a podcast, etc.
- Supervise the process. Check regularly how students are working on their project and discuss the development of their work over time, while encouraging learning from the process through drafting, revising, and reflecting. Let students, for instance, work on their projects in class during tutorials.
- Let students use AI in a critical way: let them compare the results with each other, check references, and come up with counterarguments or information that shows a different viewpoint.
Used sources:
- Evaluating the authenticity of ChatGPT responses: a study on text-matching capabilities. International Journal for Educational Integrity (biomedcentral.com)
- ChatGPT conundrums: Probing plagiarism and parroting problems in higher education practices | Teel. College & Research Libraries News (acrl.org)