Considerations for Generative AI Detection 

This blog post was written by members of the Generative AI Working Group.

The landscape for learning and teaching in the age of Generative AI has been developing rapidly. As staff will be aware, the Unacceptable Academic Practice (UAP) regulation has been updated to address the use of AI in student assessment. The UAP form and penalty table now include ‘Presenting work generated by AI as if it were your own’ (approved by Academic Board, March 2023).

A Generative AI working group, chaired by Mary Jacob, was created in January 2023 to coordinate university efforts. Please see the Generative AI page for current guidance and resources. We are designing training materials for staff and students that will be available well before the next academic year.

Advice for marking 

On 3/4/2023, Turnitin enabled its AI writing detection tool. At present, staff can see the ‘AI Score’ but students cannot. This may change if Turnitin updates the tool later in the year. Please see Launch of Turnitin AI writing and ChatGPT Detection Capability on the LTEU blog and Turnitin’s AI Writing Detection guidance (note that sometimes the same passage can be flagged as both AI-generated and as matching an external source).

There is a clear consensus among experts in the sector that no AI detection tool can provide conclusive evidence.  

This comes from the QAA, the National Centre for AI in Tertiary Education (sponsored by Jisc), and others. You can find links to this evidence on the Generative AI page, including the QAA recording where Michael Webb from the National Centre explains why this is the case. 

If you face a potential UAP case, your professional judgement is key to making the right call. Here is the best advice we can give departments: 

  1. Use the Turnitin AI detection tool in conjunction with other indicators – The Turnitin detection tool can identify red flags for further investigation but cannot provide evidence in itself. 
  2. Check sources – Gen AI often, but not always, produces fake citations. These can seem plausible at first sight – real authors and real journals, but the article doesn’t exist. Check the sources cited to see if they are 1) real and 2) chosen appropriately for the assignment. Is the source on topic? Is it the type of source a student would have read when writing the assignment (e.g. not a children’s book used as a source for a business case study)? This isn’t conclusive proof of AI use, but it is solid evidence that the student hasn’t engaged with the sources properly. 
  3. Check facts – Gen AI often produces plausible falsehoods. The text may sound reasonable but include some made-up ‘facts’. Gen AI is not intelligent, merely a sophisticated predictive-text machine, so if you spot something that seems a bit off, check whether it is a plausible falsehood. 
  4. Check level of detail – AI tends towards overly generic output, e.g. using abstract terms with no concrete definitions or examples. Is the essay or report written in generalities, or does it include concrete examples in enough detail to support the conclusion that a student wrote it? Again, lack of detail isn’t conclusive evidence that the student cheated, but it can be a red flag in combination with other factors. 
  5. Hold an interview to determine authenticity – If you see strong indications of unacceptable academic practice, an interview or panel where the student is asked questions about their assignment may be a way to get conclusive evidence. We know this isn’t feasible at large scale, however. This is a sticky problem not only for our university but across the sector. 

To find out more about Generative AI, see the Weekly Resource Roundup for events and materials, e.g. this article on a study of Turnitin’s AI detection: Fowler, G. A. (3/4/2023), ‘We tested a new ChatGPT-detector for teachers. It flagged an innocent student’, Washington Post. Fowler explains how they tested the tool, what they found, and why it produced false positives.

In short, if staff don’t see anything suspicious other than the Turnitin AI score, we would recommend against bringing a UAP case forward. There’s too much potential for harm if the student really didn’t cheat. 
