Generative AI in Learning and Teaching: Case Study Series

We are working on a series of case studies to share practices of using Generative AI in Learning and Teaching Activities.

In this series of blogposts, colleagues who are using Generative AI in their teaching share how they went about designing these activities.

We’re delighted to welcome Dr Megan Talbot (met32@aber.ac.uk) from the Department of Law and Criminology in this blogpost.

Case Study # 2: Law and Criminology Essay

What is the activity?

We designed an assessment to improve AI literacy skills in our family law module.

The students were given a normal essay question: “To what extent should British law recognize prenuptial agreements?”. 

They were also presented with ChatGPT o1's response to the same question.

The students were advised that their objective was to write an essay in response to the question. They were free to use the AI response in any way they wanted: they could build on it, use it as a starting point for research, or ignore it entirely. They were told that we would not reveal how the AI essay would score if submitted without modification, but they were free to submit it unchanged if they wished (none did).

We explained that, with the increased use of AI tools, they will need not only to use AI outputs competently and responsibly, but also to demonstrate that they can add value an AI cannot. Therefore, they should approach the task as an opportunity to show that they can perform better than the AI.

What were the outcomes of the activity?

The students generally did very well. We recorded fewer failing marks (below 40%) than in previous years, as well as fewer marks below 50%. Very high performing assignments tended to use the text provided by the AI far less than those scoring lower.

How was the activity introduced to the students?

They were provided with the normal assignment briefing sheet, as well as a lecture session on how to approach the assessment. The briefing document included more guidance than normal to help overcome any uncertainty about how to approach the assessment. This included specific guidance on ways they might improve on the AI answer, such as greater use of case law, evidence of understanding that case law, examining more critical arguments advanced by academics, and looking at the peer-reviewed literature and writings by legal professionals. Students were also specifically warned about hallucinations (the tendency of AI to provide false information in a way that appears “confident”) and the need to fact-check the AI if they were going to rely on it.

What challenges were overcome?

We received a number of questions from higher performing students asking “do I have to use the AI response?”, to which we responded “no”. Students generally seemed uncertain about what they were allowed to do, despite the considerable guidance given in the initial briefing document and accompanying lecture.

Unfortunately, a significant number of students were tripped up by failing to fact-check one of the case descriptions that ChatGPT used, which was inaccurate. Feedback was left on those essays to remind them of the need to fact-check AI resources.

How did it help with their learning?

We did not survey the students on this assignment specifically, but in the SES several of them reported that they found it very useful in understanding the limitations of AI. In conversation, a number of students said it helped them overcome initial procrastination, as they were given a starting point to build from. Higher scoring students reported reading the AI output but doing their own research and writing as normal, referring back to the AI only to make sure they had not overlooked any core points by mistake.

How will you develop this activity in the future?

We are considering reducing the length of the essay and incorporating a short reflection on their use of AI as part of the assignment. Additionally, we will expand the warning to fact-check AI outputs to specifically mention that real cases may be cited but given misleading or false descriptions, or may be cited to support points the case does not address.

Keep a lookout for our next blogpost on Generative AI in Learning and Teaching case studies.
