Generative AI in Learning and Teaching: Case Study Series

We are working on a series of case studies to share practices of using Generative AI in Learning and Teaching activities.

In this series of blogposts, colleagues who are using Generative AI in their teaching will share how they went about designing these activities.

We’re delighted to welcome Dr Gareth Hoskins (tgh@aber.ac.uk) from the Department of Geography and Earth Sciences (DGES) in this blogpost.

Case Study # 3: Classroom evaluation of Generative AI in the Department of Geography and Earth Sciences

What is the activity?

This was a classroom evaluation of an AI-generated summary of the scientific concept of ‘flashbulb memory’, as part of a lecture on ‘individual memory’ in the third-year human geography/sociology module GS37920 Memory Cultures: heritage, identity and power.

I prompted ChatGPT with the instruction “Create a 200-word summary of the concept of flashbulb memory”, took a screengrab of the resulting text and embedded it in my lecture slides. The class then had 3 minutes to read it and discuss it at their tables, responding specifically to the questions:

  • What biases does the content create?
  • Whose interests are served?
  • Where are the sources coming from?

ChatGPT summary of the prompt: “Create a 200-word summary of the concept of flashbulb memory”
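
For colleagues who would rather generate such a summary programmatically than screengrab it from the chat interface, here is a minimal sketch using the OpenAI Python client. The model name and client setup are illustrative assumptions, not what was used in this activity, which simply used the ChatGPT web app:

    # Minimal sketch: reproducing the lecture prompt via the OpenAI Python API.
    # Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set;
    # the model name is illustrative -- the activity itself used the ChatGPT web app.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Create a 200-word summary of the concept of flashbulb memory",
        }],
    )

    # The returned text can then be embedded in lecture slides.
    print(response.choices[0].message.content)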

What were the outcomes of the activity?

Discussion didn’t dwell much on the questions I posed but focused more on the ChatGPT text itself, with students proving much more critical of it than I had anticipated. They noted the dull tone, the repetition, the uncertainty surrounding the facts presented, and the vague approach and general lack of specificity. These students showed a surprising degree of GenAI literacy, which was conveyed to the class as a whole. During the discussion, the students became more aware of the utility of GenAI tools, more comfortable speaking about how they use them and might go on to use them, and more alert to how the tools’ limitations and weaknesses might affect the content they generate.

I developed the exercise using the UCL guidance webpage ‘Designing Assessments for an AI-enabled world’ (https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/designing-assessments-ai-enabled-world) and redesigned my exam questions on the module, removing generic appraisals of famous academics’ contributions to various disciplinary debates and substituting hypothetical, scenario-based questions that were much more applied.

How was the activity introduced to the students?

My intention was to acknowledge that we exist in an AI-enabled world, which creates opportunities but also problems for learning. I also used the exercise to introduce the risks relating to assessment and to outline my own strategy for assessing this module: real-life, problem-based seen-exam questions requiring the higher-level skills of evaluation and critical thinking, applied to “module-only” content and to recent academic publications that GenAI essay-writing tools struggle to access.

How did it help with their learning?

The activity helped students become more familiar with using GenAI as a “research assistant” (for creating outlines and locating sources) and created an environment for open discussion about the limitations of AI-generated content in terms of vagueness, hallucination, lack of understanding, and lack of access to in-house module content on Blackboard or to up-to-date research (articles published in the last two years).

How will you develop this activity in the future?

I would flag other systems, including DeepSeek, Gemini, Microsoft Copilot and Claude, discuss their origins and their pros and cons, and, crucially, caution about their environmental and intellectual-property consequences.

Keep a lookout for our next blogpost on Generative AI in Learning and Teaching case studies.

Generative AI in Learning and Teaching: Case Study Series

We are working on a series of case studies to share practices of using Generative AI in Learning and Teaching activities.

In this series of blogposts, colleagues who are using Generative AI in their teaching will share how they went about designing these activities.

We’re delighted to welcome Dr Megan Talbot (met32@aber.ac.uk) from the Department of Law and Criminology in this blogpost.

Case Study # 2: Law and Criminology Essay

What is the activity?

We designed an assessment to improve AI literacy skills in our family law module.

The students were given a normal essay question: “To what extent should British law recognize prenuptial agreements?”. 

They were also presented with the response of ChatGPT (the o1 model) to the same question.

The students were advised that their objective was to write an essay in response to the question. They were free to use the AI response in any way they wanted: they could build on it, use it as a starting point for research, or ignore it entirely, whatever they preferred. They were told that we would not reveal how the AI essay would score if submitted unmodified, but they were free to submit it as-is if they wished (none did).

We explained that, with the increased use of AI tools, they will need not only to use AI outputs competently and responsibly but also to demonstrate that they can add value that an AI cannot. They should therefore view the task as an opportunity to show that they can perform better than the AI.

What were the outcomes of the activity?

The students generally did very well. We recorded fewer failing marks (below 40%) than in previous years, as well as fewer marks below 50%. Very high-performing assignments tended to use the text provided by the AI far less than those scoring lower.

How was the activity introduced to the students?

They were provided with the normal assignment briefing sheet, as well as a lecture session on how to approach the assessment. The briefing document included more guidance than normal to help overcome any uncertainty about how to approach the assessment. This included specific guidance on ways they might improve on the AI answer, such as making more use of case law, demonstrating understanding of that case law, examining more of the critical arguments advanced by academics, and looking at the peer-reviewed literature and writing by legal professionals. Students were also specifically warned about hallucinations (the tendency of AI to present false information in a way that appears “confident”) and the need to fact-check the AI if they were going to rely on it.

What challenges were overcome?

We received a number of questions from higher-performing students asking “do I have to use the AI response?”, to which we responded “no”. Students generally seemed uncertain about what they were allowed to do, despite the great deal of guidance given in the initial briefing document and accompanying lecture.

Unfortunately, a significant number of students were tripped up by failing to fact-check one of the case descriptions that ChatGPT used, which was inaccurate. Feedback was left on those essays to remind them of the need to fact-check AI resources.

How did it help with their learning?

We did not survey the students on this assignment specifically, but in the SES several of them reported that they found it very useful for understanding the limitations of AI. In conversation, a number of students said it helped them overcome initial procrastination, as they were given a starting point to build from. Higher-scoring students reported reading the AI output but doing their own research and writing as normal, referring back to the AI only to make sure they had not missed any core points.

How will you develop this activity in the future?

We are considering reducing the length of the essay and incorporating a short reflection on their use of AI as part of the assignment. Additionally, we will elaborate on the warning to fact-check AI outputs, specifically mentioning that real cases may be cited but given misleading or false descriptions, or cited in support of points the case does not address.

Keep a lookout for our next blogpost on Generative AI in Learning and Teaching case studies.

Generative AI in Learning and Teaching: Case Study Series

We are working on a series of case studies to share practices of using Generative AI in Learning and Teaching activities.

In this series of blogposts, colleagues who are using Generative AI in their teaching will share how they went about designing these activities.

We’re delighted to welcome Dr Panna Karlinger (pzk@aber.ac.uk) from the School of Education in this blogpost.

Case Study # 1: ResearchRabbit

What is the activity?

This activity is focused on finding reliable academic sources for students to use in their coursework. The students are invited to feed a ‘seed paper’ for an upcoming assignment into ResearchRabbit, which uses machine learning to map related literature based on authors, citations, and related topics or concepts. The students are then prompted to choose sources for their assignments and critically evaluate them using the CRAAP test, checking the currency, relevance, authority, accuracy and purpose of the sources to pass judgement on overall reliability before use.

What were the outcomes of the activity?

Students reported increased confidence and ability in finding academic sources and in demonstrating criticality within their work. Despite the vast resources and detailed guidance provided by both teaching and library staff, students often struggle to find relevant sources to support their work; this was successfully addressed where students engaged with the activity.

How was the activity introduced to the students?

This activity was part of a key skills module in which students had prior knowledge of the CRAAP test and of finding sources, and had been given an introduction to generative AI, including a discussion of the opportunities and risks involved as well as efficient and ethical use. Synthesising this prior knowledge, the tool was introduced through a demonstration, and students then used their own devices to find sources for a chosen upcoming assignment on a different module.

What challenges were overcome?

Some students are still wary or sceptical about using AI, or fear being accused of unfair practice, so it was important to demonstrate use cases where they can use AI with confidence to help develop these skills. Some students did not have large-screen devices with them, and the activity was challenging to carry out on a phone; this will have to be considered in the future. Others required more hands-on guidance and support with the activity, largely owing to differences in digital skills and competence.

How did it help with their learning?

It reinforced messages about critical AI literacy and about evaluating outputs and sources in general, reminding students of the importance of criticality in their work. Where students engaged as expected, finding further and often more up-to-date information and resources helped inform the coverage and evaluations in their assignments.

How will you develop this activity in the future?

As we no longer teach the key skills module, there is an opportunity to embed this activity in other modules, for instance in assignment-support sessions or optional drop-ins. These allow for smaller groups of students and more one-to-one time as necessary, which could make the activity more successful, provided that students receive the necessary guidance from the department on the use of AI. It could also become part of research methods modules or the guidance we give to PGRs, as the resource is not only free but also has more advanced capabilities than similar literature-mapping tools, which would be valuable to anyone working on a dissertation or thesis.

Keep a lookout for our next blogpost on Generative AI in Learning and Teaching case studies. If you are using Generative AI in your teaching practice and would like to submit a blogpost, please contact elearning@aber.ac.uk.