James Wood: Improving feedback literacy through sustainable feedback engagement practices


On Wednesday 10 May, the Learning and Teaching Enhancement Unit welcomed Dr James Wood from Bangor University for a session on student feedback design and engagement.

The recording from the session is on Panopto and the PowerPoint slides can be downloaded below:

In the session, Dr Wood outlined:

  • The changes to the feedback NSS questions for 2023
  • The purpose of feedback
  • The move away from a transmission model of feedback to one of action
  • Barriers to student feedback engagement
  • Screencasting your feedback

The next big event for LTEU is our annual Learning and Teaching Conference which is taking place between 4 and 6 July. Bookings for this are already open.

If you have any external speakers that you would like LTEU to invite to next year’s series then please email lteu@aber.ac.uk with your suggestion.

Launch of Turnitin AI writing and ChatGPT Detection Capability


On 4 April, Turnitin will be launching its new AI writing and ChatGPT detection capability, which will be added to the Similarity Report. Before colleagues start using the AI detector, we would like to caveat it with the following quotations from authoritative professional bodies in the sector.

Jisc notes: “AI detectors cannot prove conclusively that text was written by AI.”

— Michael Webb (17/3/2023), AI writing detectors – concepts and considerations, Jisc National Centre for AI

The QAA advises: “Be cautious in your use of tools that claim to detect text generated by AI and advise staff of the institutional position. The output from these tools is unverified and there is evidence that some text generated by AI evades detection. In addition, students may not have given permission to upload their work to these tools or agreed how their data will be stored.”

— QAA (31/1/2023), The rise of artificial intelligence software and potential risks for academic integrity: briefing paper for higher education providers

Please also see the Guidance for Staff compiled by the Generative AI Working Group led by Mary Jacob. The guide outlines suggestions for how we can explain our existing assessments to students in ways that will discourage unacceptable academic practice with AI, as well as red flags to consider when marking.

You can read more about the Turnitin AI enhancement in this Turnitin blog post.

For guidance on how to use this tool, take a look at Turnitin’s:

Turnitin has also published an AI writing resource page to support educators with teaching resources and to report on its progress in developing AI writing detection features.

If you have any questions about using Turnitin’s AI writing and ChatGPT detection capability or interpreting the results, please contact the Learning and Teaching Enhancement Unit (elearning@aber.ac.uk).

External Speaker: James Wood: Improving feedback literacy through sustainable feedback engagement practices 

The Learning and Teaching Enhancement Unit is pleased to announce their next external speaker. On 10 May from 14:00-15:30, James Wood from Bangor University will be hosting an online session on Improving feedback literacy through sustainable feedback engagement practices.

James Wood is Lecturer in Education, Assessment and Taught Postgraduate Lead at Bangor University. Prior to this position, James worked at King’s College London, University College London, Birkbeck, University of London, the University of Greenwich, and Seoul National University.

Session Abstract

Despite the importance of feedback in supporting learning in higher education, there is still much to learn about nurturing sustainable skills for seeking, engaging with, and using feedback. In practice, many students fail to access feedback, and even when courses offer formative assessment in principle, it is not always engaged with or used effectively. It is often argued that students require ‘feedback literacy’ before engagement with feedback is possible. However, in this workshop, we will explore how feedback literacy and receptivity to feedback can emerge as students experience well-designed dialogic feedback practices that offer the opportunity to consider how learning from feedback occurs, its benefits, what constitutes quality and how to evaluate it, and how to develop and execute plans to close the gap between current and target performance. I will also discuss how social and non-human factors entangle with learners’ agency to engage in ways that can serve or limit their participation. I will finish with an overview of how technologies can be used to enhance learners’ ability to use feedback effectively and to develop relationships and communities that can offer powerful collaborative learning opportunities, as well as emotional support and encouragement.

The workshop will take place online using Microsoft Teams. Book your place online.

If you have any questions, please contact the Learning and Teaching Enhancement Unit (lteu@aber.ac.uk).

Contract Cheating: Red Flag Checklist Workshop – Materials Available


On 20 May, the Learning and Teaching Enhancement Unit were joined by Dr Mary Davies, Stephen Bunbury, Anna Krajewska, and Dr Matthew Jones for their online workshop: Contract Cheating Detection for Markers (Red Flags).

With other colleagues, they form the London and South East Academic Integrity Network Contract Cheating Working Group and have been doing essential work and research into the increased use of essay mills and contract cheating.

The session included lots of practical tips to help colleagues detect contract cheating whilst marking.

The resources from the session are available below:

Further information on Unfair Academic Practice is available in the Academic Quality Handbook (see section 10).

Many thanks to the presenters. We’ve had such great external speaker sessions this academic year; take a look at our External Speakers blogposts for further information.

LTEU External Speaker Event: Contract Cheating Detection for Markers


The Learning and Teaching Enhancement Unit is pleased to announce its next External Speaker Event.

On 20 May 2022, 12:30-13:30, Dr Mary Davies, Principal Lecturer in the Business School at Oxford Brookes University, and colleagues will be running a workshop on their interactive red flag checklist resource Contract Cheating Detection for Markers.

Dr Davies will be joined by Stephen Bunbury, Senior Lecturer in Law at the University of Westminster, Anna Krajewska, Director of the Centre for Excellence in Teaching and Learning at Bloomsbury Institute, and Dr Matthew Jones, Senior Lecturer in Politics and International Relations at the University of Greenwich.

This workshop is designed to help staff detect potential contract cheating when marking. The presenters belong to the London and South East Academic Integrity Network Contract Cheating Working Group, which put together an interactive ‘red flag’ checklist resource, Contract Cheating Detection for Markers.

In the workshop, the presenters will explain the red flags that indicate possible contract cheating, through discussing sections of the checklist: text analysis, referencing and the use of sources, Turnitin similarity and text matching, document properties, the writing process, comparison with students’ previous work, and comparison to cohort. Participants will be provided with opportunities to practise using the checklist and to discuss effective ways to help them identify potential contract cheating in student work.

Resources from previous External Speaker events can be found on our blog.

The workshop will take place online using Microsoft Teams. Book your place online.

Please contact the Learning and Teaching Enhancement Unit if you have any questions (lteu@aber.ac.uk).

Rob Nash: External Speaker Materials Available

Why is receiving feedback so hard? (screen grab from Rob Nash’s talk)

On Friday 11 March, the Learning and Teaching Enhancement Unit hosted Dr Rob Nash, a Reader in Psychology from Aston University. Rob is an expert in feedback and ran a workshop looking explicitly at ways in which we can enhance and develop feedback engagement.

A recording of the transmission elements of the session is available on Panopto. You can also view the slides that he used.

For those of you who are interested in further exploring the terrain of feedback, you can take a look at the references that Rob used in his session:

Our next External Speaker event is Dr Mary Davies from Oxford Brookes, who’ll be joined by other colleagues to discuss how we can detect potential contract cheating during the marking process. This workshop will take place on 20 May 2022, 12:30-13:30. Booking for the session is already open.

A reminder as well that our Call for Proposals for our Annual Learning and Teaching Conference is currently open.

External Speaker: Feedback Engagement, Dr Robert Nash


The Learning and Teaching Enhancement Unit is pleased to announce our next External Speaker.

On Friday 11 March, 10am-12pm, Robert Nash will be running a masterclass on strategies for feedback engagement.

Bookings for the event are open via the CPD Staff booking page.

The workshop will take place online via Teams. A link will be sent to you before the event. 

Please see below for the session description and speaker biography.

Session Description

Why don’t they listen to my feedback?

Most people prefer to perform well rather than badly, and one of the primary aims of giving feedback to students is to help them improve their performance. So why do our students so often ignore, resist, and reject the feedback we give them, and what can we do about it? To set the scene for this workshop, we will first consider the extent to which these problems are unique to students. In particular, I will share some insights from diverse domains of social psychology that shed light on the very human motives behind avoiding feedback. With these insights in mind, we will go on to explore the perceived and actual barriers that limit students’ effective engagement with their feedback. We will contemplate practical ways by which we, as educators, might play a role in breaking down these barriers. Throughout these discussions, sustainability is key: with academic workloads spiralling ever higher, our fixes cannot involve us always giving more feedback, quicker feedback, and fancier feedback. I will share my own mixed experiences of trying to implement into my own teaching practice what I’ve learned from almost a decade of working on these problems.

Speaker Biography

Dr Rob Nash is a Reader in Psychology at Aston University, where he is currently Director of Undergraduate Learning & Teaching for the School of Psychology. An experimental psychologist, Rob’s primary expertise is in human memory, particularly the ways in which memories become biased, distorted, and fabricated. However, he also conducts and publishes research on the topic of feedback in education, with an emphasis on how people respond and react when given feedback. Rob is a Senior Fellow of the Higher Education Academy, Associate Editor of the peer-reviewed journal Legal & Criminological Psychology, and co-author of the Developing Engagement with Feedback Toolkit (Higher Education Academy, 2016).

If you’ve got any questions, please don’t hesitate to contact us (lteu@aber.ac.uk).

Updates to E-submission and E-feedback Policy


The updated E-submission Policy has been approved by Academic Enhancement Committee. You can read the updated policy on our E-submission Pages.

The aim of the updated policy was to bring it in line with our Lecture Capture Policy and to provide greater clarity over its scope and its requirements of staff and students.

One big change that will affect the creation of Turnitin submission points is the introduction of a policy that gives students the option to submit multiple times before the deadline and to view their Turnitin originality report. When creating a Turnitin submission point, choose the following settings:

  • Generate Similarity Reports for Students – Immediately (can overwrite until Due Date)
  • Allow Students to See Similarity Reports – Yes

The updated policy outlines:

  • The scope of E-submission and E-feedback
  • How our E-submission technologies make use of your and your students’ data
  • Tips for the submission of electronic work, including deadlines and giving students the opportunity to practise submitting
  • Grading and feedback expectations
  • Electronic submission for dissertations
  • Retention periods
  • Copyright
  • How IT failures are handled
  • Accessibility guidance for staff and students
  • The support available

Our E-submission page outlines all the support and training available for staff on e-submission. If you’ve got any questions about how to use these tools, drop us an email for assistance (elearning@aber.ac.uk).

Demystifying Assessment Criteria

Assessment criteria serve a number of functions: to render the marking process transparent; to provide clarity about what is being assessed and how; to ensure fairness across all submissions; and to provide quality assurance in terms of the subject benchmark statements. While all these reasons are valid and honourable, there are a number of issues at play:

  1. Staff have greater or lesser control over the assessment criteria they are asked to use in marking student work, and interpretations of criteria may vary between different staff marking the same assessment.
  2. Assessment criteria are different from standards, and the difference between the two must be clearly communicated to students (i.e. what is being assessed versus how well a criterion has been met).
  3. Students are often assessment motivated (cf. Worth, 2014), and overemphasis of criteria or overly detailed assessment criteria can lead to a box-ticking approach.
  4. Conversely, criteria that are too vague or too reliant on tacit subject knowledge can be mystifying and inaccessible to students, especially at the beginning of their degree.

This blog post will not pretend to solve all the issues surrounding assessment criteria but will offer a number of potential strategies staff and departments more widely may employ to demystify assessment criteria, and marking processes, for students. Thus, students become involved in a community of practice, rather than being treated as consumers (cf. Worth, 2014; Molesworth, Scullion & Nixon, 2011). Such activities can roughly be grouped chronologically in terms of happening before, during, or after an assessment.

Before the assessment

  • Use assessment criteria to identify goals and outcomes at the beginning of a module, with check-in points in the run up to a deadline.
  • Acknowledge the difficulty of understanding marking criteria. Students are often used to very narrow definitions of success, with clear statements that “earn” them points. Combined with a prevalent fear of failure, this can undermine their understanding of the criteria. Additionally, they may feel that they cannot judge their own abilities well in this new context (university). Group discussions not of what criteria mean, but of what students understand them to mean, can help identify jargon that requires clarification, allow staff to explain their personal understanding (if they are the marker), and allow students to seek clarification before embarking on an assessment.
  • Highlight the difference between criteria and standards to students (the what and the how well – and how this is distinguished in your discipline).
  • Allocating time to a peer marking exercise using the provided criteria with subsequent group discussion will help students better understand the process.
  • Encouraging students to self-mark their work pre-submission using the provided criteria will also help them better understand the process.
  • Using exemplars to illustrate both criteria and standards with concrete examples can be very helpful. This might involve students marking an exemplar in session, with subsequent discussion; annotated exemplars where students gain insights into the marking process; or live feedback sessions where students submit extracts of their work-in-progress that are used (anonymised) to show the whole group the marking process. This then allows for questions and clarification on the judgements a marker makes when working through a submission. Staff may worry that students consider exemplars as “the only right way” to respond to an assessment brief – providing a range of exemplars, especially good ones, can counteract this tendency. Different types of exemplars can be used:
    • ‘Real’ assignments may be best for their inherent complexity (so long as students whose work is used consent to this use and their work is properly anonymised).
    • Constructed exemplars may make assessment qualities more visible.
    • Constructed excerpts (rather than full-length pieces) may be more appropriate when students first learn to look for criteria and how they translate into work as well as allay staff concerns about plagiarism.

During marking

  • Use the same language: make the links between assessment criteria, subject standards, and university standards clear by using the same terminology in feedback as appears in the assessment criteria and subject benchmark statements.
  • Where multiple markers engage with different groups of students on the same assessment, having exemplars to refer to can help ensure clear standards across larger cohorts.

After the assessment

  • Refer students back to the assessment criteria and preceding discussions thereof when they engage with feedback and marks.
  • Reiterate the difference between criteria and standards.

Simply providing students with access to assessment criteria is not enough. It is essential that staff identify and clarify the distinction between criteria and standards, and demystify the language of assessment criteria by examining the tacit subject knowledge staff possess by virtue of experience. Using exemplars, and group discussion of these, to concretise how criteria and standards translate into a submission will give students insights into the marking process that enable them to better understand what they are being asked to do. Lastly, staff should repeatedly encourage students to make use of the assessment criteria while they work on their assessments, which should enable them to feel better prepared and more focussed in their responses.

References

Molesworth, M., Scullion, R., and Nixon, E. (eds.) (2011) The Marketisation of Higher Education and the Student as Consumer, London: Routledge.

Worth, N. (2014) ‘Student-focused Assessment Criteria: Thinking Through Best Practice’, Journal of Geography in Higher Education, 38:3, pp. 361-372; DOI: 10.1080/03098265.2014.919441