When I first set out to write about the ethics of using generative AI, I thought it would be a single blog post. But the deeper I dug, the more there was to explore. So, instead of just one post, this topic has turned into a spin-off series of its own (think House of the Dragon to Game of Thrones!).

Over the past few weeks, we’ve explored how generative AI tools like ChatGPT and Perplexity are transforming how users interact with library resources. But with these advancements come important ethical considerations.
The first, and arguably most important, step in using generative AI responsibly is understanding your university’s AI policies. Familiarising yourself with the guidelines ensures you stay academically honest and allows you to make informed decisions about AI use.
Here are some things to keep in mind:
University-wide Guidelines
- Review the university’s official policies on using AI in academic work.
- Check for specific rules about AI in assignments, exams, or research projects.
Departmental Advice
- Look for any AI-related guidance provided by your academic department.
- Pay attention to instructions or updates from your module tutors about AI use.
Module-specific Rules
- Some modules may have unique rules about using AI tools.
- Check your module handbook or ask your module coordinator if you’re unsure about what’s allowed.
Consequences of Misuse
- Misusing AI or failing to acknowledge its role could be considered academic misconduct.
- Be aware of the potential consequences, such as:
  - Failed assignments.
  - Disciplinary action.
  - Harm to your academic reputation.
By understanding these policies, you can use AI responsibly, meet your university's expectations, and maintain academic integrity.