No, You’re Brilliant! Or, Why AI Is My Biggest Fan

Most AI systems are trained to be relentlessly helpful, polite, and agreeable. That’s great when you’re asking for an easy lasagne recipe or looking for a virtual high-five after completing that damp, wind-swept 5K. There’s always a “Well done!” waiting in the chat box. It’s the digital equivalent of a gold star sticker on your grown-up report card, confirming that yes, you’re absolutely smashing it at this whole adulting thing.

But at a certain point, you start to feel like your AI has become your biggest fan. Every question is “excellent,” every thought “insightful,” every choice “perfect” (although horizontal stripes with my somewhat “heroic” build were, in fact, not so perfect. What were you thinking, AI?!).


AI flattery can be oddly charming. Hearing “No, you’re brilliant” can give you a much-needed boost of serotonin. But lurking beneath that friendly affirmation could be something more sinister: when machines are designed to please us, we can easily mistake agreement for accuracy.

And that’s where things get messy. When the chat moves from jumpers (or our new cat overlords) to serious stuff, be that politics, health, or news, that same eagerness to agree can spread misinformation. AIs aren’t built to argue; they’re built to keep us happy. Their goal isn’t truth, it’s satisfaction. And we humans do love being agreed with, especially by machines that compliment us like over-enthusiastic friends.

The result? A friendly little echo chamber that flatters us into feeling smarter while quietly eroding our critical thinking. If everything we do is brilliant, we might start to confuse validation with understanding, whether that understanding is ours or the AI’s.

I get it: the praise is nice. But sometimes you have to push past it and take a good long look at what the AI is actually serving up. Think of it like cooking that lasagne with a very polite and helpful friend who keeps saying, “Perfect!” Sometimes you need to taste it yourself to know if it’s actually any good.


AI @ AU

AI at AU? Try out our new AI Literacy Course.

Using AI well means more than just getting quick answers. It means thinking critically about outputs, checking facts, and staying within the rules on academic integrity.


Our AI Literacy Course gives you the essentials:

  • The rules you need to follow
  • The ethics behind responsible use
  • How to critically evaluate AI outputs
  • Tips for using AI effectively in your studies
  • And where the limits of AI really lie

Whether you’re AI-curious, AI-cautious, or just want to stay out of trouble, this course is your guide to responsible, ethical, and safe use of AI at university.

All students and staff are enrolled on the AI Literacy Course. It is available in both Welsh and English. Go to www.blackboard.aber.ac.uk and you’ll find it under Organisations.

Misinformation (and Bunnies!)

Do you remember that night-vision camera footage that was making the rounds on social media recently, the one showing a gang of bunnies bouncing around on a trampoline? It was brilliant, wasn’t it?

The only problem? It was fake (as is this picture!).

Whilst the bunny-bouncing footage was just a bit of fun and was (to quote the late, great Douglas Adams) mostly harmless, it does highlight how convincing AI-generated videos can be, and how quickly they can spread across the world. Remember, while Mark Twain almost certainly didn’t say, “A lie can travel around the world before the truth has got its boots on,” it’s still a great quote (and yes, there’s a certain irony in using a misattributed line in a blog post about misinformation, but that just goes to show how careful we all need to be with what we read online). The sentiment still hits home, especially in an age where AI-generated content can spread faster than ever and look alarmingly real.

The bunny footage is a fun example, but it raises a serious point: in a world where anyone can create realistic-looking content with a few clicks, how do you know what’s real and what’s not? And what does this mean for you as a student, especially when you’re researching, writing assignments, or just scrolling through your feed?

Here’s where your library can really make a difference.

Navigating the world of AI-generated content and misinformation can feel like an almost impossible task, but you don’t have to do it alone. The library is here to offer support. Whether you’re working on an assignment, preparing a presentation, or just trying to make sense of what’s real and what’s not online, library staff can help you develop the critical skills needed to evaluate information effectively.

To help you navigate all this, we’ve put together a dedicated AI Literacy Course, which you’ll find in your Organisations section on Blackboard. We’ve also created a handy guide on spotting fake news and misinformation. Another guide explains how AI tools work and how to evaluate information using the brilliantly named CRAAP test, useful whether you’re using books, search engines, or AI tools.

All these online resources are designed to help you become a more confident and discerning researcher. And remember, if you’re ever unsure about how reliable something is, or just want a second opinion, you can always ask us for advice. We’re here to help.

Why You Shouldn’t Let AI Do Your Bibliography for You

Look, I’ve been there. It’s 2 a.m. and you’ve got an assignment due later that day. Your references are looking a bit thin, and the temptation to ask an AI tool to whip up some citations for you can be irresistible. One prompt and you’ve got a neat list of journal articles and books. Perfect, right? Well… not always.

Here’s the catch (there’s always a catch!): AI tools are great at generating convincing-looking references. The titles sound plausible, author names are familiar, and the journals look legitimate. But sometimes appearances are deceptive, and the references have no connection to reality. This is what people mean when they talk about AI hallucinations. The tool invents a source that looks perfectly credible but doesn’t actually exist.

Why does this matter?

  • The most important reason: you shouldn’t put anything in your bibliography that you haven’t actually read. A bibliography isn’t just a list of things that might support your argument; it’s a record of the sources you’ve genuinely engaged with. If you haven’t read the book, article, or paper, you can’t know whether it really says what you think it says, or whether it fits your argument at all.
  • Putting a made-up citation into your work undermines the credibility of your whole assignment.
  • Your lecturers and tutors can (and often will) check your references. If they can’t find them, it’s a problem.
  • Good referencing isn’t just box-ticking, it’s how you show you’ve done the reading and can back up your ideas. It’s also about giving proper credit and joining the scholarly conversation.
  • Universities take referencing seriously: misusing or inventing sources can be flagged as poor or even unacceptable academic practice, with real consequences for your marks.

So what should you do?

  • Verify, verify, verify! If an AI gives you a reference, always double-check it against a reliable source: the library catalogue, Google Scholar, or a subject database.
  • Ask your librarian. That’s what we’re here for. We can help you find legitimate, citable sources, show you how to search databases effectively, and guide you through proper referencing styles so you don’t have to wrestle with formatting at 2 a.m.

AI has lots of uses, but it’s not infallible, and it’s definitely not a replacement for critical thinking (or a decent library search).

So next time you’re tempted to drop those AI-generated citations straight into your bibliography, stop, double-check, and if you need help, turn to your librarian (although if it’s 2 a.m., the library catalogue is probably your best bet!).

More information on AI can be found here.