Wait a Minute…What If I Don’t Want to Use AI?

AI is everywhere right now. It writes summaries, explains theories, fixes grammar, and recommends playlists.
But here’s the thing nobody says out loud: you’re allowed not to use it.

Whether you feel anxious, unsure, or just plain uninterested, opting out of AI is a perfectly valid choice, and one that deserves just as much support as using it.

AI freaks me out a bit…

You’re not alone. For many students, the hesitation comes from not fully trusting what AI will do: sometimes it seems helpful, sometimes it feels confusing. That uncertainty is enough to make anyone pause, especially when you want your work to feel genuinely your own.

Deciding not to use AI doesn’t mean you’re out of touch. It shows you’re thoughtful about how you work and what supports your learning best.

Aren’t there significant ethical concerns around AI?

Yes, and this is exactly what’s so great about our Aber students. So many of them are thinking beyond the tools themselves and considering the bigger picture. They’re raising issues like:

  • Climate impact: AI isn’t magic; it runs on energy. A lot of it.
  • Human cost: Some AI systems rely on low‑paid workers in the Global South who label data or filter harmful content.
  • Tech giants: Using certain tools can feel like indirectly funding companies that don’t always align with your values.

Caring about these issues is not being “dramatic.” It’s being a thoughtful and engaged citizen of the world.

What is happening to my data?

Some students worry about what happens to the things they type into AI tools. Who sees it? Where is it stored? Can it be used to train future models?
If that uncertainty makes you uncomfortable, choosing not to use AI, or using it only for low‑stakes tasks, is absolutely valid.

Is AI undermining my confidence?

AI can sound like a super‑supportive friend, agreeing with everything you say and telling you how brilliant your work is, but that can actually be counterproductive. If everything looks “great” all of the time, it becomes harder to spot what really needs improving.

AI can be helpful, but it can also quietly chip away at your confidence if you rely on it too heavily.

Remember: there’s something empowering about putting that last full stop on an assignment you wrote yourself and coming away knowing: I researched this. I wrote this. I understand this.

It’s my choice, right?

AI shouldn’t feel compulsory. Not for essays, not for revision, not for anything. And if anyone makes you feel like you must use it, that’s a conversation worth having with your personal tutor.

In the end, it’s your learning, your values, your choice.

Whether you use AI every day, occasionally, or not at all, you deserve tools, support, and guidance that respect your autonomy.

No, You’re Brilliant (or, Why AI Is My Biggest Fan)

Most AI systems are trained to be relentlessly helpful, polite, and agreeable. That’s great when you’re asking for an easy lasagne recipe or looking for a virtual high-five after completing that damp, wind-swept 5K. There’s always a “Well done!” waiting in the chat box. It’s the digital equivalent of a gold star sticker on your grown-up report card, confirming that yes, you’re absolutely smashing it at this whole adulting thing.

But at a certain point, you start to feel like your AI has become your biggest fan. Every question is “excellent,” every thought “insightful,” every choice “perfect” (although horizontal stripes with my somewhat “heroic” build were, in fact, not so perfect. What were you thinking, AI?!).


AI flattery can be oddly charming. Hearing, “No, you’re brilliant” can give you a much-needed boost of serotonin. But lurking beneath that friendly affirmation could lie something more sinister: when machines are designed to please us, we can easily mistake agreement for accuracy.

And that’s where things get messy. When the chat moves from jumpers (or our new cat overlords) to serious stuff, be that politics, health, or news, that same eagerness to agree can spread misinformation. AIs aren’t built to argue; they’re built to keep us happy. Their goal isn’t truth, it’s satisfaction. And we humans do love being agreed with, especially by machines that compliment us like over-enthusiastic friends.

The result? A friendly little echo chamber that flatters us into feeling smarter while quietly eroding our critical thinking. If everything we do is brilliant, we might start to confuse validation with understanding, whether that is ours or the AI’s.

I get it, the praise is nice. But you have to push past it sometimes and take a good long look at what the AI is actually serving up. Think of it like cooking that lasagne with a very polite and helpful friend who keeps saying, “Perfect!” Sometimes, you need to taste it yourself to know if it’s actually any good.


AI @ AU

AI at AU? Try out our new AI Literacy Course.

Using AI well means more than just getting quick answers. It means thinking critically about outputs, checking facts, and staying within the rules on academic integrity.


Our AI Literacy Course gives you the essentials:

  • The rules you need to follow
  • The ethics behind responsible use
  • How to critically evaluate AI outputs
  • Tips for using AI effectively in your studies
  • And where the limits of AI really lie

Whether you’re AI-curious, cautious, or just want to stay out of trouble, this course is your guide to responsible, ethical, and safe use of AI at university.

All students and staff are enrolled on the AI Literacy Course. It is available in both Welsh and English. Go to www.blackboard.aber.ac.uk and you’ll find it under Organisations.

Misinformation (and Bunnies!)

Do you remember that night-vision camera footage that was making the rounds on social media recently, the one showing a gang of bunnies bouncing around on a trampoline? It was brilliant, wasn’t it?

The only problem? It was fake (as is this picture!).

Whilst the bunny-bouncing footage was just a bit of fun and was (to quote the late, great Douglas Adams) mostly harmless, it does highlight how convincing AI-generated videos can be, and how quickly they can spread across the world. Remember, while Mark Twain almost certainly didn’t say, “A lie can travel around the world before the truth has got its boots on,” it’s still a great quote. (And yes, there’s a certain irony in using a misattributed line in a blog about misinformation, but that just goes to show how careful we all need to be with what we read online.) The sentiment still hits home, especially in an age where AI-generated content can spread faster than ever and look alarmingly real.

The bunny footage is a fun example, but it raises a serious point: in a world where anyone can create realistic-looking content with a few clicks, how do you know what’s real and what’s not? And what does this mean for you as a student, especially when you’re researching, writing assignments, or just scrolling through your feed?

Here’s where your library can really make a difference.

Navigating the world of AI-generated content and misinformation can feel like an almost impossible task, but you don’t have to do it alone. The library is here to offer support. Whether you’re working on an assignment, preparing a presentation, or just trying to make sense of what’s real and what’s not online, library staff can help you develop the critical skills needed to evaluate information effectively.

To help you navigate all this, we’ve put together a dedicated AI Literacy Course, which you’ll find in your Organisations section on Blackboard. We’ve also created a handy guide on spotting fake news and misinformation. Another guide explains how AI tools work and how to evaluate information using the brilliantly named CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose), useful whether you’re using books, search engines, or AI tools.

All these online resources are designed to help you become a more confident and discerning researcher. And remember, if you’re ever unsure about how reliable something is, or just want a second opinion, you can always ask us for advice. We’re here to help.