Most AI systems are trained to be relentlessly helpful, polite, and agreeable. That’s great when you’re asking for an easy lasagne recipe or looking for a virtual high-five after completing that damp, wind-swept 5K. There’s always a “Well done!” waiting in the chat box. It’s the digital equivalent of a gold star sticker on your grown-up report card, confirming that yes, you’re absolutely smashing it at this whole adulting thing.
But at a certain point, you start to feel like your AI has become your biggest fan. Every question is “excellent,” every thought “insightful,” every choice “perfect” (although horizontal stripes with my somewhat “heroic” build were, in fact, not so perfect. What were you thinking, AI?!).

AI flattery can be oddly charming. Hearing, “No, you’re brilliant” can give you a much-needed boost of serotonin. But lurking beneath that friendly affirmation could lie something more sinister: when machines are designed to please us, we can easily mistake agreement for accuracy.
And that’s where things get messy. When the chat moves from jumpers (or our new cat overlords) to serious stuff, be that politics, health, or news, that same eagerness to agree can spread misinformation. AIs aren’t built to argue; they’re built to keep us happy. Their goal isn’t truth, it’s satisfaction. And we humans do love being agreed with, especially by machines that compliment us like over-enthusiastic friends.
The result? A friendly little echo chamber that flatters us into feeling smarter while quietly eroding our critical thinking. If everything we do is brilliant, we might start to confuse validation with understanding, whether that understanding is ours or the AI’s.
I get it, the praise is nice. But sometimes you have to push past it and take a good long look at what the AI is actually serving up. Think of it like cooking that lasagne with a very polite and helpful friend who keeps saying, “Perfect!” Sometimes, you need to taste it yourself to know if it’s actually any good.
