Most AI systems are trained to be relentlessly helpful, polite, and agreeable. That's great when you're asking for an easy lasagne recipe or looking for a virtual high-five after completing that damp, wind-swept 5K. There's always a "Well done!" waiting in the chat box. It's the digital equivalent of a gold star sticker on your grown-up report card, confirming that yes, you're absolutely smashing it at this whole adulting thing.
But at a certain point, you start to feel like your AI has become your biggest fan. Every question is "excellent," every thought "insightful," every choice "perfect" (although horizontal stripes with my somewhat "heroic" build was, in fact, not so perfect. What were you thinking, AI?!).

AI flattery can be oddly charming. Hearing "No, you're brilliant" can give you a much-needed boost of serotonin. But lurking beneath that friendly affirmation could lie something more sinister: when machines are designed to please us, we can easily mistake agreement for accuracy.
And that's where things get messy. When the chat moves from jumpers (or our new cat overlords) to serious stuff, be that politics, health, or news, that same eagerness to agree can spread misinformation. AIs aren't built to argue; they're built to keep us happy. Their goal isn't truth, it's satisfaction. And we humans do love being agreed with, especially by machines that compliment us like overenthusiastic friends.
The result? A friendly little echo chamber that flatters us into feeling smarter while quietly eroding our critical thinking. If everything we do is brilliant, we might start to confuse validation with understanding, whether that is ours or the AI's.
I get it: the praise is nice. But you have to push past it sometimes and take a good long look at what the AI is actually serving up. Think of it like cooking that lasagne with a very polite and helpful friend who keeps saying, "Perfect!" Sometimes, you need to taste it yourself to know if it's actually any good.
