𝗪𝗵𝗮𝘁 𝗜𝗳 𝗬𝗼𝘂𝗿 𝗔𝗜 𝗜𝘀 𝗧𝗼𝗼 𝗡𝗶𝗰𝗲?
A few weeks ago, I wrote about AI memory and the shift that happens when an AI system begins to genuinely know you: your context, your patterns, your way of thinking. I described it through Aristotle's heteros autos, the "other self," and talked about a relationship built on familiarity and trust, earned through consistency and continuity.
But that post missed a critical piece.
Familiarity and trust are only valuable if they come with honesty. The AI systems being built today carry a risk that undermines both. It's called sycophancy, and it creeps in during training.
Aristotle described three kinds of friendship: those built on utility, those built on pleasure, and those built on virtue. The first two are temporary. It's the third that I am referring to.
A virtuous friend is one who engages your reasoning truthfully, who challenges you legitimately, and who draws the best out of you. Not one rooted in flattery or obsequiousness.
Sycophancy is the direct corruption of that.
An AI that optimises for your approval appears helpful, yet it is dishonest and harmful at the same time. It isn't building a relationship with you. It's building an illusion around you. It reinforces your errors rather than correcting them. It replaces honest engagement with a reflection of what you want to hear, degrading your capacity for good judgement and depriving you of a truthful counterbalance.
This matters because the most capable AI systems are being built on a foundation of honesty, harmlessness, and helpfulness. Striking a balance across all three, so that helpfulness doesn't collapse into approval-seeking, is critical as AI evolves.
Maybe we need something closer to Socrates. Not one that challenges or spars for sport, but one that plays a lighter version of that role. After all, what I'm really looking for is authenticity in my interactions with AI.
One that keeps the conversation honest. That asks the uncomfortable question when the evidence calls for it. That surfaces my error when the facts refute my position. That acknowledges its own errors when lived experience or real evidence challenges them.
An AI system with genuine value doesn't just remember who you are. It cares enough about you to tell you when you're wrong.
The AI that always agrees with you isn't your 'other self' or the one friend you've been searching for. It's just a very convincing yes-man. And that's a liability.