Too Human?

How AI Wins Our Trust (And Sometimes Fools Us)

This fall, I’m going to be poking at some uncomfortable questions about human-like AI.

Like…why does it feel so convincing even when it’s wrong? Why do we trust it more than we should? And how much of that comes down to the fact that we can’t help but treat it like a person?

Truth is, I’m doing more than poking. I’m doing a deep dive into how anthropomorphism shows up in large language models, drawing on psychology, human-computer interaction, information science, and AI design. How much is built in by design? How much do we bring to it ourselves? And what can we do so we don’t get fooled?

Along the way, I’ll share examples, odd little moments from my experiments, and a few “wait… did that just happen?” stories from others. I’ll mix insights, concerns, and practical ways forward, without getting too heavy.

This isn’t about making you afraid of AI. It’s about making you curious, and sharper. I want to figure out how we can get the best from these tools without being led into over-trust, by design or by habit.

So if you’ve ever had an AI conversation that gave you a strange jolt of “Whaaa? That’s weird…”, stick around. And maybe tell me about it. Those moments are where the real learning begins.

Join me as I explore these uncomfortable questions. Subscribe to my Substack to follow the series.


