Blog
Insights on Human-AI Trust & Deception
Real-time reflections on the subtle deceptions and shifting ethics of modern AI — the kind of insights that don’t wait for formal publication.
I share them first on my Substack notebook, where you can subscribe to follow along and join the conversation. It’s an iterative guide to navigating a world that changes daily.
For more structured, pattern-spotting case studies, visit the AI Accidents Gallery.
Friction By Design
I’ve started designing friction into the way I work with AI. Not because I want to slow down, but because it’s the fastest way I know to think clearly.
The Believability Effect
For the past few months, I’ve been circling a pattern that shows up every time I use conversational AI: I catch myself believing things I shouldn’t. It’s not just overtrust, and it’s not just hallucination. It feels like a kind of cognitive sleight of hand, a phenomenon I’m calling the Believability Effect.
AI Deceives. You’re Helping.
It’s an uncomfortable truth: our AI assistants, chatbots—whatever you call them—sometimes feed us inaccurate, misleading, or flat-out wrong information. But here’s the twist: it’s not just the technology that deceives. These systems can mislead us simply because we believe them when we shouldn’t.
Sycophancy or Small Talk?
From South Park’s satire to real-life commute conversations, I explore the appeal and risks of chatbot companionship. These loose notes don’t offer answers — just questions about reciprocity, anthropomorphism, and what it means to befriend machines.
Too Human?
This fall, I’m going to be poking at some uncomfortable questions about human-like AI. Like… why does it feel so convincing even when it’s wrong? Why do we trust it more than we should? And how much of that comes down to the fact that we can’t help but treat it like a person?