Field Notes & Essays
Reflections and field notes from my work on human judgement in the age of AI. I write about how conversational AI shapes credibility, trust, and decision-making — and why these systems can sometimes feel smarter than they really are.
Much of the conversation about AI focuses on what the technology can do.
My work looks at something slightly different: how humans judge it.
Conversational AI doesn’t just produce answers. It produces signals: fluent language, confident explanations, and familiar social patterns. Those signals shape how intelligent, knowledgeable, and trustworthy a system appears.
But human-AI interaction is rarely clean or predictable. Our interpretations shift. Context matters. Even the same output can feel credible one moment and questionable the next.
The writing here reflects that reality. Some pieces are field notes from research in progress. Others explore patterns emerging from real interactions with AI systems.
Together they form an ongoing attempt to understand how we decide what to believe when AI speaks.
My Favourite Prompt
Insights about using AI as a mirror and a sparring partner, not as a ghost-writer.
Friction By Design
I’ve started designing friction into the way I work with AI, because it’s the fastest way I know to think clearly.
The Believability Effect
It’s not just overtrust, and it’s not just a hallucination. It feels like a kind of cognitive sleight of hand. The Believability Effect.
AI Deceives. You’re Helping.
It’s not just the technology that deceives. These systems mislead us because we believe them when we shouldn’t.
AI as a Commute Companion
How does it feel to use AI on your commute? I promised to try it out and get back to you with my findings…