Blog

Insights on Human-AI Trust & Deception

Real-time reflections on the subtle deceptions and shifting ethics of modern AI — the kind of insights that don’t wait for formal publication.

I share them first on my Substack notebook, where you can subscribe to follow along and join the conversation. It’s an iterative guide to navigating a world that changes daily.

For more structured, pattern-spotting case studies, visit the AI Accidents Gallery.

Field Notes, AI Trust · Janneke Ritchie

The Believability Effect

For the past few months, I’ve been circling a pattern that shows up every time I use conversational AI: I catch myself believing things I shouldn’t. It’s not just overtrust, and it’s not just a hallucination. It feels like a kind of cognitive sleight of hand, the phenomenon I’m calling the Believability Effect.

AI Trust · Janneke Ritchie

Too Human?

This fall, I’m going to be poking at some uncomfortable questions about human-like AI. Like: why does it feel so convincing even when it’s wrong? Why do we trust it more than we should? And how much of that comes down to the fact that we can’t help but treat it like a person?
