Our Approach
AI generates answers faster than humans can evaluate them.
At Orange Gate Labs, we study how people judge AI and turn our insights into tools, talks, and guidance.
Using AI well isn’t just a technical problem.
It’s a human judgement problem.
Our Philosophy
Human Judgement First
AI can generate information, but humans still decide what to trust, question, and act on. Understanding how judgement works is essential to using AI well.
Credibility Signals Matter
People rarely evaluate AI outputs by checking facts. Instead, we rely on signals like fluency, structure, and confidence to judge credibility.
Conversational AI Changes Perception
When AI communicates in our language, it’s easy to treat it as a partner (or even a friend). This tendency to anthropomorphize AI shapes how much we trust it.
Research to Practice
We study how humans interact with AI systems and translate our insights into tools, talks, and practical guidance.
Meet Janneke
Janneke (“Yonica”) Ritchie is the founder of Orange Gate Labs, where she studies how humans judge AI and why we often believe it more than we should.
Her work explores how language, signals, and social cues shape our perception of AI capabilities. She translates these insights into tools, talks, and guidance that help people work with AI without outsourcing their judgement.
She speaks on why AI seems smarter than it is, and has presented on human–machine interaction and emerging technologies in a range of contexts, including at the National University of Singapore.
Our Legacy
Orange Gate Labs builds on decades of experience helping large financial institutions adopt new technologies and new ways of working, often at pace and at scale.
Our work has long focused on the human side of technological change: redesigning workflows, introducing new digital systems, and studying how people interact with increasingly capable machines.
That work included designing and testing robot coworker prototypes: early explorations of human–machine collaboration.
That experience laid the foundation for our current focus: understanding how humans interpret, evaluate, and respond to AI systems.
HestaHub
This project remains close to my heart. Showcased at Collision 2019, HestaHub was about envisioning how the home could be an interactive part of the social ecosystem to support aging in place.
Saffi
Our commitment to helping people adopt new technologies led us to create Saffi, a concept Social Media Correspondent. Always ‘photo-ready’, Saffi was about enabling a positive first-touch experience with robots in the wild.
Elly
Bank of Montreal wanted to make a statement at Elevate. So we created a robot talent scout co-worker to generate buzz and drive interested tech talent to the BMO table.
Let’s Talk
AI is evolving quickly, and understanding how humans work with these systems is becoming increasingly important.
If you’re interested in speaking, collaboration, or exploring how people interact with AI, I’d love to hear from you.