At the Checkpoint


An advisory service for organizations serious about human oversight of AI

The gap no one is talking about

Most organizations now have AI in their processes. Most also have a human who is supposed to review what the AI produces before a decision is made.

The governance conversation has focused on where to put those review moments. Almost no attention has gone to what happens inside them.

A human who glances at a well-structured, fluent AI output and approves it in thirty seconds is technically in the loop. The real question is whether meaningful review is taking place: what the output makes easy to see, easy to miss, and easy to skip altogether.

The answer has real consequences for risk, for decisions, and for the value your organization is getting from its AI investment.

What this engagement does

At the Checkpoint is a focused advisory engagement that examines one AI-assisted process in your organization and evaluates how human review is functioning in practice.

The engagement identifies where oversight is working, where review quality breaks down, and where hidden operational risk may be accumulating inside seemingly compliant workflows.

The goal isn’t to slow teams down. It’s to strengthen review quality, improve escalation and accountability practices, and help organizations apply human judgment where it matters most.

This work is informed by original research into how conversational AI outputs influence human judgment, combined with two decades of experience leading technology, governance, and operational change initiatives inside complex organizations.

Example review checkpoints:

  • AI-generated patient notes reviewed by clinicians

  • AI-assisted legal or contract review

  • AI-assisted audit, compliance, or risk assessment workflows

What’s involved

  • A scoping conversation to identify the right process and review moment

  • Observation of a live or recent AI-assisted workflow

  • Interviews with two or three people involved in review and decision-making

  • A confidential findings report: what is working, what is not, and why

  • A debrief with practical recommendations your team can act on

Be an early adopter

At the Checkpoint is launching with a small number of founding organizations. Early partners receive preferential pricing, close collaboration with Janneke Ritchie, and the opportunity to shape an emerging approach to AI oversight and review quality.

This engagement is right for you if you suspect your AI oversight is less robust than it appears on paper, and you want an independent perspective grounded in research, not software sales.

What people are saying

Janneke’s presentation was engaging and informative. Her delivery was well matched to our audience, making it relevant and accessible.

We highly recommend her to anyone looking to better understand how to work with AI.

– Hamilton Association of Volunteers

Interested in a keynote or workshop?