A practical framework for designing human-AI oversight that actually works
Many AI governance failures happen because organisations implement generic “human in the loop” oversight without considering what they're actually trying to achieve, prevent, or control. This practical approach aligns business reality with emerging AI capabilities.
It synthesises this into 16 research-backed approaches - from “circuit breaker protocols” for irreversible decisions to “feedback learning” for low-stakes automation. Each is designed around giving humans the authority, time, and understanding they need to be genuinely effective, while staying cognizant of AI’s potential to automate routine tasks and perform consistently with the right guardrails in place.
The goal isn’t perfect categorisation but moving beyond generic “human in the loop” to build the systems we actually intend, not the ones we accidentally create.
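To make one of these patterns concrete before the examples: below is a minimal Python sketch of a circuit breaker protocol. The `Decision` record, its `irreversible` flag, and the `approve` callback are all illustrative assumptions, not a prescribed API; the point is only that irreversible actions never execute without an explicit human yes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    irreversible: bool  # e.g. account closure, bulk deletion, wire transfer

def circuit_breaker(decision: Decision,
                    approve: Callable[[Decision], bool]) -> bool:
    """Gate irreversible actions behind explicit human approval.

    Reversible actions pass straight through; irreversible ones trip
    the breaker and block on the `approve` callback (a review queue,
    an approvals UI, a chat prompt - whatever the organisation uses).
    """
    if not decision.irreversible:
        return True  # low stakes: the AI may act autonomously
    return approve(decision)  # a human holds final authority

# Usage: the (simulated) human must say yes explicitly.
risky = Decision(action="close customer account", irreversible=True)
print(circuit_breaker(risky, approve=lambda d: input(f"Approve '{d.action}'? [y/N] ") == "y"))
```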
Real-world examples
E-commerce: email marketing
Goal: Send personalised emails to millions (optimising for speed/volume and quality).
Risks: Poor copy, spam risk (recoverable setbacks).
→ _Batch processing + spot checking:_ AI generates emails; the marketing team reviews random samples before sending and monitors overall engagement metrics, intervening when unusual patterns appear.
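A rough sketch of what that gate could look like, assuming a hypothetical 2% sample rate, a `review` callback standing in for the human check, and a simple z-score test over historical open rates; any anomaly test over engagement history would serve equally well.

```python
import random
import statistics
from typing import Callable

SAMPLE_RATE = 0.02  # assumption: humans review ~2% of each batch pre-send

def spot_check(batch: list[str], review: Callable[[str], bool]) -> bool:
    """Hold the whole batch unless every randomly sampled email passes human review."""
    sample = random.sample(batch, max(1, int(len(batch) * SAMPLE_RATE)))
    return all(review(email) for email in sample)

def engagement_alert(history: list[float], latest: float, z: float = 3.0) -> bool:
    """Flag for intervention when the latest open rate sits more than
    `z` standard deviations from the historical mean."""
    mu, sigma = statistics.mean(history), statistics.stdev(history)
    return sigma > 0 and abs(latest - mu) > z * sigma

# Usage: block the send, or page a marketer, when either gate trips.
emails = [f"Subject: offer {i}" for i in range(1_000)]
if spot_check(emails, review=lambda e: "offer" in e):
    print("samples passed - sending batch")
if engagement_alert(history=[0.21, 0.19, 0.22, 0.20], latest=0.05):
    print("engagement anomaly - pausing campaign for review")
```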
Recruitment: applicant screening
Goal: Support the recruitment/HR team by processing hundreds of applications efficiently (volume, quality, compliance).
Risks: Systemic bias, missing great candidates or hiring poor fits (high-impact failures).
→ _Monitored automation + regular expert review + approval workflows:_ AI screens applications for requirements and fit indicators, flagging top candidates and clear rejections. Recruiters review candidates and can override any AI decision, while hiring managers get the AI analysis alongside resumes for final interviews.
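As an illustration of that triage-plus-override structure, the sketch below assumes a 0-1 `fit_score` from the screening model and hypothetical 0.85/0.30 thresholds. The structural point is that ambiguous cases always route to a human, and a recruiter's override always beats the model's proposal.

```python
from enum import Enum

class Route(Enum):
    ADVANCE = "flagged as top candidate"
    REJECT = "clear rejection, queued for sign-off"
    REVIEW = "ambiguous - route to human screening"

# Assumptions: a 0-1 fit_score from the screening model plus a hard
# requirements check; the 0.85 / 0.30 thresholds are illustrative only.
def triage(fit_score: float, meets_requirements: bool) -> Route:
    """The AI proposes a route; it never finalises a hire or rejection."""
    if not meets_requirements:
        return Route.REJECT
    if fit_score >= 0.85:
        return Route.ADVANCE
    if fit_score <= 0.30:
        return Route.REJECT
    return Route.REVIEW  # the grey zone always goes to a recruiter

def final_decision(ai_route: Route, recruiter_override: Route | None = None) -> Route:
    # Approval workflow: a human override always beats the AI's proposal.
    return recruiter_override if recruiter_override is not None else ai_route

# Usage: a recruiter overrides the model's clear-rejection call.
proposal = triage(fit_score=0.28, meets_requirements=True)  # Route.REJECT
print(final_decision(proposal, recruiter_override=Route.REVIEW).value)
```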