Thinking about the need for Human In The Loop (HITL) and the role of AI In The Loop (AIITL).
AI is changing the way we do business.
There is no shortage of media coverage, commercial activity and consulting effort to back up this claim, nor any shortage of investment by mega-corps and startups alike.
There are concerns, and they are appropriate, with emphasis on AI governance, safety and compliance. This is especially true for this year's big thing, AI Agents ('Agentic AI'), where actions and processes can be initiated and completed by an AI without a human triggering those activities. As a result, there is (often glib) global-level discussion of the need for Humans in the Loop (HITL), the argument being that AI can make mistakes and we humans need to oversee what it does.
What is rarely asked is where in the loop humans should be, and with what remit.
Where in the loop should humans be?
Of course, we already have humans in the loop (HITL) monitoring our human-driven processes, providing oversight, supervision, sign-off, enhancement analysis, failure-mode analysis, critical thinking, and so on.
The better organisations (the mature ones) consider where to insert humans rather carefully. We try to avoid micromanagement and to not burden the process with too much overhead or inefficiency (or at least to keep that overhead proportionate to the risks and consequences of an error in that process). Often we go further and ask 'Who watches the Watchmen?', that is, we add oversight of the first level of oversight to ensure that oversight is effective.

This is typically reflected in the management structure and modelled using RACI (Responsible, Accountable, Consulted, Informed), combined with Human Resources processes such as job descriptions, objectives and targets, performance metrics, regular reviews, retraining and interventions for poor performance. These approaches are very well established for people, because all humans are (very) fallible. Much like the AIs, then…
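To make the RACI idea concrete, here is a minimal sketch of how a RACI assignment might look once an AI agent takes over the 'Responsible' role for a task. All task and role names here are illustrative assumptions, not from the post; the point is simply that accountability remains with a named human even when the work is done by an AI.

```python
# Hypothetical RACI matrix: an AI agent does the work, a human is
# accountable, another AI sits in the oversight loop (AIITL).
RACI = {
    "draft_invoice": {
        "Responsible": "ai_agent",         # the AI performs the task
        "Accountable": "finance_manager",  # a human signs off
        "Consulted":   ["ai_reviewer"],    # AI in the oversight loop
        "Informed":    ["audit_team"],
    },
}

def who_is_accountable(task: str) -> str:
    """Every task must have exactly one (human) accountable party."""
    return RACI[task]["Accountable"]

print(who_is_accountable("draft_invoice"))  # finance_manager
```

The design choice worth noting is that 'Accountable' is a single value while 'Consulted' and 'Informed' are lists, mirroring the RACI convention that accountability is never shared.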
For AI agents, it seems sensible that a similar approach should apply. After all, AIs are fallible. Perhaps almost exactly the same approach should be used: AI agents performing the tasks until now assigned to people should be monitored in the same way people are.
Furthermore, given this apparent symmetry, another consideration emerges. People are fallible too, so perhaps we should have AIs in their oversight loop: AI in the Loop (AIITL).
We should have AI in the Loop
In the short to medium term there will, I predict, be a move from AI as servant, through AI as assistant, to AI as partner. Perhaps it would be wise to have not only nuanced HITL but also AIITL, at both the process and executive-oversight levels. I've tried to show this in the diagram above.
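One way this dual oversight could work in practice is sketched below, under stated assumptions: an AI reviewer (AIITL) checks every agent action, and a human (HITL) is pulled into the loop only when the reviewer's confidence falls below a threshold. The functions, actions and confidence scores are all hypothetical stand-ins, not a definitive implementation.

```python
def ai_reviewer(action: str) -> float:
    # Stand-in for an AI oversight model returning a confidence score;
    # here, refunds are arbitrarily treated as higher risk.
    return 0.40 if "refund" in action else 0.95

def human_review(action: str) -> bool:
    # Stand-in for a human sign-off step.
    return True

def supervise(action: str, threshold: float = 0.8) -> str:
    # AIITL: an AI checks the AI agent's action first.
    confidence = ai_reviewer(action)
    if confidence >= threshold:
        return "approved by AI reviewer"
    # HITL: low-confidence or risky actions escalate to a person.
    return "approved by human" if human_review(action) else "rejected"

print(supervise("send invoice"))  # approved by AI reviewer
print(supervise("issue refund"))  # approved by human
```

The escalation threshold is the dial that sets the balance argued for above: lower it and the process leans on AI oversight; raise it and more decisions route to humans.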

The evolution of AI demands that we re-evaluate our roles within operational frameworks and business processes. As we transition from AI as a mere tool to AI as a collaborator, we will need both Human in the Loop (HITL) and AI in the Loop (AIITL), each providing perspective and quality improvement through complementary capabilities. Striking the right balance between human and AI supervision and intervention will be paramount to harnessing the full potential of AI in our complex, competitive organisations. Done right, we can mitigate risks, enhance performance, and ensure that the synergy between human and artificial intelligence leads to a future in which people thrive and innovate by working with our AI partners. This increasingly dual approach will both provide safeguards and drive forward a new era of integrated, intelligent operations.
