Human-AI Decision Systems

Human-AI Decision Systems by Alex (Sandy) Pentland, Matthew Daggett, and Michael Hurley.

This paper outlines a proposal for a high-performance human-AI decision system, centering on how trust in such a system, and its performance, could be ensured in a commercial enterprise context. The paper also explores how multi-domain task teams could be created to oversee operations. Along the way the authors outline some of the key challenges, including how such decision systems can be built on top of legacy architecture.

What I like about this paper is its transparency and pragmatism. It kicks things off with an honest admission that there is a “deep mistrust of user-facing automation and automatic AI systems”. Leaving aside the various reasons we might draw up for how we got to this situation, we must address this elephant in the room. Because of our lack of trust in automatic AI systems, many enterprises find themselves suspended in a grand pause, teetering on a precipice that requires conversation and resolutions that honor our humanity. I’ve been thinking of this musically, much like the grand pause a ballet orchestra takes to articulate that one section of the story has ended and another will soon begin, with a clear spirit of unity and intention.

On the road to user trust, it is important to be mindful of where human agency is most needed in human-AI collaborations. The authors draw attention to situations where analysis and interpretation are required. In this grand pause we are meditating on where AI systems will leave space for us to be awesome humans.

[updated 7.27.25]