Human-AI Decision Systems
Human-AI Decision Systems, by Alex (Sandy) Pentland, Matthew Daggett, and Michael Hurley.
This paper outlines a proposal for a high-performance human-AI decision system, centering on how trust in such a system, and its performance, could be ensured in a commercial enterprise context. The paper also explores how multi-domain task teams could be created to oversee operations. Along the way the authors outline some of the key challenges, including how such decision systems can be built on top of legacy architectures.
What I like about this paper is its transparency and pragmatism. It kicks things off with an honest admission that there is a “deep mistrust of user-facing automation and automatic AI systems”. Leaving aside all of the various reasons we might draw up for how we got to this situation, we begin this paper with an honest admission that there is an elephant in the room we must address: trust in the very technologies we’ve spent decades creating. Because of this lack of trust, many enterprises find themselves suspended in a grand pause, teetering on a precipice, awaiting a fuller conversation that is now happening in various futurist think tanks, boardrooms, living rooms, courtrooms, and classrooms around the world. I’ve been thinking of this musicologically as a grand pause, much like the one a symphony orchestra would take as a clear articulation that one section of music has ended and another has begun, with a clear spirit of unity and intention.
On the path to outlining a proposal for a human-AI decision framework, the authors highlight two critical issues in the current state of affairs. The first is the problem that emerges when AI systems are initially trained on out-of-domain data yet are never retrained once they are deployed in their given domain. This sounds a bit like launching a hamburger joint in a new city based on national trends for commercial retail in general, and then never soliciting any customer feedback once the restaurant opens. That wouldn’t make sense for burgers, and it doesn’t make sense for AI systems.
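To make that concrete, here is a minimal sketch of the kind of post-deployment feedback check this critique implies; the function names, the drift measure, and the threshold are all my own hypothetical choices, not anything from the authors.

```python
# Hypothetical illustration (not from the paper): a deployed model keeps
# comparing the inputs it actually sees against the data it was trained on,
# and flags when in-domain retraining is overdue.
import statistics


def drift_score(training_values, live_values):
    """Crude drift measure: distance between live and training means,
    expressed in units of the training standard deviation."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values) or 1.0  # guard against zero spread
    return abs(statistics.mean(live_values) - mu) / sigma


def needs_retraining(training_values, live_values, threshold=2.0):
    """True when deployed inputs have drifted far enough from the training
    distribution that the model should be retrained on in-domain data."""
    return drift_score(training_values, live_values) > threshold


# A model trained on "national trends" sees very different numbers once it is
# actually deployed in its own city.
training_feature = [10.0, 11.5, 9.8, 10.7, 11.1, 10.2]
live_feature = [18.3, 19.1, 17.6, 18.8, 19.4, 18.0]

if needs_retraining(training_feature, live_feature):
    print("Deployed inputs have drifted; retrain on in-domain data.")
```

A real system would use something more principled than a simple mean shift (population-stability metrics, labeled user feedback, and so on), but even a crude check like this is better than a model that is never looked at again after deployment.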
The second concerns where human agency is needed in human-AI collaborations, something it is important to be mindful of on the road to users trusting the AI they are collaborating with. The authors point to analysis and interpretation as prime examples of tasks where machines alone would not be able to outperform what is possible in well-tuned human-AI decision systems. Trust in such systems is established when users’ expectations of where the AI should be leveraged align well with the capabilities the AI has been designed to possess. This makes sense, and it echoes what we already experience when leveraging software in the enterprise: use the right tool at the right time again and again, and you start to build trust that your system is functioning well. Same goes here.