AI Governance: A Research Agenda

AI Governance: A Research Agenda (2018) by Allan Dafoe. Centre for the Governance of AI, Future of Humanity Institute, University of Oxford.

Eye on the prize, y’all.

The agenda is filled with measured calls to ideation across a host of AI ethical concerns, yet the sentiment that rings out as most primary is the call to vision, stated thus: “AI ideal governance aspires to envision, blueprint, and advance ideal institutional solutions for humanity’s AI governance challenges. What are the common values and principles around which different groups can coordinate? What do various stakeholders (publics, cultural groups, AI researchers, elites, governments, corporations) want from AI, in the near-term and long-term?”

These are thought-provoking and necessary questions. The kind that can be asked at dinner tables, in classrooms, on long bus rides, and on first dates as an ice breaker. As long as we believe AI governance to be someone else’s problem, it will continue to confound and scare our communities. Suiting up with some powerful questions and the courage to engage our people (at home and at work) in rigorous dialogue can lay the foundation for us to dream, with gusto, the rich dreams of what AI could do for society.

Intro

The author has done an exceptional job of distilling this sprawling, nuanced topic into a series of thought-provoking questions and term clarifications that form the pillars of the document, so I think it best to share as many of those as I can here:

  • “AI governance studies how humanity can best navigate the transition to advanced AI systems, focusing on the political, economic, military, governance and ethical dimensions. AI governance focuses on the institutions and contexts in which AI is built. Specifically, AI governance seeks to maximize the odds that people building and using advanced AI have the goals, incentives, worldview, time, training, resources, support, and organizational home necessary to do so for the benefit of humanity.”
  • AI safety focuses on the technical questions of how AI is built.
  • “What do we need to know and do in order to maximize the chances of the world safely navigating this transition?”
  • “What advice can we give to AI labs, governments, NGOs, and publics, now and at key moments in the future?”
  • “What international arrangements will we need – what visions, plans, technologies, protocols, organizations – to avoid firms and countries dangerously racing for short-sighted advantage?”
  • “What will we need to know and arrange in order to elicit and integrate people’s values, to deliberate with wisdom, and to reassure groups so that they do not act out of fear?”
  • “Could trends in AI facilitate new forms of international cooperation, such as by enabling strategic advisors, mediators, or surveillance architectures, or by massively increasing the gains from cooperation and costs of non-cooperation?”
  • “If general AI comes to be seen as a critical military (or economic) asset, under what circumstances is the state likely to control, close, or securitize AI R&D?”

AI Safety

  • “Human overseers may not have the capacity to recognize problems due to the system’s complexity.”
  • “What is the safety production function, which maps the impact of various inputs on safety? Plausible inputs are compute, money, talent, evaluation time, constraints on the actuators, speed, generality, or capability of the deployed system, and norms and institutions conducive to risk reporting.” A toy sketch of such a function follows this list.
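
The “safety production function” framing invites a concrete, if toy, illustration. Here is a minimal Python sketch assuming a Cobb-Douglas form (my assumption, not Dafoe’s), with `compute`, `talent`, and `eval_time` standing in for a few of the inputs the agenda lists:

```python
# Toy "safety production function" -- an illustration, not anything from the paper.
# Assumes a Cobb-Douglas form: safety rises with each input, with diminishing
# returns controlled by the (made-up) exponents alpha, beta, and gamma.

def safety_production(compute: float, talent: float, eval_time: float,
                      alpha: float = 0.2, beta: float = 0.5,
                      gamma: float = 0.3) -> float:
    """Map hypothetical inputs to a scalar 'safety' level."""
    return (compute ** alpha) * (talent ** beta) * (eval_time ** gamma)

# Under these exponents, doubling talent buys more safety than doubling compute:
print(safety_production(compute=2.0, talent=1.0, eval_time=1.0))  # ~1.15
print(safety_production(compute=1.0, talent=2.0, eval_time=1.0))  # ~1.41
```

The exponents here are invented; the point is only that once you posit such a function, you can ask which marginal input buys the most safety, which is exactly the kind of question the agenda is raising.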

AI Ideal Governance

  • “AI ideal governance focuses on cooperative possibilities: if we could sufficiently cooperate, what might we cooperate to build?”
  • “AI ideal governance aspires to envision, blueprint, and advance ideal institutional solutions for humanity’s AI governance challenges. What are the common values and principles around which different groups can coordinate?”
  • “What do various stakeholders (publics, cultural groups, AI researchers, elites, governments, corporations) want from AI, in the near-term and long-term?”