Citizen-led AI Governance

We propose establishing Citizens’ Assemblies as foundational elements of the emerging global governance landscape for AI. We will design for their most empowered and impactful use through a process with key stakeholders and experts across government, AI, digital civil society, and deliberative democracy.

We’re organising a formative coalition of international colleagues: practitioners of deliberative democracy, experts in AI and governance, civil society organisations, media, organisers, and campaigners.

Please reach out if you’d like to join the conversation and contribute to this effort.

The global challenges around governing AI reveal our existing institutions as unequal to the problems of our day. It is imperative that these new technologies are shaped and deployed in alignment with human values and aspirations, determined not just by a narrow elite or the small pool of people creating the technology, but in an inclusive, democratic, and deliberative way.

Our core belief is that a wide diversity of everyday people, selected by democratic lottery to be representative of all walks of life and deliberating in conditions that enable them to grapple with complexity and find supermajority common ground, can and should shape the ongoing development and deployment of AI technologies and their impact on society.

Citizens’ Assemblies are a proven method for expressing an informed, coherent will of “we the people” on complex issues, transcending the zero-sum adversarial dynamics of typical politics. As of November 2021, the OECD had counted almost 600 citizens’ assemblies for public decision-making around the world, addressing issues from drug policy reform to biodiversity loss, urban planning, climate change, infrastructure investment, abortion, and more. This initiative will design and implement their best-fit role in the global governance landscape emerging to address AI and its societal impacts.


Any new power must be met by the societal capacity to manage it responsibly. ‘General Purpose AIs’ (GPAIs) are advancing at exponential rates, raising political and social questions about what kind of society we want to live in, and who gets a say in shaping these futures. The development of these GPAIs brings enormous potential for harm as well as societal benefit, with systemic impacts on many aspects of our everyday lives (from education to work, the economy, media and the information ecosystem, policing and justice, healthcare, and more), as well as impacts on the international security environment in an increasingly multipolar context.

While there is increasing consensus that AI is having and will continue to have substantial societal impact, there is little consensus about what exactly those harms and benefits might be, who will be affected by them, and which matter most (ethics, value alignment, jobs, existential threats, disinformation, etc.).

How should and could AI be regulated, and how should its societal implications be navigated? Many questions remain open: upon which values and principles regulation should be based; who should decide those criteria, and how; by what type of entity; at what scale; and for what reasons. Various propositions have been offered: rely on existing regulators, create new national ones, empower existing national bodies, learn from previous regulatory systems, or establish new international agencies.

We believe that experts and politicians alone cannot be responsible for answering these inherently political and societal dilemmas. The emphasis needs to be on both the representativeness and the deliberative quality of these engagements, ensuring that the process gives people agency and dignity, leverages collective intelligence, and can yield common ground around a clear political mandate.
