From Risk to Responsibility: Governing AI in the Public Interest
Artificial Intelligence (AI) is increasingly shaping decisions that impact real lives, from determining who qualifies for social services to how communities are prioritised in public health and how migrants are processed at borders. Yet public institutions, built to uphold equity and democratic accountability, are struggling to keep pace with the speed and complexity of AI systems.
/From Lag to Leadership
AI is no longer operating in the background. It is reshaping the systems people rely on every day, in education, healthcare, employment, and social welfare. In these domains, what matters is not just efficiency but fairness: how decisions are made, whether people can understand them, and who is protected when things go wrong.
The European Union’s Artificial Intelligence Act (EU AI Act) has taken a first step in recognising these stakes. Its classification of “high-risk” systems, such as those used in job recruitment, school admissions, or migration risk profiling, marks a shift from treating AI as neutral infrastructure to recognising it as technology that demands governance. But while regulation is advancing, ethical capacity within public institutions still lags.
/The Limits of Compliance
The Global Conference on AI, Security, and Ethics 2025 in Geneva underscored a crucial truth: Compliance is necessary but not sufficient. A system can follow rules and still be opaque, exclusionary, or misaligned with public values.
This gap becomes especially evident when AI systems operate with partial autonomy, an arrangement often described as agentic workflows. Without clear channels for accountability or redress, such systems risk undermining public trust, particularly in contexts such as welfare eligibility, border control, or algorithmic policing.
Ethical governance in these settings means designing systems that are transparent and fair, and that give people a genuine opportunity to challenge decisions and correct errors when things go wrong.
/Lessons from Public Systems
So, where do we look for guidance? AI may be new, but many of the questions it raises aren’t. Public institutions have long grappled with fairness, accountability, and harm, particularly in complex systems such as healthcare, education, and migration. These sectors offer decades of lessons about what it means to serve people responsibly and equitably.
In public health, decision-making is not based solely on data; it involves informed consent, risk reduction, and maintaining public trust. In education, fairness is shaped not only by how students perform but also by how they are treated. In migration governance, the risks of profiling and exclusion are well-documented and have prompted sustained advocacy and oversight from civil society.
When care, transparency, and participation are built in from the outset, systems function more effectively. When trust breaks down, harm becomes more likely.
Taken together, these sectors offer vital insights for AI policy:
Fair outcomes depend on fair processes.
Trust grows through transparency, not automation.
Systems are more likely to succeed when those most affected help shape them.
/The Role of Civil Society
One of the clearest lessons from both policy and grassroots work is that civil society cannot be treated as an afterthought. Public interest groups, educators, and community leaders are often the first to spot when a system causes harm. They bring lived expertise that technical teams and regulators often overlook.
They also understand how automated decisions affect people’s lives and what’s needed when those systems make mistakes, including the right to appeal, be heard, and seek fair outcomes.
During the Geneva conference, H.E. Omran Sharaf, Assistant Foreign Minister for Advanced Science and Technology of the UAE, emphasized that the increasing availability of advanced technologies necessitates greater international coordination and more responsible knowledge sharing across borders and sectors.
/From Reactive Policy to Responsive Design
This is a moment to step back, not just to regulate faster but to regulate better. Rather than retrofitting AI into existing policy frameworks, we need space for reflection, inclusive design, and long-term thinking. In practice, that means:
Identifying who is affected and involving them in the conversation
Shifting our focus from “what’s effective” to “what’s right”
Prioritising transparency, accountability, and public interest from the outset
AI is already influencing how public institutions make decisions, from eligibility for support to access to rights and services. These systems shape how fairly people are treated and what recourse they have when something goes wrong.
This is still a moment of possibility. However, shaping AI in the public interest will require more than new tools or regulations; it will demand people willing to ask harder questions, listen more closely, and lead with care.