The Annual AI Governance Report 2025: Steering the Future of AI
Last year, UN Member States adopted the Pact for the Future and the Global Digital Compact,
complementing guidance offered by the World Summit on the Information Society, currently
undergoing its 20-year review. These frameworks are our compass for a more equitable, rights-
based AI future.
But a compass can’t move a ship — it can only point it in the right direction. To steer AI progress
towards shared benefits, we need governance mechanisms that are practical, inclusive, and
rooted in real-world implementation. Those governance mechanisms form our captain’s wheel.
Here let me thank the captains of today’s AI Governance Dialogue: our distinguished Co-Chairs,
His Excellency Engineer Majed Al Mesmar, Director-General of the Telecommunications and
Digital Government Regulatory Authority of the United Arab Emirates, and Madame Anne
Bouverot, France’s Special Envoy for Artificial Intelligence.
As we continue today’s discussions, I invite you to keep three key elements in mind that I believe
can propel AI governance for good forward.
First: inclusion. Too many countries — more than 100 — still have no meaningful voice in global
AI governance discussions. While it is encouraging to see more of these discussions taking
place, from Bletchley Park to Seoul to Paris, and more recently, Kigali, the global reach of the
United Nations can help make AI governance as inclusive as it can possibly be. We are proud to
welcome participants from 170 countries to this year’s Summit. Their perspectives are essential
in designing governance mechanisms that truly reflect global realities: not just high-resource
contexts, but also communities navigating limited infrastructure, low trust, and high stakes. Many
governments also lack the resources to engage in, let alone shape, their own AI futures. That
must change… which brings me to the second element: capacity.
Capacity is linked to connectivity and to infrastructure for artificial intelligence, including
access to compute and data centres. But capacity is also about people
and their ability to make informed decisions. That’s why we need to equip policymakers and
public administrators — especially in developing countries — with the skills to assess, procure,
and deploy AI systems. And it’s why ITU and our partners launched the AI Skills Coalition, and
why we’re working to expand South–South knowledge exchange and regional training hubs.
The third and final element that can steer the AI revolution in the right direction is standards:
because principles and declarations alone are not enough. We need technical standards that
translate high-level commitments into operational safeguards. That’s why earlier this week,
we held consultations at the Open Dialogue on AI Testing, and a workshop on Trustworthy
AI Testing and Validation. These gatherings revealed an urgent need for multistakeholder
collaboration in two key areas of action: promoting knowledge exchange on AI standards, and
bridging capacity gaps in methodologies for testing AI systems and models.
ITU is ready to continue convening these consultations beyond the AI for Good Summit. Because
we cannot leave AI governance to chance. We cannot outsource trust. And we cannot expect
countries to implement safeguards they had no role in designing, and that do not fit their local
context.
Bringing these three elements together – inclusion, capacity and standards – is what coordinated
steering looks like. We saw this in action at today's roundtable luncheon, where participants
highlighted the importance of identifying sources of untapped innovation (like FinTech or open-
source communities in developing countries) to broaden inclusion, and using policy tools to