The Annual AI Governance Report 2025: Steering the Future of AI (Page 28)
now has 47 adhering governments (38 OECD members and nine partner economies). 92 Branded the 'first intergovernmental AI standard', it sets out five values-based principles — human-centred values, transparency, robustness, accountability, and inclusive growth — and five policy pillars that provide officials with practical tools such as R&D investment and regulatory sandboxes. To help states translate these aims into practice, the OECD.AI Policy Observatory promotes an 'hourglass' governance model that links high-level principles to organisational processes and system-level controls, emphasising stakeholder engagement and continual monitoring. 93
Stakeholder Engagement and Inclusive Governance: Ethical-AI initiatives now embed structured multi-stakeholder consultations: the EU’s draft Code of Practice for general-purpose AI, for example, drew almost 430 submissions from industry, academia and civil-society groups in late 2024, shaping both the text and its monitoring plan. 94 The Global Partnership on AI (GPAI) pairs government representatives with experts from science, business and NGOs to co-chair its working groups. 95 Under India’s 2024 chairmanship, GPAI’s New Delhi Declaration called for “pursuing a diverse membership, with a particular focus on low and middle-income countries to ensure a broad range of expertise, national and regional views”, 96 and a side-meeting at the Global INDIAai Summit highlighted mechanisms to “overcome the global AI divide,” a message applauded by Global-South delegates. 97 New memberships — such as Morocco’s decision to join after the 2024 Belgrade ministerial — illustrate that these outreach efforts are beginning to translate into broader geographic representation. 98
4.4 Safety Standards and Red-Teaming
Building a Shared Scientific and Policy Understanding: The 2025 International AI Safety Report, authored by experts from 33 countries and major intergovernmental organisations, synthesises the latest evidence on AI risks and mitigation strategies. It serves as a scientific foundation for standards development and informed policymaking, with input from experts across both the Global North and Global South. 99
Establishing AI Safety Testing Protocols: AI safety testing protocols are methods and processes used to verify that AI systems operate as intended, without causing harm or unintended consequences. The global community has prioritised the development of robust safety standards of this kind, particularly for general-purpose and high-risk applications. The EU AI Act, for instance, mandates strict risk-assessment and risk-mitigation systems and robustness requirements for high-risk AI systems, including detailed documentation, traceability, and human oversight. 100
92 OECD.AI. (2019). AI Principles Overview.
93 Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022, June 1). Putting AI Ethics into Practice: The
Hourglass Model of Organizational AI Governance. arXiv.org.
94 European Commission. (2024, September 24). Industry, academia and civil society contribute to the work
on Code of practice for general-purpose artificial intelligence. Shaping Europe’s Digital Future.
95 Global IndiaAI Summit. Global Partnership on Artificial Intelligence.
96 GPAI. (2024). GPAI New Delhi Declaration.
97 GPAI. (2024). Two days’ Global INDIAai Summit 2024 concludes.
98 The North Africa Post. (2025). Morocco to join Global Partnership on AI.
99 UK Department for Science, Innovation and Technology. (2025, February 18). International AI Safety Report
2025.
100 European Commission. (2024). Regulation (EU) 2024/1689.