


to “trailing-edge” work—high-level management and process standards for well-established problems—rather than trying to codify every frontier risk.84 One practical way to address this “standardisation gap” would be to adopt ITU’s pre-standardisation focus group model,85 which has been used in areas such as health and disaster management in collaboration with other UN agencies. This could involve establishing a joint ITU–ISO/IEC group to develop a reference architecture, risk taxonomy, and sandbox test framework. Meanwhile, IEEE could develop additional ethics-by-design guidance.

Standards for AI Agents: Agentic AI will require new standards that go beyond today's model-centric norms. No formal standard yet specifies agent-to-agent communication protocols, secure tool APIs, or guardrails for autonomous, self-modifying behaviour. Early prototypes, such as the Google-initiated open Agent2Agent (A2A) protocol, which Microsoft has also adopted,86 and the community-led Model Context Protocol (MCP),87 demonstrate how multi-agent workflows and context exchange could function. However, they remain ad hoc specifications outside the remit of any formal standards development organisation. Safety researchers warn that existing test methods do not capture the emergent risks of agents. The newly published Multi-Agent Emergent Behavior (MAEBE) framework88 is one of the first attempts to measure collective behaviours in the absence of harmonised evaluation methods.
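
To illustrate the kind of interaction these draft specifications cover, the sketch below shows a hypothetical MCP-style tool invocation framed as a JSON-RPC 2.0 request in Python. The tool name and arguments are invented for illustration only; the published Model Context Protocol specification remains the authoritative reference for message formats and transports.

import json

# Illustrative only: an MCP-style "tools/call" request framed as JSON-RPC 2.0.
# The tool name ("search_documents") and its arguments are hypothetical; the
# Model Context Protocol specification defines the real message formats,
# transports, and capability negotiation.
request = {
    "jsonrpc": "2.0",        # MCP messages use JSON-RPC 2.0 framing
    "id": 1,                 # request identifier, echoed in the server's response
    "method": "tools/call",  # ask the connected server to invoke a named tool
    "params": {
        "name": "search_documents",              # hypothetical tool exposed by a server
        "arguments": {"query": "AI standards"},  # hypothetical tool arguments
    },
}

print(json.dumps(request, indent=2))

Even a minimal exchange like this highlights what a formal standard would need to pin down: message framing, authentication of the calling agent, and enforceable limits on what an invoked tool may do.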

                  4.3  Ethical AI Frameworks

UNESCO Recommendation on the Ethics of Artificial Intelligence: On 23 November 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence – the first multilateral agreement on AI ethics. It commits all 194 Member States to the principles of protecting human rights, transparency, fairness, and human oversight.89 To turn these high-level values into practice, UNESCO created a Readiness Assessment Methodology (RAM), and according to the agency’s Social Sciences directorate, it has 'worked with nearly 60 countries — largely in Africa, the Caribbean and Latin America — to run baseline diagnostics and draft action plans'.90 While implementation efforts are still in early stages, and some concepts (such as “fairness” and “accountability”) remain open to context-specific interpretation,91 the Recommendation serves as a valuable ethical framework. As metrics and regulatory pathways continue to evolve, the Recommendation provides important guidance and supports international convergence on AI ethics, especially in contexts where formal regulatory regimes are still emerging.

OECD AI Principles: Adopted by OECD ministers on 22 May 2019 and revised on 3 May 2024 to address the risks posed by generative AI, the OECD Recommendation on Artificial Intelligence





84   Roberts, H., & Ziosi, M. (2025, June 9). Can we standardise the frontier of AI? Oxford Martin AI Governance Initiative.
                  85   International Telecommunication Union. (2025, May 5). AI/ML (Pre-) Standardization - AI for Good. AI For
                     Good.
86   Arenas, Y., & Brekelmans, B. (2025, May 22). Empowering multi-agent apps with the open Agent2Agent (A2A) protocol. The Microsoft Cloud Blog.
87   Model Context Protocol. (2025, March 26). Specification - Model Context Protocol.
                  88   Erisken, S., Gothard, T., Leitgab, M., & Potham, R. (2025, June 3). MAEBE: Multi-Agent Emergent Behavior
                     Framework. arXiv.org.
89   https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.
                  90   UNESCO. (2023). Readiness assessment methodology. A tool of the Recommendation on the Ethics of
                     Artificial Intelligence.
                  91   AllahRakha, N. (2024). UNESCO's AI Ethics Principles: Challenges and Opportunities. International Journal
                     of Law and Policy, 2(9), 24–36.


