                   as positive examples. The consensus was that future governance frameworks must embed
                   participation from the outset, not treat it as an afterthought.




                       Quote:

                       •    "The code of practice ... was led by independent chairs and co-chairs with strong
                            multi-stakeholder support. We have had more than 1000 stakeholders involved in
                            this and ... this multi-stakeholder aspect is very important." (Juha Heikkilä, Adviser
                            for Artificial Intelligence, European Commission)







                       Dive deeper in the Whitepaper “Themes and Trends in AI Governance”:
                       •    3.3 Regional AI partnerships
                       •    Annex – Examples of multilateral initiatives [a list of some 40 initiatives]
                       •    Annex – Examples of national initiatives [a list of some 20 initiatives]




                   2.3  Transparency as a Cornerstone of Trust

                   Transparency is seen by many as paramount to gaining public trust. Yet the reality, panelists noted,
                   is that transparency has regressed even as models grow more powerful. Documentation of
                   training data, model limitations, and evaluation benchmarks has become thinner in recent
                   releases, leaving policymakers, researchers, and the public in the dark.


                   Professor Robert Trager (Co-director, Oxford Martin AI Governance Institute, University of
                   Oxford) led a series of discussions throughout the AI for Good Global Summit on how best to
                   address the challenges of AI verification, i.e., the process by which one party can check or
                   validate the actions or assertions of another. The goals of these discussions were to identify
                   gaps in the current AI testing ecosystem and explore solutions. Topics covered included:

                   •    Capacity building for testing AI systems worldwide.
                   •    Developing best practices and standards.
                   •    Creating institutional frameworks for international collaboration.

                   Concrete proposals included mandatory model cards, registries of AI systems, watermarking
                   of AI-generated content, and disclosure of intended uses and known risks. Several participants
                   stressed that transparency must extend beyond the technical level: governments and companies
                   should be clear about how decisions are made, who is accountable, and how citizens can
                   challenge harmful outcomes.
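
                   To make the idea of a machine-readable model card more concrete, the sketch below shows one
                   possible record format in Python. It is illustrative only: the field names (model_name,
                   training_data_summary, known_risks, and so on) are assumptions chosen for this example and are
                   not drawn from any existing standard or from the summit discussions.

from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """A structured disclosure record for an AI system (illustrative only)."""
    model_name: str
    version: str
    developer: str
    training_data_summary: str          # high-level description, not the raw data
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)
    evaluation_benchmarks: dict[str, float] = field(default_factory=dict)

    def to_json(self) -> str:
        # Serialize to JSON so the card could be filed in a public registry.
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    card = ModelCard(
        model_name="example-llm",
        version="1.0",
        developer="Example Lab",
        training_data_summary="Publicly available web text collected up to 2024.",
        intended_uses=["General-purpose text assistance"],
        known_limitations=["May produce factually incorrect output"],
        known_risks=["Could generate misleading content at scale"],
        evaluation_benchmarks={"example_benchmark_accuracy": 0.82},
    )
    print(card.to_json())

                   A registry of AI systems, as proposed above, could then collect such records in a common
                   format and make them searchable by regulators, researchers, and the public.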

                   The process of moving from research to pre-standardization and eventually to official
                   standardization is necessary, but the rapid evolution of AI makes it challenging, and the
                   hurdles to verifying AI systems for trustworthiness remain significant.










