                  as independent third-party evaluation often faces challenges due to a nascent reporting culture,
                  limited infrastructure, and insufficient legal and technical protections for researchers.

System/Model Card Disclosure and Safety Frameworks: System and model cards — a type of structured documentation that details a model's capabilities and limitations, as well as its training data and safety considerations — have emerged as a best practice for promoting transparency and the responsible deployment of AI. Companies are increasingly publishing such disclosures to inform users, regulators and external researchers about model risks and mitigations. Alongside these disclosures, safety frameworks set out organisational policies for risk assessment, emergency procedures, ongoing monitoring and human oversight.164 Yet, the effectiveness of these disclosures and frameworks hinges on the quality, completeness and accessibility of the information provided, as well as the capacity to translate abstract statements into tangible results.165 To ensure consistency across this emerging practice, calls have been made for standardised, well-defined metrics and unified approaches.166
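
To make the idea of structured disclosure more concrete, the sketch below shows how the core fields of a system/model card might be captured as structured, machine-readable data. It is a minimal, hypothetical illustration: the field names (model_name, known_limitations, safety_evaluations, and so on) and the example values are assumptions chosen for readability, not any company's actual disclosure format or an established standard.

    # Hypothetical sketch of a system/model card as structured data.
    # Field names and values are illustrative assumptions, not a real
    # company's disclosure format or an established standard.
    from dataclasses import dataclass, field, asdict
    import json


    @dataclass
    class ModelCard:
        model_name: str
        version: str
        capabilities: list[str]             # what the model is intended to do
        known_limitations: list[str]        # documented failure modes
        training_data_summary: str          # high-level description, not raw data
        safety_evaluations: dict[str, str]  # evaluation name -> summary of result
        mitigations: list[str]              # deployed safeguards
        intended_use: str
        out_of_scope_use: list[str] = field(default_factory=list)


    card = ModelCard(
        model_name="example-llm",
        version="1.0",
        capabilities=["text summarisation", "question answering"],
        known_limitations=["may produce plausible but incorrect statements"],
        training_data_summary="Publicly available web text and licensed corpora.",
        safety_evaluations={"bias probe": "summary of results published alongside the card"},
        mitigations=["content filtering", "usage policy enforcement"],
        intended_use="Research and internal prototyping.",
        out_of_scope_use=["fully automated high-stakes decision-making"],
    )

    # Serialising the card as JSON illustrates the kind of standardised,
    # machine-readable disclosure that unified reporting approaches envisage.
    print(json.dumps(asdict(card), indent=2))
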

Limits of Self-Governance Approaches: Empirical research on AI governance shows that voluntary, industry-led codes of conduct rarely lead to meaningful accountability. Without external audits or sanctions, companies prioritise speed-to-market over risk mitigation, resulting in a failure to curb bias, disinformation, and other issues.167 In May 2023, OpenAI's chief executive, Sam Altman, told the US Senate that "it is essential to develop regulations that incentivize AI safety," even proposing a federal licensing regime for frontier models.168 However, at a follow-up hearing in May 2025, he warned that requiring government approval before release would be 'disastrous', marking a significant policy U-turn.169 Organisational studies of industry practice describe this shift as indicative of 'minimum viable ethics', whereby corporate AI ethics teams hold limited authority that is circumscribed by product launch schedules and revenue targets.170 This leaves voluntary governance unable to enforce rigorous standards of safety, transparency, and accountability. Meta-analyses of 84 public- and private-sector AI ethics frameworks show that high-level principles, when not backed by audits or legal sanctions, rarely produce durable protections against bias, disinformation, and other externalities, underscoring the structural limits of self-regulation in the AI sector.171


6.4  Open Source and Open Weight AI: Trajectories, Debates, and Global Practices


                  The debate around open source and open weight AI models has become central to current
                  discussions about access, accountability, and innovation in AI development. While “open
                  source” traditionally refers to models whose architecture, training data, and weights are publicly
                  available, “open weight” models typically allow access to pre-trained weights but not necessarily



164   See for instance: Introducing the Frontier Safety Framework. (2024, May 17). Google DeepMind.
165   Mukobi, G. (2024b, August 5). Reasons to doubt the impact of AI risk evaluations. arXiv.org.
166   Pistillo, M. (2025, January 27). Towards frontier safety policies plus. arXiv.org.
167   Maclure, J., & Morin-Martel, A. (2025). AI Ethics' institutional turn. Digital Society, 4(1).
168   U.S. Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, & the Law. (2023). Written testimony of Sam Altman, Chief Executive Officer of OpenAI, before the U.S. Senate Committee on the Judiciary Subcommittee on Privacy, Technology, & the Law.
169   De Vynck, G., & Tiku, N. (2025, May 9). AI execs used to beg for regulation. Not anymore. The Washington Post.
170   Ahlawat, A., Winecoff, A., & Mayer, J. (2024, September 11). Minimum viable ethics: From institutionalizing industry AI governance to product impact. arXiv.org.
171   Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.


