The Annual AI Governance Report 2025: Steering the Future of AI
Chapter 2: Ten Pillars
Figure 15: (from left) Sasha Rubel, Head of Public Policy for Generative AI, Amazon Web Services (AWS); Chris Meserole, Executive Director, Frontier Model Forum; Juha Heikkilä, Adviser for Artificial Intelligence, European Commission; Udbhav Tiwari, VP Strategy and Global Affairs, Signal; Ya-Qin Zhang, Chair Professor, Tsinghua University; Brian Tse, CEO, Concordia AI

Among the key standards that would move AI governance forward, the following were mentioned:

•    A global norm requiring frontier AI firms to publish a risk management framework and provide systematic updates on how they are implementing it.
•    An international effort to define "red lines" for unacceptable AI outcomes, emphasizing that the definition of what is acceptable and safe should not be left to industry alone.
•    Streamlining of AI initiatives to curb the proliferation of monitoring and reporting requirements that burden developers.
•    Registration and identification of AI-generated content, models and agents.
•    Internationalization of best practices: there is tremendous potential for collecting and synthesizing best practices from companies and industries into global standards. AI governance and safety should be a "safe zone" for cooperation, transcending geopolitical differences, as it is a matter for humanity as a whole.

AI agents will operate across borders, posing a significant global governance challenge. Robert Trager (University of Oxford) highlighted two technical directions to address this:

1.   Verification: interrogating AI systems to understand their properties and recent actions. A key challenge is performing this verification at the compute provider level, as these providers may be globally distributed.
2.   Benchmarking: "The secret of AI governance is benchmarking, benchmarking, and benchmarking," said Professor Trager, stressing the need for standardized metrics to evaluate AI systems, analogous to the "0 to 60 miles per hour" metric for cars. He also pointed out a dual challenge: the need for both global benchmarks (for universal standards) and local benchmarks (to ensure AI conforms to local laws and norms).
