The Annual AI Governance Report 2025: Steering the Future of AI

                  a robust framework should look like. While many voluntary activities are under way, the
                  process for creating formal standards needs to move much faster.

                  Current industry testing, particularly for "frontier safety policies," often involves checking for
                  dangerous capabilities (e.g., assisting with weapon creation or cyberattacks) that developers
                  hope not to see, rather than verifying safety or reliability – a critical distinction for policymakers
                  to grasp. These practices are currently voluntary and lack standardized coordination across
                  the industry.

                  A competitive environment can lead companies to deprioritize safety. Furthermore, different
                  countries take varied approaches to AI safety, making global collaboration difficult. Ya-Qin
                  Zhang (Chair Professor, Tsinghua University) called for continued investment in R&D to define
                  "red lines", and Brian Tse (CEO, Concordia AI) proposed an international effort to define such
                  red lines for unacceptable AI outcomes.

                  Participants stressed the need for proactive mechanisms to detect and mitigate risks before
                  they spiral out of control. Current testing and evaluation regimes are often inadequate, focused
                  on short-term performance rather than long-term systemic risks.

                  Proposals included mandatory pre-deployment testing of high-risk systems, red-teaming
                  exercises to identify vulnerabilities, and the creation of international early warning systems
                  for frontier models. “Safety by design” (Ya-Qin Zhang) was described as essential: building
                  safeguards into AI systems from the outset rather than attempting to patch them after deployment.
                  National AI Safety Institutes, which monitor AI systems and publish their results in order to
                  work constructively with industry, play a positive role.

                  This proactive approach was seen not as a brake on innovation but as a foundation for trust. If
                  societies can be confident that risks are being anticipated and managed, the opportunities of
                  AI can be embraced more fully.

                  While some risks can be managed with existing tools, a class of risks that can emerge quickly
                  and at "extreme scale" – such as biological and advanced cyber threats – requires entirely new
                  risk management instruments that identify issues in advance (Chris Meserole).

                  Boulbaba Ben Amor (Director for AI for Good at Inception, a G42 company) urged policymakers
                  to shift focus from evaluating core AI models to evaluating full AI products and solutions,
                  since these are what end-users and society actually interact with. He emphasized the need for
                  product conformity and adaptable risk assessments, and advocated for including diverse
                  cultures, languages, and fields in policies.

                  H.E. Shan Zhongde, Vice Minister, Ministry of Industry and Information Technology, People's
                  Republic of China, emphasized the importance of an open and inclusive approach to constructing
                  an international standards framework for open source.

















