The Annual AI Governance Report 2025: Steering the Future of AI
This mismatch leaves a governance gap at precisely the moment when AI capabilities are
beginning to rival and, in some contexts, exceed human performance. The pacing problem
is compounded by immense uncertainty about future AI risks and applications, even among
developers themselves.
One participant lamented that despite at least five years of discussions of AI governance, a
consensus on the right governance model remains elusive. Would a single, powerful nation
need to lead the drive for inclusive AI governance, she asked, or would building that consensus
require a collective effort from nations, the private and public sectors, and leading tech
industries, in collaboration with UN leaders?
A second pacing problem is that progress in AI safety is not keeping up with progress in
AI capabilities. As Roman V. Yampolskiy (Professor, Department of Computer Science and
Engineering, University of Louisville) noted, the science of AI alignment and control is “mostly
nonexistent.” The difficulty lies in defining static human values, programming them into evolving,
self-improving AI systems, and testing for unanticipated dangers from systems smarter and
more creative than humans. The current testing paradigm, which relies on anticipating known
risks, breaks down when a system can outperform its designers in complexity and creativity.
A third pacing problem is the time it takes to develop standards. Although ISO/IEC’s AI risk
management standard was lauded, Chris Meserole (Executive Director, Frontier Model Forum)
noted that the four to five years it took to develop was far too long.
With opinions differing even within research and policy communities, the lack of a unified
approach to AI policy and governance is a major obstacle. Some panelists emphasized the
need for a science- and evidence-based approach to provide a common language and
foundation for discussion.
Participants described the pacing problem as more than a timing issue: it is a structural challenge
that undermines the ability of societies to anticipate, adapt, and regulate. The acceleration of AI
introduces the risk that harmful applications emerge before adequate safeguards are in place.
The opportunity here lies in proactive foresight. Several participants advocated for global
horizon-scanning mechanisms, shared early warning systems, and agile governance models
that can adapt quickly as new capabilities emerge.
Quotes:
• “I think that [on] the governance side, we always talk about this pacing problem.
That is, technology moves very fast, and the world of governance is moving much
slower. I think that is probably the greatest challenge in terms of having very
practical and effective governance regimes and policies.” (Lan Xue, Distinguished
Professor and Dean, Schwarzman College, Tsinghua University)
• “...while progress in AI is really exponential or hyper-exponential in terms of
capabilities, progress in safety, progress in our ability to control the systems is
linear at best, if not constant.” (Roman V. Yampolskiy, Department of Computer
Science and Engineering, University of Louisville)