• “One bucket [of problems is] where we know what the problem is (scams,
deep fakes, non-consensual sexual imagery, illegal uses of AI). … [For these]
we need clear rules: What is the line? And who is responsible for holding it?
… The second bucket [of problems] are problems that have uncertainty, like is
AI going to kill us all …. you don't have a clear solution. … There you need to
create the bodies that work with industry over time to share information, reduce
informational asymmetries, [and move] to iterative policy. That doesn't hamper innovation, too,
but also doesn't catch the problem too late.” (Artemis Seaford, Head of AI Safety,
ElevenLabs)
Dive deeper in the Whitepaper “Themes and Trends in AI Governance”:
• 4.1 Landscape of AI Standard Setting Initiatives
• 4.2 Technical Standards Development
• 4.3 Ethical AI Frameworks
• 4.4 Safety Standards and Red-Teaming
• 4.5 Certification and Accreditation Programs
• 6.1 Risks of AI and Systems Safety Assessment
• 6.2 Approaches to Mitigating AI Risks
• 6.3 Corporate Risk Mitigation Practices and Their Limitations
• 6.4 Open Source and Open Weight AI: Trajectories, Debates, and Global Practices
• 6.5 AGI, Existential Risks, and Social Resilience
• 6.6 Verification as a Path to Reduce Risks from AI
2.9 Governance of Compute and Models
"Compute" – short for “computing power” – is the essential resource for both training and
operating AI models. It is a key lever for AI governance because it is a measurable and quantifiable
bottleneck that defines who participates in AI innovation. While training advanced AI models
requires a massive amount of compute over months, the majority of compute resources are
actually used for operating and deploying models due to the millions of daily user requests.
Because access to the most advanced chips and clusters is concentrated in a few countries and
companies, governing compute is seen as an effective way to manage risks.
“Compute governance”, i.e., the rules and policies for overseeing access to and use of advanced
computing resources, can take the form of monitoring who has access to the most capable hardware,
setting oversight thresholds for large training runs, or requiring safety measures.
Proposals include licensing regimes for large-scale training runs, registries of high-risk systems,
and shared compute initiatives that democratize access.
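
To make the idea of a compute oversight threshold concrete, the sketch below (not taken from the report) estimates a training run's total compute using the commonly cited approximation of roughly 6 FLOP per model parameter per training token and compares it against an illustrative threshold; the 10^25 FLOP figure from the EU AI Act's rules for general-purpose AI models is used purely as a reference point, and the model size, token count, and function names are hypothetical.

```python
# Illustrative sketch only: how a compute-based oversight threshold could be checked.
# Assumptions: the widely cited "~6 FLOP per parameter per training token" heuristic
# for total training compute, and a 1e25 FLOP threshold (the figure used in the
# EU AI Act for presuming systemic risk in general-purpose AI models).

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute using the 6 * N * D rule of thumb."""
    return 6.0 * parameters * training_tokens

def crosses_oversight_threshold(flop: float, threshold_flop: float = 1e25) -> bool:
    """Return True if the estimated training compute meets or exceeds the threshold."""
    return flop >= threshold_flop

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flop = estimated_training_flop(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")                     # ~6.30e+24 FLOP
print(f"Above 1e25 FLOP threshold: {crosses_oversight_threshold(flop)}")  # False
```

Any real oversight regime would rest on reported or audited figures rather than self-computed estimates, but the arithmetic illustrates why compute is comparatively easy to measure and threshold relative to other levers of AI governance.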