The Annual AI Governance Report 2025: Steering the Future of AI
Chapter 2: Ten Pillars for AI Governance
2.1 From Principles to Practice
Almost all countries and organizations now endorse some form of AI principles — fairness,
transparency, accountability, human-centricity. Yet participants noted that these remain
aspirational unless translated into practical tools and mechanisms. Governance must move
“from paper to practice,” ensuring that principles are operationalized through benchmarks,
testing frameworks, and safeguards.
There was a strong emphasis on the need for registries of AI models, independent verification
pipelines, and stress-testing procedures, particularly for frontier systems. Several voices argued
for clearer definitions of “red lines,” or categories of unacceptable risk, such as autonomous
weaponization or large-scale disinformation. Moving from principles to practice also means
investing in institutions that can monitor compliance and enforce standards, not just publishing
values statements.
Brian Tse (CEO, Concordia AI) highlighted the need for more binding regulation of powerful
AI systems, observing that there are currently more rules governing the food safety of
dumplings than the safety of AI. He pointed to two key measures already being implemented in China:
• Pre-deployment registration and licensing: All generative AI models must be registered
with the government and undergo safety assessments before public release.
• Post-deployment transparency: AI-generated content should carry clear watermarks and
metadata to help users distinguish it from human-created content.
Udbhav Tiwari (VP Strategy and Global Affairs, Signal) emphasized the need for "developer
agency," where application providers can make decisions on behalf of their users, such as
protecting privacy, without requiring users to navigate complex settings. This is crucial for
protecting the vast majority of people who are not AI experts.
Quotes:
• “… light touch [regulation] actually requires extremely heavy lifting.” (Chue Hong
Lew, Chief Executive Officer, Infocomm Media Development Authority (IMDA))
• “… we are committed to making sure that our AI is the gold standard and that we
are the partners of choice.” (Jennifer Bachus, then-Acting Head of Bureau, Bureau
of Cyberspace and Digital Policy, USA)
2.2 A Multistakeholder Imperative
AI governance cannot be the domain of states alone. Civil society, academia, industry, and
international organizations all bring expertise and legitimacy. Several participants pointed to
successful multistakeholder models in other domains, such as Internet governance, as
partial templates for AI.
Inclusive governance was described not only as desirable but as necessary: without broad
participation, governance risks being rejected as illegitimate or captured by narrow interests.
Codes of practice developed in Europe and cross-sector collaborations in Asia were cited