AI Governance Day - From Principles to Implementation
Investment in AI science
• There is an urgent need for increased investment in AI science, exceeding the funding
allocated to other scientific fields. The prospect that AI could surpass human intelligence
through Artificial General Intelligence calls for preemptive measures to maintain control
over such powerful systems before they are widely deployed.
Regulatory frameworks and safety measures
• Historical precedents, such as the uncontrolled risks of nuclear power illustrated by the
Chernobyl disaster, underscore the importance of rigorous safety measures and oversight
in AI development.
• AI applications must be restricted to sectors where safety can be unequivocally ensured,
mirroring the stringent safety requirements of the pharmaceutical and nuclear industries.
The principle of proving safety before deployment should be paramount.
• Current international and domestic policy frameworks are inadequate to address the
rapidly evolving AI landscape. There is a need for new institutions and regulatory tools to
manage AI risks effectively.
International cooperation and summits
• Past and upcoming international summits on AI governance and safety in the UK (2023),
Korea (2024), and France (2025) are important for establishing global standards and
interoperability of AI safety measures.
• These summits aim to foster cooperation among heads of state, government leaders, UN
agencies, and other stakeholders to address the AI divide, promote inclusivity, and ensure
that safety measures are integrated into AI development.
Figure 8: H.E. Ms. Rose Pola Pricemou, Minister, Guinea (Ministère des Postes, des
Télécommunications et de l'Economie Numérique) during the Multistakeholder Panel