4.3.2 The debate about national vs international AI governance
As AI regulations start to take shape, questions arise as to which areas of AI governance should
be addressed at the national or regional level, and which at the international level.
Items to be regulated internationally might include:
• AI in warfare
• AI and human rights: with due consideration given to the relevant guidance issued by UNESCO
and the Human Rights Council.
• Data privacy and cross-border data flows
• Interoperability and standards: The International Telecommunication Union (ITU),
International Organization for Standardization (ISO), and International Electrotechnical
Commission (IEC) are collaborating through the World Standards Cooperation (WSC)
framework to develop international AI standards. The ITU has published over 100
AI-related standards, with 120 more in development as of 2024, many of them in
collaboration with other UN agencies. ISO and IEC have formed the joint subcommittee
ISO/IEC JTC 1/SC 42 to advance AI standardization, developing foundational standards,
reference architectures, frameworks, and guidelines for trustworthy AI systems.
• Shared research resources, in particular on AI Safety. Some countries have set up, or are
considering setting up, an AI Safety Institute, for example:
– United Kingdom: The UK has established the AI Safety Institute, described as the "first state-
backed organisation focused on advanced AI safety for the public interest." On 18 May 2024,
the UK government also published an up-to-date, evidence-based International Scientific
Report on the Safety of Advanced AI.
– United States: The US has created an AI Safety Institute within the National Institute
of Standards and Technology (NIST), following the Executive Order on AI.
– Singapore: Singapore has established a "Generative AI Evaluation Sandbox" to bring
together industry, academic, and non-profit actors to evaluate AI capabilities and risks.
– Canada: The Canadian government has included in its 2024 budget funds to create
an AI Safety Institute of Canada to ensure the safe development and deployment of
AI.
– Japan: Japan established an AI Safety Institute in February 2024.
– European Union: The European AI Office, called for in the European AI Act, will cover AI
safety but has a wider scope, also including research, innovation, deployment aspects,
and international engagement.
Items to be regulated nationally might include:
• Sector-specific AI applications: National regulations, reflecting local needs, values, and
legal systems, can address the deployment of AI in sectors such as healthcare, transport,
finance, manufacturing, human resources, critical infrastructure (gas, water, electricity),
law enforcement, administration of justice, education, and employment.
• Consumer protection
There may also be a hybrid approach, where international guidelines provide a broad framework
while allowing for national or regional specificity. Areas that might be addressed in this way include:
• Intellectual Property Rights (IPR) in AI
• Interoperability and standards
• Environmental sustainability