Page 24 - AI for Good - Impact Report



                  Other approaches at the global level mostly focus on GenAI and its risks. Accordingly, the G7
                  countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States),
                  with the EU as an observer, have taken a proactive approach to establishing principles for AI
                  governance through the G7 Code of Conduct (CoC). Moderated by the Japanese G7 presidency,
                  the process that produced the document is termed the “Hiroshima Process.” The G7 CoC is a
                  voluntary framework that sets high expectations for the responsible development and
                  deployment of AI technologies.86 Key
                  components of the G7 CoC include encouraging transparency and accountability throughout
                  the AI lifecycle, from development to deployment. The Code also promotes risk management
                  strategies that anticipate and mitigate potential harms, such as biases in AI decision-making or
                  vulnerabilities that could be exploited in cyberattacks. Furthermore, the Code touches upon
                  themes such as incident reporting, watermarking, data privacy, and intellectual property, as well
                  as joint research and the development and use of international technical standards. While this
                  voluntary approach allows for flexibility and innovation, it also relies heavily on the willingness
                  of companies and governments to adopt and enforce these principles.
                  The United Kingdom government kicked off a series of “AI Safety Summits” in November 2023
                  by hosting the Bletchley Park Summit. For the first time, governments and companies came
                  together on a global scale to discuss mitigating the risks of AI technologies, specifically so-
                  called frontier models, the most advanced and sophisticated AI models. Governments and
                  AI companies recognized their shared responsibility in ensuring the safety of these models,
                  particularly in areas critical to national security, societal well-being, and public safety. At the end
                  of the Summit, 29 governments and international institutions signed the Bletchley Declaration.
                  The declaration reinforces the global regulatory framework by addressing the safe use of
                  AI. Moreover, it sets out a set of concrete measures aimed at enhancing the safety and
                  responsible development of frontier AI technologies, emphasizing the shared responsibility of
                  governments and AI model developers to ensure that these technologies are rigorously
                  assessed for potential risks. In addition to safety testing, the declaration also calls for the
                  development of shared international standards and best practices for AI governance. A further
                  measure outlined in the declaration is the establishment of a collaborative framework for testing
                  AI models both before and after deployment. The initiatives agreed upon at Bletchley Park,
                  including the establishment of the United Kingdom’s AI Safety Institute,87 lay the groundwork
                  for ongoing international collaboration in AI governance, ensuring that the development of AI
                  technologies can proceed safely and responsibly. A further summit took place in the Republic
                  of Korea in May 2024.88 Since then, other governments, including the United States,89 Canada,90
                  Japan,91 the Republic of Korea,92 and Singapore,93 have announced the establishment of AI
                  Safety Institutes. In February 2025, France will host the next session of the AI Safety Summits –
                  coined the AI Action Summit.94
                  In 2023, the OECD revised its definition of AI systems to reflect the latest advancements in
                  technology, providing a foundational framework that governments can use to legislate and
                  regulate AI. This updated definition not only enables harmonization among national policies
                  but also contributes to the development of cohesive global policy frameworks for AI.95 The EU
                  AI Act – elaborated in the following sections – aligned its definition with the one drafted by the
                  OECD.

                  These three approaches—the UN-led ethical frameworks, the G7’s voluntary code of conduct,
                  and the practical safety measures from the AI Safety Summits, accompanied by the OECD’s
                  definition of AI—demonstrate diverse yet complementary strategies for global AI governance.



