                   It is anticipated that this could lead to another "Brussels effect," similar to the impact seen with
                   the GDPR. Therefore, while the EU AI Act will have a global impact on businesses, it is also likely
                   that some governments will incorporate elements of the EU's law into their legislation. Key
                   aspects of the EU AI Act that may be adopted by other countries or regions include the risk-
                   based approach to AI applications, the protection of citizens, obligations for general-purpose
                   AI, as well as transparency and cybersecurity measures. Ultimately, in the context of the Act’s
                   extended timeline for full applicability, there is potential for regulatory "cherry-picking" by other
                   governments. As the EU demonstrates its capacity to implement successful AI governance, other
                   nations might selectively adopt certain provisions that align with their domestic priorities while
                   bypassing others, a choice likely to be influenced by how effective the EU's implementation
                   proves to be. As a result, while the EU AI Act sets a comprehensive framework, its
                   influence may lead to a varied global regulatory landscape where different governments tailor
                   their AI governance frameworks by integrating specific elements of the EU's model.


                   National level

                   Additionally, many governments have started to develop ethical frameworks to address these
                   matters. These approaches range from regulations to codes of conduct and AI strategies. Efforts
                   are diverse yet share common goals of ensuring ethical use and mitigating risks. Across these
                   diverse approaches, there is a clear consensus on the need for frameworks that ensure AI systems
                   operate transparently, fairly, and responsibly. These regulations and strategies collectively
                   reflect a global commitment to addressing the ethical and societal impacts of AI, aiming to
                   foster technology that aligns with core values of human rights and public trust.

                   However, the implementation of these frameworks is often inconsistent and varies greatly
                   between countries. For instance, while some countries like Singapore¹⁰² and the United
                   Kingdom¹⁰³ have developed comprehensive ethical frameworks for AI, others are still in the
                   early stages of this process. Even when ethical frameworks exist, there can be challenges in
                   enforcing them and ensuring compliance.

                   China's regulatory landscape reflects a focus on comprehensive oversight and national security,
                   particularly in ensuring the safe use of AI. This focus is underscored by China’s recent framework
                   addressing the security governance of AI.¹⁰⁴ The framework is designed to ensure that AI systems
                   are adaptable and flexible, capable of effectively responding to evolving environments while
                   maintaining stringent safety standards. It also advocates for a proactive approach to secure and
                   responsible AI development, prioritizing the identification and management of AI-related risks
                   through robust technical measures. Other key regulations, such as the Personal Information
                   Protection Law (2021)¹⁰⁵ and the Data Security Law (2021),¹⁰⁶ emphasize informed consent, data
                   protection, and algorithmic fairness. Furthermore, recent regulations like the Gen AI Regulation
                   (2023)¹⁰⁷ specifically address the challenges posed by GenAI technologies.
                   In Canada, the emphasis is on transparency and accountability in AI systems. The Directive on
                   Automated Decision-Making (2019)¹⁰⁸ and the draft Artificial Intelligence and Data Act (2022)¹⁰⁹
                   are designed to ensure that AI technologies are developed and deployed responsibly, with
                   a focus on managing risks and maintaining fairness, and they include provisions for transparency,
                   accountability, and the protection of privacy.

                   Around the world, numerous countries are actively developing and implementing AI strategies to
                   address the ethical, societal, and regulatory challenges posed by these technologies. According
                   to the OECD,¹¹⁰ by 2021, 70 states, including many countries of the Global South, had already

