                  published AI strategies or guidelines, reflecting a widespread and growing commitment to
                  shaping the future of AI in ways that align with core values of human rights, transparency, and
                  public trust.

                  The AI for Good Governance Day report also includes a list of major multilateral and national
                  initiatives as of the end of May 2024. 111


                  Common themes of AI governance frameworks

                  A closer examination of existing AI frameworks reveals that, despite differences in their
                  legal form (regulation or codes of conduct) and cultural contexts, they share several
                  overarching themes. These commonalities reflect shared concerns and objectives in the
                  governance of AI.

                  Ethical principles and human rights

                  Central to these frameworks is a commitment to ensuring that AI systems are developed and
                  deployed in ways that uphold fundamental ethical principles and human rights. This commitment
                  is reflected in documents such as UNESCO’s Recommendation on the Ethics of AI and the EU AI
                  Act, both of which emphasize the critical importance of protecting human dignity, privacy, and
                  freedom amid the rapid advancement of AI technologies.

                  Safeguards

                  Safety considerations are paramount, reflecting widespread concern over the potential risks
                  inherent in AI technologies. Ensuring the robust and secure operation of AI systems is a top
                  priority, particularly in high-stakes environments such as health care, transportation, and critical
                  infrastructure. Regulatory frameworks increasingly demand that AI systems be designed with
                  built-in safeguards to prevent misuse, whether intentional or accidental. This includes measures
                  to protect against vulnerabilities that could be exploited by malicious actors, as well as protocols
                  to ensure that AI systems can respond effectively to unforeseen challenges or errors. Additionally,
                  there is a strong focus on establishing rigorous testing and validation processes, both before
                  and after deployment, to verify that AI systems perform reliably and do not pose undue risks
                  to public safety. The overarching goal is to create AI systems that advance innovation while
                  prioritizing the well-being and security of individuals and society.

                  Transparency and accountability

                  Another central theme in AI governance is the emphasis on transparency and accountability
                  within AI systems. Across various regulations, there is a clear requirement that AI processes
                  must be explainable, ensuring that stakeholders understand how decisions are made and
                  that the output is comprehensible. Moreover, those responsible for deploying AI systems are
                  expected to be accountable for their outcomes, reinforcing the importance of ethical AI usage.
                  Data protection and privacy also emerge as critical concerns, highlighting the widespread
                  recognition of the sensitive nature of the data that powers AI systems.

                  At the same time, there is a shared commitment to fostering innovation and economic growth,
                  with many frameworks striving to balance regulatory needs with the imperative to support
                  technological progress. Priorities such as ethical principles, transparency, accountability, data
                  protection, and safety are closely aligned with the UN Sustainable Development Goals (SDGs).




