
AI Standards for Global Impact: From Governance to Action



The paper offers practical guidance and actionable recommendations, including a regulatory options matrix designed to help policymakers and regulators determine what to regulate (scope), how to regulate (voluntary or mandatory mechanisms), and to what extent (level of effort). It also explores a range of supporting tools – such as standards, conformity assessment mechanisms, and enabling technologies – that can contribute to addressing the challenges of misinformation and disinformation related to multimedia content. At the same time, it emphasizes the importance of striking a balance that enables the positive and legitimate use of either fully or partially synthetic multimedia for societal, governmental, and commercial benefit.

One of the major challenges faced by policymakers and regulators is that multimedia authenticity in the case of generative AI is fundamentally a "black box". There is limited transparency about how these models are developed and trained. The question that looms is how to enable effective governance when the underlying operations are largely opaque. The main challenge, ensuring the trustworthiness and interpretability of multimedia content without stifling innovation, intersects with broader concerns. These include how to align with emerging global priorities, such as combating misinformation, and how such efforts can be shaped or influenced by online safety regulations.

According to the paper, multiple stakeholders recognize that regulatory and enforcement bodies cannot build trust in multimedia on their own. All stakeholders need to work together and find new forms of international collaboration and regulation, perhaps even self-regulation. This needs to be coupled with corporate responsibility that fosters trust and promotes human rights, media literacy, and ethics.

To address these challenges, the paper proposes adopting a Prevent-Detect-Respond (PDR) framework to build trust in multimedia authenticity. This three-pronged approach aims to provide a scalable, flexible structure that balances regulatory intent with technical feasibility. The framework mirrors existing approaches to privacy (e.g. GDPR and the California Consumer Privacy Act) and cybersecurity (e.g. the NIST Cybersecurity Framework and the Payment Card Industry Data Security Standard). The strength of PDR lies in its simplicity and versatility: it is widely understood, adaptable across sectors, and conducive to regulatory alignment. In the case of privacy, successful approaches emphasize prevention (privacy-by-design), detection (breach notification and monitoring), and response (enforcement actions and mechanisms for user redress).