


                  •    Support and encourage the development of conformity assessment frameworks
                       specifically targeting multimedia content, incorporating requirements related to AI risks,
                       misinformation, disinformation, and deepfakes.
                  •    Consider a conformity assessment and/or certification scheme for multimedia content
                       authentication based on international standards, including relevant testing.

                  For technology developers and providers, policymakers could request that they consider the
                  following:

                  •    Adopt a PDR framework based on internationally recognized standards to structure
                       responses to content authenticity challenges.
                  •    Align with and monitor international standards and best practices to meet regulatory
                       requirements and future-proof innovation pipelines.
                  •    Assign a standards liaison or champion within your organization to track updates, ensure
                       compliance, and guide the integration of emerging requirements.
                  •    Consider the integration of strong cryptographic protocols, such as public key infrastructure
                       (PKI), to enable secure multimedia authentication and content integrity.
                  •    Leverage secure timestamping, tamper-evident hashes, and digital signatures to verify
                       content authenticity while preserving user privacy (a minimal sketch follows this list).
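
                   As a minimal illustration of how the primitives in the last two bullets fit together, the
                   sketch below hashes a media payload and signs the digest with an Ed25519 key using the
                   open-source Python cryptography library. The function names and workflow are assumptions
                   made for illustration; they are not drawn from this report or from any specific standard
                   such as C2PA.

# Illustrative sketch only: names and workflow are assumptions, not drawn
# from the report or from any specific standard (e.g. C2PA).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a full PKI deployment, the private key stays with the content creator
# and the public key is distributed through a certificate chain.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_content(content: bytes) -> tuple[bytes, bytes]:
    # Hash the media bytes, then sign the digest with the creator's key.
    digest = hashlib.sha256(content).digest()
    return digest, private_key.sign(digest)

def verify_content(content: bytes, digest: bytes, signature: bytes) -> bool:
    # Any edit to the content changes the hash and invalidates the signature.
    if hashlib.sha256(content).digest() != digest:
        return False
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

media = b"original video frame bytes"
digest, signature = sign_content(media)
print(verify_content(media, digest, signature))             # True: authentic
print(verify_content(b"edited bytes", digest, signature))   # False: tampered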


                  10.4  Fighting misinformation through fact-checking and deepfake detection

                   Fact-checkers play an essential role in today’s information environment by helping to limit
                   the spread of misinformation and disinformation. This session explored the role of fact-
                   checkers and the tools they use to verify information, and aimed to provide practical
                   guidelines for financial and social media companies on how to verify visual misinformation
                   and disinformation.

                   The session brought together stakeholders from diverse backgrounds. TikTok, WITNESS,
                   ElevenLabs, Ant Group, and Umanitek shared their views and insights on how they approach
                   fact-checking and misinformation and the tools that they use.

                   WITNESS shared its global experience supporting frontline journalists and fact-checkers in
                   detecting deceptive AI, particularly in election and conflict contexts. Examples were drawn
                   from the Deepfakes Rapid Response Force and from related trainings and training materials.
                   The need to develop standards for multimedia authenticity was underlined, and WITNESS
                   shared insight into its work within C2PA.

                   ElevenLabs shared current trends in deepfakes, the state of cooperation between governments
                   and the technology industry, and what the technology industry has already done and can do
                   in the future. It was highlighted that governments and the technology industry need to
                   collaborate.

                  TikTok highlighted its work to maintain platform integrity. Insights were provided about its
                  global fact-checking programme that includes more than 20 fact-checking partners, covering
                  more than 60 languages across more than 130 markets. Maintaining platform integrity is crucial
                  to providing a safe space for its users to enjoy authentic content, said TikTok. Alongside fact-
                  checking, TikTok uses a combination of advanced moderation technologies and teams of safety
                  experts. AI is being used to strengthen moderation efforts. In 2024, more than 80% of violent





