Page 8 - Detecting deepfakes and generative AI: Report on standards for AI watermarking and multimedia authenticity workshop

v)   For content credentials to work, they will need to be available everywhere – across all
     devices and platforms – and there will need to be broad awareness of their availability
     and value.
vi)  Standards are needed to enable the interoperability of provenance and authenticity
     verification mechanisms, calling for global collaboration on the development of relevant
     standards.

A key outcome of the workshop was the decision to set up a multistakeholder
standards collaboration for AI watermarking, multimedia authenticity, and deepfake
detection convened by ITU under the World Standards Cooperation.

                  The objectives of the standards collaboration are to:
                  a)   Provide a global forum for dialogue on priority topics for discussion across standards
                       bodies in the area of AI and multimedia authenticity.
b)   Map the landscape of technical standards for AI and multimedia authenticity, including
     but not limited to watermarking, provenance, and the detection of deepfakes and
     generative AI content, while facilitating the sharing of knowledge on lessons learned
     by different stakeholders.
                  c)   Identify gaps where new standards are required, given the fast-moving nature of the AI
                       and multimedia authenticity landscape.
d)   Support policy and regulatory requirements and government policy measures with regard
     to AI and multimedia authenticity, facilitating transparency and legal compliance in
     areas including, but not limited to, the protection of user privacy, authorship, and
     the rights of content owners and consumers.

                  The work in the standards collaboration will be structured under three main areas:
i)   Technical Activities – Mapping the standardization landscape for AI watermarking,
     multimedia authenticity, and deepfake detection with a view to identifying gaps where
     standards are needed to support related government actions.
ii)  Communication – Providing a forum for standards bodies to exchange information and
     communicate the outcomes of their work.
                  iii)  Policy – Providing a forum for governments and standards bodies to discuss the alignment
                       of policies with standards developed and lessons learned.

                  Participation in the standards collaboration on AI watermarking, multimedia authenticity,
                  and deepfake detection is open to international, regional and national standards bodies;
                  governments; companies; industry initiatives; and other relevant organizations.
