Page 27 - Detecting deepfakes and generative AI: Report on standards for AI watermarking and multimedia authenticity workshop
The objectives of the standards collaboration are to:
a) Provide a global forum for dialogue across standards bodies on priority topics in the
area of AI and multimedia authenticity.
b) Map the landscape of technical standards for AI and multimedia authenticity, including
but not limited to watermarking, provenance, and detection of deepfakes and generative
AI content, while facilitating the sharing of knowledge on lessons learned by different
stakeholders.
c) Identify gaps where new standards are required, given the fast-moving nature of the AI
and multimedia authenticity landscape.
d) Support policy and regulatory requirements and government policy measures regarding
AI and multimedia authenticity, facilitating transparency and legal compliance in areas
including, but not limited to, the protection of user privacy, authorship, and the rights of
content owners and consumers.
The work in the standards collaboration will be structured under three main areas:
i) Technical Activities – Mapping the standardization landscape for AI watermarking,
multimedia authenticity, and deepfake detection with a view to identifying gaps where
standards are needed to support related government actions.
ii) Communication – Providing a forum for standards bodies to exchange information and
communicate the outcomes of their work.
iii) Policy – Providing a forum for governments and standards bodies to discuss the alignment
of policies with standards developed and lessons learned.
Participation in the standards collaboration on AI watermarking, multimedia authenticity,
and deepfake detection is open to international, regional and national standards bodies;
governments; companies; industry initiatives; and other relevant organizations.