Detecting deepfakes and generative AI: Report on standards for AI
watermarking and multimedia authenticity workshop
2 Workshop on AI watermarking and multimedia authenticity
The main objectives of the workshop were to:
a) Provide an overview of the current risks posed by deepfakes and AI-generated multimedia
and the related challenges faced by policymakers and regulators.
b) Discuss the effectiveness of AI watermarking, multimedia authenticity, and deepfake
detection technologies, their application use cases, governance issues, and gaps that
need to be addressed.
c) Discuss the areas where technical standards are required.
d) Explore opportunities for collaboration on standardization activities on AI watermarking,
multimedia authenticity, and deepfake detection.
e) Discuss prospective policy measures relevant to global AI governance and their relation
to industry-led initiatives such as C2PA and JPEG Trust and the work of international
organizations.
The workshop was structured as follows:
i) Session 1: Setting the scene – The challenges and risks of deepfakes and generative AI
multimedia.
ii) Session 2: Current state of deepfakes and deepfake detection technology.
iii) Session 3: AI watermarking, multimedia authenticity, and provenance.
iv) Session 4: Standards collaboration to overcome current gaps in AI watermarking and
multimedia authenticity.
The workshop considered issues related to generative AI and deepfakes, including:
i) The safety and security risks posed by generative AI and deepfakes.
ii) Deepfake detection technologies and areas where standards are needed.
iii) Multimedia provenance verification, why it is needed, and how it can help address the
challenges posed by deepfakes.
iv) Policy measures to address deepfakes and generative AI.
v) Areas where standards are needed to support government policies relevant to deepfakes
and generative AI.