ii) It is generally agreed that deepfake and AI-generated content should adhere to current
legislative frameworks that protect copyright or ensure transparency.
iii) Technologies to create deepfake and AI-generated content are becoming more
sophisticated and widely available, making it difficult to distinguish between genuine
content and synthetic content.
Policies and technical standards for responsible AI use
i) Some governments have introduced specific legislation requiring deepfake and AI-
generated content to be labelled, with the aim of preventing AI misuse and ensuring
ethical AI development and deployment.
ii) Such legislation should take consumer rights into account, and the burden of identifying
synthetic content should not fall on the consumer.
iii) Technology and media companies could work towards offering tools for labelling
deepfake and AI-generated content on their platforms and towards making it possible to
identify people who post malicious content.
iv) There is a need for standards for multimedia content labelling, authenticity verification,
and the detection of deepfake and AI-generated content.
Improving deepfake detection
i) Deepfake and generative AI techniques are becoming more sophisticated, making it
increasingly difficult to distinguish between content captured by sensors operating in
the real world and content synthesized either completely or partially, often with AI.
ii) Technologies for the detection of deepfake and AI-generated content must be
continuously updated and upgraded to improve accuracy.
iii) The session on detection technologies highlighted techniques that can improve
detection, as well as performance metrics for benchmarking detection technologies.
iv) Detection technologies need to be able to handle various types of data and accurately
identify the subtle traces and signatures left by the specific models used to produce
deepfake and AI-generated content; a minimal illustration of one such frequency-domain
cue, together with a benchmark metric, follows this list.
v) There is a need to promote international cooperation and global dialogue on
technical standards for detection technologies based on respect for cultural diversity,
transparency, safety, and security.
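To make the above concrete, here is a minimal sketch, not a production detector, of one widely studied frequency-domain cue: the upsampling layers in many generative models leave periodic artifacts that show up as excess energy at high spatial frequencies. The file names, labels, and the 0.75 cutoff are hypothetical placeholders, and ROC-AUC stands in for the benchmark metrics mentioned in item iii).

```python
# Illustrative sketch: score images by the share of spectral energy at
# high spatial frequencies, a known artifact of generator upsampling.
import numpy as np
from PIL import Image
from sklearn.metrics import roc_auc_score

def high_freq_energy_ratio(path: str, cutoff: float = 0.75) -> float:
    """Fraction of spectral energy beyond `cutoff` of the Nyquist radius."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised distance of each frequency bin from the spectrum centre.
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return spectrum[r >= cutoff].sum() / spectrum.sum()

# Hypothetical labelled evaluation set: 1 = synthetic, 0 = camera-captured.
paths = ["real_001.png", "fake_001.png"]  # placeholder file names
labels = [0, 1]
scores = [high_freq_energy_ratio(p) for p in paths]
print("ROC-AUC:", roc_auc_score(labels, scores))
```

A real benchmark would use thousands of samples and report several metrics (precision, recall, ROC-AUC) across generator families, since a cue that works for one model may fail for another.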
Verifying multimedia provenance and authenticity – secure metadata, watermarking,
and fingerprinting
i) Tools to establish digital asset provenance and authenticity will be an important part
of solutions to the challenge of deepfake and AI-generated multimedia created with
malicious intent.
ii) Provenance data for a digital asset can be recorded through Content Credentials, which
are based on an open technical standard developed by the Coalition for Content Provenance
and Authenticity (C2PA). Content Credentials are tamper-evident metadata that provide
information about the origin, history, and modification of content, including whether the
content was AI generated (a simplified sketch of such a manifest follows this list).
iii) Authenticity or provenance verification – the process of assessing content's accuracy
and consistency – can help combat misinformation and disinformation and ensure the
credibility of multimedia content.
iv) A combination of secure metadata, watermarks, fingerprinting, and secure tools for
tracking provenance history is required (a minimal fingerprinting sketch also follows this
list). C2PA; the Supply Chain Integrity, Transparency, and Trust (SCITT) Working Group of
the Internet Engineering Task Force (IETF); and the work of the Joint Photographic Experts
Group (JPEG) on JPEG Trust provide mechanisms to implement these features.
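To illustrate item ii), the following simplified sketch mimics the kind of provenance assertions a Content Credential carries and checks for the marker that declares content AI generated. The dictionary layout is illustrative only, not the normative wire format: real Content Credentials are cryptographically signed structures embedded in the asset and should be read and verified with C2PA tooling such as c2patool or the official C2PA SDKs. The claim-generator name is hypothetical; the digitalSourceType URI is the IPTC term used to mark media produced by a trained algorithm.

```python
# Simplified stand-in for a C2PA manifest; field layout is illustrative.
manifest = {
    "claim_generator": "ExampleEditor/1.0",  # hypothetical authoring tool
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type for AI-generated media.
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

def is_flagged_ai_generated(m: dict) -> bool:
    """Check whether any recorded action declares the asset AI generated."""
    for assertion in m.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion["data"].get("actions", []):
            if action.get("digitalSourceType", "").endswith("trainedAlgorithmicMedia"):
                return True
    return False

print(is_flagged_ai_generated(manifest))  # True
```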
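As a small illustration of the fingerprinting component in item iv), the sketch below computes a perceptual difference hash (dHash). Unlike embedded metadata or watermarks, a fingerprint is derived from the pixels themselves, so it survives metadata stripping and can be matched against a registry of known assets. The file names and the 10-bit match threshold are hypothetical placeholders.

```python
# Perceptual difference hash (dHash): a 64-bit content fingerprint.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Fingerprint built from brightness differences of adjacent pixels."""
    img = Image.open(path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS
    )
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distances suggest near-duplicates."""
    return bin(a ^ b).count("1")

# Hypothetical usage: match a suspect copy against a registered original.
original = dhash("registered_asset.png")  # placeholder file names
suspect = dhash("suspect_copy.jpg")
print("likely match" if hamming(original, suspect) <= 10 else "different")
```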