Detecting deepfakes and generative AI: Report on standards for AI watermarking and
multimedia authenticity workshop
ii) Standardization of algorithms, models, and other techniques – for example, the
standardization of datasets, technologies to detect deepfakes and AI-generated content,
and the ability to adapt to new types of deepfakes and detection techniques.
iii) Standardization of hardware – for example, hardware acceleration technologies such
as graphics processing units (GPUs) or dedicated hardware accelerators that support fast,
efficient data processing for real-time detection and for the integration and processing of
multimodal data, as well as performance benchmarking.
Wang Ce, Project Manager at the China Mobile Research Institute – on behalf of Zhang Chen,
CTO of the Security Department of the China Mobile Design Institute – presented the different
strategies for improving generalization ability for deepfake detection tools (tools able to detect
deepfakes created using different techniques). Some of the strategies presented included
data enhancement, adversarial training, adversarial attack, self-supervised learning, multi-
task learning, and image reconstruction. With the introduction of text-to-video generative AI
models and the development of related technologies, deepfake techniques will no longer be
limited to local replacement and modification but will be able to synthesize entire, realistic
scenes. As a result, the authenticity of multimedia will become even more difficult to discern.
Figure 6: How to improve the generalization ability of deepfake detection
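The adversarial-training strategy listed above can be illustrated with a minimal sketch. The toy logistic-regression model, the FGSM-style (fast gradient sign method) perturbation, and all parameters below are illustrative assumptions, not the method presented at the workshop:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, t, w, b, eps):
    """FGSM-style perturbation of one input x against a logistic model.

    For binary cross-entropy loss, d(loss)/dx = (p - t) * w, so stepping
    along sign(grad) makes the sample harder to classify correctly.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - t) * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Adversarial training: fit on a mix of clean and perturbed samples."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        # Regenerate adversarial examples against the current parameters.
        X_adv = np.array([fgsm_perturb(x, t, w, b, eps) for x, t in zip(X, y)])
        X_all = np.vstack([X, X_adv])
        y_all = np.concatenate([y, y])
        # One gradient-descent step on the combined (clean + adversarial) batch.
        p = sigmoid(X_all @ w + b)
        w -= lr * (X_all.T @ (p - y_all)) / len(y_all)
        b -= lr * np.mean(p - y_all)
    return w, b
```

The same loop structure carries over to deep detectors: the detector is repeatedly attacked with perturbations crafted against its current weights, and those perturbed samples are folded back into training, which is one way to improve robustness to unseen manipulation techniques.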
Jonghyun Woo, CEO of DualAuth and President of the Passwordless Alliance, gave a
presentation focused on how to protect AI systems and training data from deepfakes. The
presentation offered an introduction to two ITU international standards, Recommendation
ITU-T X.1280 "Framework for out-of-band server authentication using mobile devices" and
Recommendation ITU-T X.1220 "Security framework for storage protection against malware
attacks on hosts". X.1280 can be used to authenticate the AI system that the user is connecting
to, while X.1220 can protect training data and models from unauthorised access, tampering, or
malware that would compromise the integrity of the training dataset.
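The general idea behind out-of-band server authentication can be sketched in simplified form. This is not the actual X.1280 protocol; the HMAC-based code derivation and all names below are invented for illustration. The user trusts the server only if the code it shows matches one computed independently on the user's mobile device:

```python
import hashlib
import hmac

def server_auth_code(server_secret: bytes, session_id: str) -> str:
    """Server derives a short verification code bound to this session."""
    mac = hmac.new(server_secret, session_id.encode(), hashlib.sha256)
    return mac.hexdigest()[:8]

def mobile_expected_code(server_secret: bytes, session_id: str) -> str:
    """Mobile device computes the expected code over an independent
    channel (simulated here by knowing the registered server secret)."""
    return server_auth_code(server_secret, session_id)

def user_verifies(shown: str, expected: str) -> bool:
    """User compares the code shown by the server with the phone's code."""
    return hmac.compare_digest(shown, expected)
```

A spoofed AI system that does not hold the registered secret cannot produce a matching code, so the mismatch on the out-of-band device exposes it before the user submits any data.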