Page 14 - Detecting deepfakes and generative AI: Report on standards for AI watermarking and multimedia authenticity workshop
iii) Technologies for the creation of deepfakes are becoming more sophisticated and widely
available, making it more difficult to identify whether or not content is AI-generated.
iv) Governments have introduced specific legislation requiring AI-generated content to be
labelled, with the aim of addressing deepfakes and preventing AI misuse.
v) Legislation to address deepfakes should take consumer rights into account, and the
burden of proof should not lie with the consumer.
vi) Technology and media companies could work towards offering tools for labelling
AI-generated content and making it possible to identify the people responsible for
deepfakes created with malicious intent.
vii) Deepfake is a pejorative term, but it should be recognized that there are circumstances
where AI-generated images are desirable, for example, when an actor's appearance is
altered to play a character of a different age, or people's appearances are simulated in a
virtual environment.
viii) There is a need for standards for multimedia labelling, authenticity verification, and
the detection of deepfakes.
ix) New systems, often blockchain-based, are being created to provide traceability and
digital identity verification.