content used by generative AI models, for example as training data). On the output side, in most
jurisdictions, whether AI-generated content infringes the copyright held by the creator of a
pre-existing work depends on the degree of the AI-generated content's similarity to the
copyright-protected work, which can only be assessed on a case-by-case basis. On the input
side, copyright holders argue that the unauthorized use of their
copyright-protected works to train AI models constitutes copyright infringement. The most
important question in this regard is whether such use is covered by limitations to copyright,
such as fair use in the US, or exceptions such as those that allow text and data mining, under
specific conditions, for the large-scale harvesting of creative works for automated computational
analysis. The EU, for example, has a general text and data mining exception, but also allows
copyright holders to opt out by reserving their rights in an appropriate manner.
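By way of illustration, one mechanism already used in practice for signalling such reservations to web crawlers is the robots.txt convention, which some AI crawler operators honour. The sketch below, using only Python's standard library, checks whether a page may be fetched by a given crawler; the user-agent string "ExampleAIBot" is a hypothetical name for illustration, not a standardized identifier.

```python
# Minimal sketch: checking a robots.txt-based opt-out before collecting a
# page for training data. "ExampleAIBot" is a hypothetical crawler name;
# real AI crawlers publish their own user-agent strings.
from urllib import robotparser

def may_use_for_training(site: str, page_url: str, bot_name: str) -> bool:
    """Return True if the site's robots.txt does not disallow the crawler."""
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site}/robots.txt")
    rp.read()  # fetch and parse the robots.txt file
    return rp.can_fetch(bot_name, page_url)

if __name__ == "__main__":
    ok = may_use_for_training(
        "https://example.com", "https://example.com/article.html", "ExampleAIBot"
    )
    print("Page may be crawled for training:", ok)
```

A convention of this kind only becomes an effective rights-reservation mechanism if crawler operators agree on, and actually honour, the signal, which is where standardization comes in.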
Workshop panellists suggested that this is also an area in which standardization could be
highly beneficial. Even where opt-out mechanisms exist by law, they do not yet seem operational
in practice. Their practical usefulness could increase considerably if international standards
enabled copyright holders to control the use of their works in training data efficiently and across
AI platforms. Moreover, in the interest of transparency, some jurisdictions such as the EU require
providers of AI models to disclose the content they use as training data. Such requirements
could be implemented more easily if the way in which this information should be disclosed
were further standardized.
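To make the potential benefit of standardized disclosure concrete, the sketch below shows one hypothetical shape a machine-readable training-data disclosure record could take. All field names are illustrative assumptions and do not correspond to any adopted standard.

```python
# Hypothetical sketch of a standardized training-data disclosure record.
# Every field name here is an illustrative assumption, not an adopted standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingSourceDisclosure:
    source_name: str                   # e.g. a dataset or collection name
    source_url: str                    # where the content was obtained
    content_type: str                  # "text", "image", "audio", ...
    rights_reservation_checked: bool   # whether an opt-out signal was honoured

disclosures = [
    TrainingSourceDisclosure(
        source_name="Example News Archive",
        source_url="https://example.com/archive",
        content_type="text",
        rights_reservation_checked=True,
    ),
]

print(json.dumps([asdict(d) for d in disclosures], indent=2))
```

A shared schema of this kind would let regulators and copyright holders process disclosures from any provider with the same tooling, rather than parsing a different format per AI platform.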
Panellists expressed concern about the consequences of misinformation and disinformation
on political and social discourse, consequences that could become more pronounced as
technologies to create deepfakes become more sophisticated and widely available. Table
1 in Annex 2 summarizes some of the ways that deepfakes could affect individuals,
organizations, and society at large.
Helena Leurent, Director General of Consumers International, highlighted that legislation
being considered by governments to address deepfakes should take consumer rights into
account and that the burden of proof should not lie with the consumer. As consumers
will not have access to detection tools to identify deepfakes, AI-generated content should be
clearly labelled as such.
There was general agreement among panellists that technology companies could work towards
offering tools for labelling AI-generated content and could make AI systems sound less
human, so that people can be certain they are interacting with an AI system. The recently
introduced EU Artificial Intelligence Act contains some policy measures and requirements for
transparency aimed at mitigating risks related to AI-generated multimedia and deepfakes (see
Annex 1 for more information).
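As a minimal sketch of what labelling could look like at the file level, the example below embeds and reads back a plain-text tag in PNG metadata using the Pillow library. The "ai_generated" key is an assumption for illustration; provenance standards such as C2PA Content Credentials instead attach cryptographically signed manifests.

```python
# Minimal sketch of labelling an image as AI-generated via PNG metadata.
# The "ai_generated" key is illustrative; real provenance systems (e.g. C2PA)
# use signed manifests rather than a plain, easily removable text tag.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, adding a metadata tag declaring it AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> str | None:
    """Return the label if present, else None."""
    img = Image.open(path)
    return getattr(img, "text", {}).get("ai_generated")
```

A plain metadata tag like this is trivially stripped or altered, which is precisely why the standards discussion centres on robust watermarking and signed provenance rather than simple labels.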
The panellists made clear that there will not be a single solution to the deepfake problem,
but rather a combined effort involving technology, legislation, self-regulation, transparency
and labelling, standards, education, and incentives for proper usage of AI-generated content.
Key takeaways from this session are summarized below:
i) Deepfakes are expanding rapidly, in terms of quantity, quality, and variety of impacts on
individuals, organizations, and society at large.
ii) It is generally agreed that generative AI should adhere to current legislative measures for
transparency that support purposes such as the protection of copyright.