Detecting deepfakes and generative AI: Report on standards for AI watermarking and multimedia authenticity workshop




                      Annex 1: EU AI Act


                      Some governments have introduced specific legislation to address deepfakes and generative
                      AI, prevent misuse, and ensure ethical AI development and deployment. Deepfakes are
                      addressed specifically under the EU AI Act because of the risks they pose. The main
                      provisions regarding deepfakes include:

                      1)   Transparency Obligations:
                           •  Developers and users of deepfake technologies are required to disclose clearly that
                              the content is AI-generated. This aims to prevent misinformation and to ensure
                              that audiences are aware of the artificial nature of the content they are viewing.
                              It also supports recognition of the authors whose works were used in creating the
                              AI system.
                           •  Labelling of AI-generated content, by means of classification and watermarking of
                              deepfakes, is mandated or recommended.

                      2)   High-Risk Classification:
                           •  Deepfakes used in contexts that can significantly impact individuals’ rights or society
                              (e.g., political manipulation, defamation) may be classified as high-risk and thus
                              subject to stricter regulatory requirements.

                      3)   Accountability and Traceability:
                           •  The Act requires traceability and accountability in the creation and dissemination
                              of deepfakes. This involves maintaining records of the processes and data used to
                              generate deepfakes, enabling authorities to track their origins if necessary.

                      4)   Prohibited Uses:
                           •  Certain malicious uses of deepfakes, such as those intended for social scoring or illegal
                              surveillance, are prohibited under the Act's unacceptable risk category.

                      The AI Act also clarifies that providers of general-purpose AI models must put in place a
                      policy to comply with EU copyright law.

                      The EU's Code of Practice on Disinformation also addresses deepfakes; violators can face
                      fines of up to 6 percent of global revenue. The code was introduced as a voluntary self-
                      regulatory instrument in 2018 but is now backed by the Digital Services Act, which came
                      into force in November 2022 and increases the monitoring of digital platforms for various
                      kinds of misuse. Under the EU AI Act, deepfake providers are subject to transparency and
                      disclosure requirements.


























