
Standards and policy considerations for multimedia authenticity

By Alessandra Sala, Chair of the AI and Multimedia Authenticity Collaboration

Our understanding of creativity, truth, and integrity is undergoing a radical transformation with the rise of artificial intelligence (AI).

AI-generated and -edited content is becoming the new norm, especially among younger, AI-native communicators and consumers. The risks and rewards of this shift, meanwhile, can be hard to tell apart.

Synthetic media, once a novel anomaly, is now seamlessly woven into our cultural fabric – reshaping communication, democratizing access to content-creation tools, and simultaneously challenging long-standing assumptions about authenticity and trust.

The shifting content paradigm prompted a groundbreaking initiative: the AI and Multimedia Authenticity Standards Collaboration, first announced at last year’s AI for Good Global Summit.

A call to action in the AI era

Led by the World Standards Cooperation – the partnership of the International Electrotechnical Commission (IEC), International Organization for Standardization (ISO), and the International Telecommunication Union (ITU) – this initiative unites standards developers, technology leaders, policymakers, researchers, and civil society.

Its mission is clear: to respond decisively to the risks posed by deepfakes, misinformation, and synthetic content misuse, while fostering the creative and societal benefits of AI.

This initiative aims to redefine digital integrity with an inclusive and future-oriented framework of transparency, accountability, and ethical innovation. By building a cohesive ecosystem of international standards, the collaboration is laying the foundations for a world where both creators and consumers can trust what they see, hear, and share.

Our work, furthermore, helps realize the objectives of the Global Digital Compact, adopted by the UN General Assembly as a framework for countries and industries to ensure that AI and other technologies benefit all of humanity.

Enabling alignment and clarity

We are proud to announce the publication of two landmark papers – one technical, the other more policy-focused.

These are our first major deliverables from the AI and Multimedia Authenticity Standards Collaboration. Together, these papers open the way for both technical alignment and regulatory clarity.

Technical paper:
AI and multimedia authenticity standards – Mapping the standardization landscape

This comprehensive paper presents a systematic overview of current standards and specifications at the intersection of digital media authenticity and AI.

It identifies five key clusters:

  • Content provenance
  • Trust and authenticity
  • Asset identifiers
  • Rights declarations
  • Watermarking

Each standard is briefly described with links to more detailed resources.
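
To make the watermarking cluster above concrete, here is a minimal sketch in Python of the simplest invisible-watermark idea: least-significant-bit (LSB) embedding. The example is a toy of our own, not a scheme drawn from any surveyed standard; the watermarking approaches described in the paper are far more robust, designed to survive compression, resizing, and editing.

  import numpy as np

  def embed_lsb_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
      """Hide a bit sequence in the least significant bits of an 8-bit image."""
      flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
      if bits.size > flat.size:
          raise ValueError("watermark longer than image capacity")
      # Clear each carrier value's lowest bit, then write one watermark bit into it.
      flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
      return flat.reshape(pixels.shape)

  def extract_lsb_watermark(pixels: np.ndarray, length: int) -> np.ndarray:
      """Read back the first `length` watermark bits from the image."""
      return pixels.flatten()[:length] & 1

  # Usage: hide and recover a 16-bit payload in a random grayscale image.
  rng = np.random.default_rng(0)
  image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
  payload = rng.integers(0, 2, size=16, dtype=np.uint8)
  marked = embed_lsb_watermark(image, payload)
  assert np.array_equal(extract_lsb_watermark(marked, 16), payload)

The fragility of this toy scheme (a single re-encode destroys the payload) is exactly why the standards mapped in the paper matter.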

By charting the contributions of various standards bodies, the paper highlights existing coverage. Just as importantly, it reveals where the gaps are in today’s standards ecosystem, providing an invaluable guide for anyone navigating this complex terrain.

To design, implement, and validate efficient solutions, we need a strong understanding of the challenges at play. Studies like this are one of the ways we can foster impactful innovation that drives responsible and ethical AI across a wide range of applications.

Ultimately, our technical paper sets out to inform and inspire the next wave of standardization efforts in support of responsible innovation, rights protection, and trustworthy AI systems.

Standards capture innovation to spur more innovation. They also enable interoperability, opening global markets for new inventions. This paper – and its future iterations – should help industry players quickly find the solutions they need for multimedia authenticity.

Policy paper:
Building trust in multimedia authenticity through international standards

In a world where misinformation and disinformation spread faster than regulation can keep up, international standards and practical tools build trust and resilience across borders.

Our policy paper is designed for policy-makers and regulators navigating the fast-changing world of synthetic multimedia. It demystifies the regulation of AI-generated and -manipulated content through a clear, structured roadmap to address prevention, detection, and response strategies.

The paper also highlights the delicate balance needed to preserve freedom of expression and innovation while protecting society from the harms of manipulated media.

Key elements include:

  • A regulatory options matrix to help define what to regulate, how, and to what extent.
  • An overview of supporting tools – like standards, conformity assessments, and enabling technologies – and their value in promoting regulatory coherence and alignment across borders.
  • Checklists to help both regulators and tech providers design regulations and enforcement mechanisms, develop resilient technologies, and prepare for crises.

With tools like these, we are equipping governments and regulators with a common, practical and scalable framework that can be applied across diverse contexts.

The path forward

As AI-generated media expands in scope and sophistication, global collaboration is essential.

Digital content can be powerful and creative. But it must also be traceable, trustworthy, and ethically produced.

Experts dedicated to multimedia authenticity are gathering critical feedback and showcasing the urgent need for technical and policy alignment. International dialogues and standards bodies have helped shape a shared understanding of risks, best practices, and opportunities for joint action.

No single organization can tackle this challenge alone. Our collaboration leverages the experience of the IEC, ISO and ITU in uniting a broad and diverse range of stakeholders.

Big tech is represented by the likes of Adobe, Microsoft, and Shutterstock.

Standards bodies such as the Content Authenticity Initiative (CAI), the Coalition for Content Provenance and Authenticity (C2PA), and the Internet Engineering Task Force (IETF) are also involved.

So are Germany’s Fraunhofer research institute, the Swiss Federal Institute of Technology in Lausanne (EPFL), and CAICT, a technology-focused think tank based in China, as well as authentication specialists DataTrails and Deep Media, and the human rights organization Witness.

Making standards work for everyone

A broad range of stakeholders, from countries at all stages of economic development, can influence underlying technical decisions in a meaningful way. International standards that are developed inclusively will reflect real needs.

By the same token, we can only tackle issues like misinformation and disinformation by collaborating with all key players, including civil society, academic institutions, public service media and others with a vested interest in ensuring online content can be trusted.

These first two papers represent an important milestone on that journey. But they are just the beginning.

Stay tuned as our collaboration expands its reach, deepens its partnerships, and continues shaping the standards that will define the next chapter of our digital reality.

Explore the papers on our webpage:

  • Technical paper
  • Policy paper

Read the press release

Follow the journey on LinkedIn: AI and Multimedia Authenticity Standards Collaboration

Header image credit: AdobeStock
