Information Integrity
Objective 3: Foster an inclusive, open, safe and secure digital space that respects, protects and promotes human rights
35 (a) Design and roll out digital media and information literacy curricula to ensure that all users have the skills and knowledge to interact safely and critically with content and with information providers, and to enhance resilience against the harmful impacts of misinformation and disinformation;
35 (b) Promote diverse and resilient information ecosystems, including by strengthening independent and public media and supporting journalists and media workers;
35 (c) Provide, promote and facilitate access to and dissemination of independent, fact-based, timely, targeted, clear, accessible, multilingual and science-based information to counter misinformation and disinformation;
35 (d) Promote access to relevant, reliable and accurate information in crisis situations, to protect and empower those in vulnerable situations;
35 (e) Encourage United Nations entities, in collaboration with Governments and relevant stakeholders, to assess the impact of misinformation and disinformation on the achievement of the Sustainable Development Goals.
36 (a) Call on digital technology companies and social media platforms to enhance the transparency and accountability of their systems, including terms of service, content moderation, recommendation algorithms and the handling of users’ personal data, in local languages, to empower users to make informed choices and to provide or withdraw informed consent;
36 (b) Call on social media platforms to provide researchers with access to data, with safeguards for user privacy, to ensure transparency and accountability and to build an evidence base on how to address misinformation, disinformation and hate speech that can inform government and industry policies, standards and best practices;
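Paragraph 36 (b) does not specify how researcher access should be reconciled with user privacy. One widely discussed safeguard is differential privacy, under which a platform releases only noised aggregate statistics rather than raw user records. The sketch below is a minimal illustration of that technique, not a description of any platform’s actual interface; the counts, the epsilon value and names such as dp_count are hypothetical.

    import math
    import random

    def laplace_noise(scale: float) -> float:
        """Sample Laplace(0, scale) noise as the difference of two exponentials."""
        u = 1.0 - random.random()  # in (0, 1], avoids log(0)
        v = 1.0 - random.random()  # in (0, 1]
        return scale * math.log(u / v)

    def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
        """Release a count under epsilon-differential privacy.

        If each user changes the count by at most `sensitivity`, adding
        Laplace(sensitivity / epsilon) noise satisfies epsilon-DP.
        """
        return true_count + laplace_noise(sensitivity / epsilon)

    # Hypothetical aggregate engagement counts released to researchers.
    raw_counts = {"claim_a_shares": 1204, "claim_b_shares": 87}
    released = {k: max(0.0, dp_count(v, epsilon=0.5)) for k, v in raw_counts.items()}
    print(released)

Clamping negative releases to zero is post-processing and does not weaken the privacy guarantee; smaller epsilon values give stronger privacy at the cost of noisier statistics.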
36 (c) Call on digital technology companies and developers to continue to develop solutions, and to publicly communicate the actions taken, to counter potential harms arising from artificial intelligence-enabled content, including hate speech and discrimination. Such measures include the incorporation of safeguards into artificial intelligence model training processes, the identification of artificial intelligence-generated material, authenticity certification for content and its origins, labelling, watermarking and other techniques.
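The measures enumerated in 36 (c) are distinct techniques. As a hedged illustration of just one of them, the sketch below implements the statistical “green-list” watermark detection idea proposed for language-model text by Kirchenbauer et al. (2023): a generator embedding the watermark biases each token choice toward a pseudorandom, key-dependent subset of the vocabulary, and a detector holding the key counts how often that bias appears. All identifiers are hypothetical, the SHA-256 hash stands in for the scheme’s keyed vocabulary partition, and real systems operate on model tokens and logits rather than whitespace-split words.

    import hashlib
    import math

    def in_green_list(key: str, prev_token: str, token: str, gamma: float = 0.5) -> bool:
        """Pseudorandomly assign `token` to the key-dependent green list.

        The assignment depends on the secret key and the previous token,
        so only a key holder can recompute the partition.
        """
        digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
        return digest[0] / 256.0 < gamma  # green with probability ~gamma

    def watermark_z_score(tokens: list[str], key: str, gamma: float = 0.5) -> float:
        """z-score against the null hypothesis that the text is unwatermarked.

        Unwatermarked text hits the green list with probability ~gamma;
        watermarked generation oversamples it, inflating the score.
        """
        n = len(tokens) - 1
        greens = sum(in_green_list(key, tokens[i], tokens[i + 1], gamma)
                     for i in range(n))
        return (greens - gamma * n) / math.sqrt(gamma * (1.0 - gamma) * n)

    # A z-score above roughly 4 would flag text as likely watermarked.
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(watermark_z_score(sample, key="platform-secret"))

A complementary family of measures, authenticity certification for content and its origins (for example, the C2PA provenance standard), instead attaches signed metadata recording a file’s origin and edit history; the two approaches address detection and attribution respectively.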