Information Integrity

Objective 3: Foster an inclusive, open, safe and secure digital space that respects, protects and promotes human rights

35 (e) Encourage United Nations entities, in collaboration with Governments and relevant stakeholders, to assess the impact of misinformation and disinformation on the achievement of the Sustainable Development Goals;


36 (a) Call on digital technology companies and social media platforms to enhance the transparency and accountability of their systems, including terms of service, content moderation and recommendation algorithms and handling of users’ personal data, in local languages, to empower users to make informed choices and provide or withdraw informed consent;


36 (b) Call on social media platforms to provide researchers with access to data, with safeguards for user privacy, to ensure transparency and accountability and to build an evidence base on how to address misinformation, disinformation and hate speech that can inform government and industry policies, standards and best practices;


36 (c) Call on digital technology companies and developers to continue to develop solutions, and to publicly communicate actions taken, to counter potential harms from artificial intelligence-enabled content, including hate speech and discrimination. Such measures include the incorporation of safeguards into artificial intelligence model training processes, the identification of artificial intelligence-generated material, authenticity certification for content and its origins, labelling, watermarking and other techniques.