Page 75 - AI Standards for Global Impact: From Governance to Action
videos removed were done so through automated technology. Some of the methods and
technologies that support these efforts include:
• Computer vision models that can identify objects such as weapons
• Audio banks that help detect sounds that match, or are modified versions of, known audio
• Text-based models that review written content such as comments and hashtags, with
natural language processing used to interpret the context surrounding the content, for
example to determine whether words constitute hate speech
• LLMs, which are also used to scale and improve content moderation; some of these
models can extract specific misinformation claims from videos for moderators to
assess
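The text-based tier described above can be illustrated with a minimal sketch. This is not TikTok's actual system; the blocklist, context cues, and triage labels below are hypothetical stand-ins for the trained NLP models a real platform would use. The key idea from the list survives, though: a term match alone does not decide the outcome, because the surrounding context can change what the words mean.

```python
import re

# Hypothetical blocklist and context cues; a production system would use
# trained classifiers rather than keyword lists.
FLAGGED_TERMS = {"badword"}
MITIGATING_CONTEXT = {"reporting on", "quoting", "condemning"}

def moderate_comment(text: str) -> str:
    """Return a triage label for a comment: 'allow', 'review', or 'remove'.

    A flagged term alone is not enough to remove content; mitigating
    context (e.g. news reporting about a slur) downgrades the decision
    to human review instead, mirroring how NLP is used to interpret
    the context surrounding the content.
    """
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    if not tokens & FLAGGED_TERMS:
        return "allow"
    lowered = text.lower()
    if any(cue in lowered for cue in MITIGATING_CONTEXT):
        return "review"  # likely discussion of the term, not an attack
    return "remove"
```

In practice the interesting design choice is the middle label: routing ambiguous matches to human moderators rather than auto-removing them is what the gradual, benchmark-driven rollout described below is meant to protect.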
In the context of content moderation and tackling misinformation, in particular, TikTok aims to
set firm quality benchmarks for new enforcement technologies, taking a gradual approach to
rolling out new models in partnership with experts.
Umanitek highlighted how its work can complement and scale the work of fact-checkers,
as well as the gaps and challenges they face today. Given the rapid evolution of deepfakes,
said Umanitek, there is a need to evolve beyond fragmented fact-checking tools and build
infrastructure-level trust systems that allow fact-checkers, platforms, content providers, and
NGOs to coordinate effectively. During the session, Umanitek demonstrated the tool it is
developing and showed how it can help fact-checkers.
Ant Group highlighted the challenges deepfakes pose in financial services, especially in the
context of online eKYC (electronic Know Your Customer), where cameras can be hacked to inject
a fake image of a face. Ant Group shared insights into the tools it uses to verify the authenticity
of the image and other information provided during the eKYC process.
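One simple signal for the camera-injection attack described here can be sketched in a few lines. This is an illustrative heuristic, not Ant Group's method: the assumption is that a hijacked camera replaying a single fake face image produces nearly identical frames, while a live feed shows natural frame-to-frame variation. Frames are modeled here as flat lists of grayscale pixel values, and the threshold is an arbitrary placeholder.

```python
def frame_variation(frames: list[list[int]]) -> float:
    """Mean absolute pixel difference between consecutive frames.

    Frames are assumed to be equal-length flat lists of pixel values;
    at least two frames are required.
    """
    diffs = [
        sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        for a, b in zip(frames, frames[1:])
    ]
    return sum(diffs) / len(diffs)

def looks_injected(frames: list[list[int]], threshold: float = 1.0) -> bool:
    """Flag a feed whose variation is suspiciously low (e.g. a replayed
    static image injected into the eKYC camera stream)."""
    return frame_variation(frames) < threshold
```

A real eKYC pipeline would combine many stronger signals (challenge-response liveness prompts, sensor metadata, deepfake detectors); the point of the sketch is only that an injected static image behaves measurably differently from a live capture.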