Use Case 6: Artificial Intelligence-Generated Content Moderation
Country: China
Organization: Hangzhou Dianzi University
Department/Division/Group: School of Cyberspace, Sino-French Joint Laboratory for Digital
Media Forensics of Zhejiang Province
Contact Person(s): Tong Qiao, tong.qiao@hdu.edu.cn, (+86) 15268110192
1 Use Case Summary Table
Category: AI Security; Content Moderation; Multimedia Forensics

Problem Addressed: Content moderation for AIGC, focusing on hallucinations, synthetic content, and deviations from established standards, in order to enhance the security and reliability of AIGC.

Technology Keywords: Unsupervised learning, data augmentation, multi-level alignment mechanism, multimodal large language models, generative models, AIGC moderation

Data Availability: Data is publicly available.

Testbeds or Pilot Deployments: At the Inclusion Global Multimedia Deepfake Detection Challenge, the detection accuracy reached approximately 95%. Applied in Public Security Bureaus in China to combat telecom fraud involving the use of artificial intelligence.
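As an illustration only (not part of the source use case), the following is a minimal sketch of how a detection accuracy figure such as the one reported above might be computed for a binary real-vs-generated classifier. The model, data loader, decision threshold, and output shape are all assumptions.

```python
# Hypothetical sketch: detection accuracy of a real-vs-generated classifier.
# "model", "loader", and the single-logit output are placeholder assumptions.
import torch
from torch.utils.data import DataLoader

def detection_accuracy(model, loader, device="cpu", threshold=0.5):
    """Fraction of samples whose real (0) / AI-generated (1) label is predicted correctly."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            # Assumes the model outputs one logit per image, shape [batch, 1].
            scores = torch.sigmoid(model(images.to(device))).squeeze(1)
            preds = (scores >= threshold).long().cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# Usage with placeholder objects:
# acc = detection_accuracy(my_detector, DataLoader(my_eval_set, batch_size=32))
# print(f"Detection accuracy: {acc:.1%}")
```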
2 Use Case Description
2.1 Description
This case aims to address issues such as hallucination in large language models, the generation of fake content, and AIGC that does not meet expected standards. For instance, the European Union's AI Act (2024), the world's first comprehensive regulation on AI, requires AI

