Page 424 - AI for Good Innovate for Impact
2.3 Future work
• Data collection
• Proof of concept development
• Model development
• Create new variations/extensions to the same use case
• Standards development related to the use case
• Set up reference tools, notebooks, and a simulation
o Completed in the first half of 2025: A diverse AIGC dataset is constructed from
the outputs of mainstream generative models to provide comprehensive data support
for subsequent content moderation tasks. An unsupervised domain adaptation strategy
is employed to extract and integrate multimodal features, supporting the development
of general-purpose multimodal forgery detection techniques. In parallel, a multi-level
safety standards alignment mechanism is designed to balance global commonalities
with regional specificities: this hierarchical approach ensures global compliance of
AIGC content while accommodating the cultural and societal norms of different
regions. Finally, the forgery detection and content moderation models are trained on
the constructed dataset.
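The multi-level alignment mechanism described above can be sketched as a global rule baseline combined with region-specific additions. The rule names, region identifiers, and labels below are illustrative assumptions, not the project's actual taxonomy:

```python
# Hypothetical sketch of a multi-level safety standards alignment check:
# a global baseline applies everywhere, and each region may add its own rules.
GLOBAL_RULES = {"deepfake_impersonation", "fraudulent_content"}
REGIONAL_RULES = {
    "region_a": {"local_rule_1"},   # placeholder region-specific rule sets
    "region_b": {"local_rule_2"},
}

def moderate(labels: set[str], region: str) -> str:
    """Return 'block' if the content's labels violate the global baseline
    or the region's additional rules; otherwise 'allow'."""
    applicable = GLOBAL_RULES | REGIONAL_RULES.get(region, set())
    return "block" if labels & applicable else "allow"

print(moderate({"fraudulent_content"}, "region_a"))  # global rule -> block
print(moderate({"local_rule_1"}, "region_b"))        # other region's rule -> allow
```

A violation of any applicable rule blocks the content, so global compliance is enforced uniformly while regional norms are layered on top.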
o Completed in the second half of 2025: An AIGC moderation platform is developed
based on the proposed forgery detection and content moderation models. Leveraging
this platform, pilot applications and industrial-scale deployments are carried out in
collaboration with local public security authorities and technology enterprises in China.
The platform supports continuous data updates and iterative model improvements
based on real-world testing outcomes. Efforts are also made to propose industry
standards in order to promote the widespread adoption of AIGC content moderation
technologies. In addition, open-source tools, guidance documents, and simulation
settings are provided to support broader implementation and research.
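The continuous-update cycle mentioned above could be structured as a feedback loop in which reviewed moderation outcomes from the pilots accumulate until a retraining step is triggered. This is a minimal sketch under assumed design choices; the class, threshold, and labels are hypothetical:

```python
# Assumed feedback-loop design: real-world moderation outcomes are fed back
# into the training pool, and a retraining step fires once enough new
# labelled samples accumulate.
class FeedbackLoop:
    def __init__(self, retrain_threshold: int = 100):
        self.pool = []                          # accumulated (sample, label) pairs
        self.retrain_threshold = retrain_threshold
        self.model_version = 0

    def record_outcome(self, sample: str, verified_label: str) -> None:
        """Store a human-reviewed moderation outcome from deployment."""
        self.pool.append((sample, verified_label))
        if len(self.pool) >= self.retrain_threshold:
            self.retrain()

    def retrain(self) -> None:
        """Placeholder retraining step; a real system would fine-tune the
        forgery detection and moderation models on self.pool here."""
        self.model_version += 1
        self.pool.clear()

loop = FeedbackLoop(retrain_threshold=2)
loop.record_outcome("clip_001", "forged")
loop.record_outcome("clip_002", "genuine")
print(loop.model_version)  # 1, since the threshold of 2 was reached
```

The threshold-triggered retraining keeps the models current with newly observed generation techniques without retraining on every single outcome.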

