AI Standards for Global Impact: From Governance to Action
• Participants emphasized that standards can be forward-looking, inclusive of diverse
stakeholders, and complementary to national legislation, particularly in sectors like
education, energy, and health, while also helping harmonize approaches to issues like
synthetic media across markets.
• Participants emphasized the need for inclusive frameworks in which regulators and
legislators can participate early in the development of AI standards to bridge the gap
between technology advancement and regulation, with a focus on skills development as
a foundational element.
• Participants highlighted the challenges of aligning legislative aspirations, such as those
in the GDPR, with practical business practices, citing cookie consents as an example, and
noted that this misalignment becomes even more complex with AI, given its diverse
applications and the need to balance ethical principles with technological advancement.
• Participants also noted the gap between ambitious AI principles (e.g., explainability) and
the current maturity of technologies needed to achieve them, questioning how standards
can effectively bridge this gap.
Figure 14: Bilel Jamoussi, Deputy to the Director, Telecommunication Standardization
Bureau (TSB), International Telecommunication Union (ITU)
Q2. What AI risks must be prioritized for global standardization today?
• Participants discussed prioritizing risks for AI standardization, highlighting three key
areas: information authenticity; data security throughout data lifecycles; and AI reliability
to address trust, accountability, and hallucination issues. Participants also noted that
standards often emerge naturally from industry needs alongside risk-focused efforts.
• Participants identified key risks for AI standardization, including data quality and accuracy
(especially in autonomous driving) and misinformation and deepfakes (validated as a
major concern by AI tools themselves), and noted that while standards can provide definitions
and frameworks, they cannot address the root sources of misinformation directly.
• Participants highlighted an urgent need for global standardization at the intersection of AI
and cybersecurity, emphasizing risks related to intellectual property, critical infrastructure
(e.g., telecommunications), cybercrime, autonomous driving, and social networks, with
standards serving as a means for secure, compliant operations and governance.