Page 90 - AI Standards for Global Impact: From Governance to Action
4) Standardization pathways
There was a constructive divergence on whether standardization should begin now or follow
industry convergence:
– Some participants advocated for a formal, multi-stakeholder standardization process to
establish consensus on requirements.
– Others advocated for a more flexible, modular approach that allows for multiple
architectures to evolve simultaneously.
Key takeaways:
1) Identity management for AI agents is technically feasible but requires a trust-based
framework, especially in open or cross-domain contexts.
2) Delegation of authority should be scoped, traceable, and revocable, supported by tools
that prevent excessive permission propagation.
3) Agent registration and “Agent Card” mechanisms should be developed to support
protocols like MCP and A2A.
4) Cross-sector multi-stakeholder coordination is urgently needed to prevent fragmentation
and to ensure that AI agents operate securely and transparently (avoid repeating the
mistakes of fragmented human identity systems).
5) ITU and other standards bodies are encouraged to support early-stage coordination, such
as shared requirement frameworks or best practices.
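Takeaway 2 above (scoped, traceable, revocable delegation) can be illustrated with a short sketch. This is a hypothetical illustration, not a standardized API: the `Delegation` class, the `REVOKED` registry, and all field names are assumptions invented for this example. The key property it demonstrates is that a sub-delegation may only narrow, never widen, its parent's scope, and that revoking any link in the chain invalidates everything delegated through it.

```python
# Illustrative sketch of scoped, traceable, revocable delegation for AI
# agents. All names are hypothetical, not part of any existing standard.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REVOKED: set[str] = set()  # shared revocation registry (traceability store)


@dataclass
class Delegation:
    grant_id: str          # unique id, enables tracing and revocation
    delegator: str         # identity of the granting agent or human
    delegatee: str         # identity of the receiving agent
    scope: frozenset[str]  # permissions explicitly granted
    expires: datetime      # delegations are time-bounded
    parent: "Delegation | None" = None  # chain back to the root grantor

    def grant(self, delegatee: str, scope: set[str], ttl: timedelta,
              grant_id: str) -> "Delegation":
        # Prevent excessive permission propagation: a sub-delegation may
        # only narrow, never widen, the parent scope.
        if not scope <= self.scope:
            raise PermissionError("sub-delegation exceeds parent scope")
        return Delegation(grant_id, self.delegatee, delegatee,
                          frozenset(scope),
                          min(self.expires,
                              datetime.now(timezone.utc) + ttl),
                          parent=self)

    def is_valid(self) -> bool:
        # A grant holds only if every link in the chain back to the root
        # is both unexpired and unrevoked.
        d = self
        now = datetime.now(timezone.utc)
        while d is not None:
            if d.grant_id in REVOKED or now >= d.expires:
                return False
            d = d.parent
        return True
```

For example, if a user grants an agent `{"read", "write"}` and the agent sub-delegates `{"read"}` to a second agent, any attempt to sub-delegate `{"admin"}` fails, and revoking the root grant immediately invalidates the whole chain.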
12.5 Interplay of AI and cybersecurity: The good, the bad, and the ugly
This session discussed how AI is a double-edged sword transforming both defence and attack
sides of cybersecurity. The interaction between AI and cybersecurity is a complex and evolving
landscape, encompassing positive advances, potential threats, and ethical challenges.
Key issues discussed:
a) From a developing country perspective, AI development faces significant challenges
rooted primarily in resource constraints and data scarcity. Like shadow IT, "shadow AI"
is spreading through organizations and poses a major risk. Testing and data quality
should be two key considerations for AI standardization.
b) Synthetic (AI-generated) content is expected to reach 50 per cent of content production
in 2025, and AI may soon pass the Turing test. A systematic solution needs three layers:
(1) technology layer: digital signatures or watermarks should be added to content;
(2) application layer: platforms should bear liability for checking content; (3) social
layer: more verified content channels should be provided.
c) Tools alone may become obsolete in 1-2 years, much as anti-virus measures must be
continuously adapted. This underscores the need for defensive measures to keep pace.
d) The quality of the resulting AI model is directly tied to the quality of the input data:
poor data yields flawed models, so data cleaning is a critical challenge. The session also
discussed the paradox of AI reviewers writing reviews under directive instructions (e.g.
to avoid certain terms) in situations where a human reviewer would not interpret those
instructions the same way.
e) AI is an existential threat to some areas (e.g. justice as a whole), and legal departments
are facing a predominance of hallucination attacks. Three types of hallucination affect a
case: its name, its precise content, and its reasoning. As presenting a case and its evidence
carries liabilities, these systematic attacks risk undermining the credibility of justice
systems, which could spiral into distrust from society.
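The "technology layer" of the three-layer solution in item b) can be sketched briefly. This is a minimal illustration only: it uses an HMAC over the content as a stand-in for the digital signature or watermark the session proposed, and the key name and function names are assumptions for this example. A real provenance scheme would use asymmetric signatures and standardized manifests rather than a shared secret.

```python
# Minimal sketch of attaching a verifiable provenance tag to generated
# content (the "technology layer"), with the platform-side check that the
# "application layer" would perform. HMAC is a simplifying stand-in for a
# real digital signature or watermark; all names are illustrative.
import hashlib
import hmac

SIGNING_KEY = b"provider-signing-key"  # placeholder, not a real key scheme


def sign_content(content: str) -> str:
    # Producer side: derive a provenance tag bound to the exact content.
    return hmac.new(SIGNING_KEY, content.encode("utf-8"),
                    hashlib.sha256).hexdigest()


def verify_content(content: str, signature: str) -> bool:
    # Platform side: recompute the tag and compare in constant time, so
    # any alteration of the content invalidates the signature.
    expected = sign_content(content)
    return hmac.compare_digest(expected, signature)
```

The design point is simply that the tag is bound to the content bytes: tampering with either the content or the tag makes verification fail, which is what lets the application layer reject unverifiable material.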