Box 3: ITU-T Study Group 17 standardization activities in AI Security
ITU-T Study Group 17 is currently working on two technical reports on AI security, which
will be published following its meeting in December 2025 and will lead to new work
items on AI security. A brief outline of the two technical reports is provided below.
1) XSTR.AISec (ex TR.AISec): Artificial intelligence security standardization strategy
This report presents the key outcomes of the Correspondence Group on ITU-T SG17 Strategy
for Artificial Intelligence (AI) Security (CG-AISEC-STRAT), which has developed a
comprehensive strategy for advancing AI security within ITU-T SG17. Its purpose is to
position SG17 as a leading actor in AI security standardization and promote effective
coordination with other organizations.
The document sets out SG17’s strategic objectives, value proposition, SWOT analysis,
strategic directions supported by practical actions, and concrete recommendations.
It also describes approaches for communicating the strategy both within ITU-T and to
external stakeholders.
2) XSTR.se-AI (ex TR.se-AI): Security Evaluation on Artificial Intelligence Technology in ICT
This technical report presents an AI security evaluation framework comprising evaluation
dimensions (data, model, and environment) and evaluation indicators for each dimension.
It also proposes evaluation methods for data, models, and environments, providing
supporting evidence and a reference for evaluating the security of AI technology.
AI practitioners can use the report to assess the features and levels of particular security
categories or indicators for a given component of an AI technology or system. The report
can thus assist practitioners in identifying existing issues, making improvements, and
determining the most suitable scenarios for AI technology.
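As a rough illustration of how such a framework might be organized, the Python sketch below groups indicator scores under the report’s three stated dimensions and aggregates them per dimension. The indicator names, scores, and averaging rule are hypothetical placeholders; the report itself is not yet published, so none of this reflects its actual indicators or methods.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One evaluation indicator with a normalized score (0.0 worst, 1.0 best)."""
    name: str
    score: float

@dataclass
class DimensionReport:
    """All indicator results for one evaluation dimension."""
    dimension: str  # "data", "model", or "environment"
    indicators: list[Indicator] = field(default_factory=list)

    def aggregate(self) -> float:
        """Average the indicator scores for this dimension (illustrative rule)."""
        if not self.indicators:
            return 0.0
        return sum(i.score for i in self.indicators) / len(self.indicators)

# Hypothetical indicators along the three dimensions named in the report.
evaluation = [
    DimensionReport("data", [
        Indicator("training_data_provenance", 0.8),
        Indicator("poisoning_resistance", 0.6),
    ]),
    DimensionReport("model", [
        Indicator("adversarial_robustness", 0.5),
        Indicator("model_extraction_resistance", 0.7),
    ]),
    DimensionReport("environment", [
        Indicator("deployment_isolation", 0.9),
        Indicator("runtime_monitoring", 0.7),
    ]),
]

for report in evaluation:
    print(f"{report.dimension}: {report.aggregate():.2f}")
```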
Another idea is an OSI-style model for AI or agentic AI. The metaphor works to a certain
degree with respect to the need for a meta model (based on mental models), a communication
protocol, a security model, an identity model, and so on. But it also requires some new
considerations that were not present 40 years ago.
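To make the layering metaphor concrete, the sketch below arranges the elements just named (meta model, identity, security, communication) into an OSI-like stack. The ordering, and treating trust as its own layer, are assumptions for illustration; no such stack has been standardized.

```python
from enum import Enum

class AgenticAILayer(Enum):
    """A hypothetical OSI-like stack for agentic AI (ordering is illustrative)."""
    META_MODEL = 1     # shared mental model of tasks, goals, and the world
    IDENTITY = 2       # who or what each agent is, and its credentials
    SECURITY = 3       # confidentiality, integrity, access control
    COMMUNICATION = 4  # agent-to-agent messaging protocol
    TRUST = 5          # cross-cutting trust objectives (see below)

# As in OSI, each layer would provide services to the layer above it.
for layer in AgenticAILayer:
    print(f"Layer {layer.value}: {layer.name}")
```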
These new considerations include a full trust model and an agentic AI trust control plane in which:
– trust covers a number of design characteristics: security, privacy, safety, resiliency, etc.;
– human beings are kept in the loop, for example through a stop button or a let-go button
and other intervention scenarios;
– the trust control plane can specify trust objectives, or multiple objectives at once, to the
multi-agentic AI systems it controls (a sketch follows below).
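A minimal sketch of such a control plane is given below, assuming a hypothetical Agent interface. It shows the two ideas from the list: a human-operated stop control and the propagation of multi-objective trust targets to the agents under control. All names, and the interface itself, are illustrative assumptions rather than any standardized design.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class TrustObjective:
    """One trust target; characteristics follow the design list above."""
    characteristic: str   # "security", "privacy", "safety", "resiliency", ...
    minimum_level: float  # required assurance level, 0.0 to 1.0

class Agent(Protocol):
    """Hypothetical interface each controlled agent is assumed to implement."""
    def apply_objectives(self, objectives: list[TrustObjective]) -> None: ...
    def halt(self) -> None: ...

class TrustControlPlane:
    """Governs a set of agents: pushes objectives down, and can stop them all."""
    def __init__(self, agents: list[Agent]) -> None:
        self.agents = agents

    def set_objectives(self, objectives: list[TrustObjective]) -> None:
        # Multi-objective control: all targets are propagated together so
        # each agent can trade them off locally.
        for agent in self.agents:
            agent.apply_objectives(objectives)

    def emergency_stop(self) -> None:
        # Human-in-the-loop "stop button": halt every controlled agent.
        for agent in self.agents:
            agent.halt()
```

Keeping emergency_stop as a separate, unconditional path reflects the human-in-the-loop requirement in the list above: human intervention should not depend on the machinery that negotiates objectives.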