Why standards are key to agentic AI security and digital IDs
The more artificial intelligence (AI) can do on its own, the more humans must do to keep it on the right track.
As innovation to introduce autonomous AI agents grows in speed and scale, international standards experts aim to ensure that security, trust and digital identity keep pace.
Prompt-injection attacks and identity spoofing are among the key risks to address. But unintended AI behaviours also present a growing danger as autonomous, multi-agentic AI systems enter sectors like financial services, healthcare, and infrastructure management.
Standards experts at the International Telecommunication Union (ITU) are calling for robust controls and global trust frameworks to keep AI agents on the straight and narrow.
Arnaud Taddei, Chair of the organization’s expert group for security standards, ITU-T Study Group 17, likens the agentic AI standardization challenge to the earlier development of the Open Systems Interconnection (OSI) model as the basis for hardware and software systems that now support communication networks globally.
“The scope of the required standardization work is comparable to that of the OSI model four decades ago,” Taddei says, recalling discussions at ITU’s workshop on AI security at the last AI for Good Global Summit.
Building the trust framework
Study Group 17 is developing models for proactive cyber defence and innovative approaches to trust, trustworthiness and governance. It is also working on standards to establish secure and accountable digital identities for agentic AI.
“A few months from now, we’re going to see a wave of new solutions based on agentic AI,” says Taddei. “We have absolutely no model to secure that in an interoperable manner.”
He stresses the need for an “entire trust framework,” including an “agentic AI trust control plane” to keep humans in the loop.
Current approaches to agentic AI security are limited to bespoke solutions from various providers.
“Now is the good moment to do standardization.”
ITU-T Study Group 17 leads ITU’s work on AI security and is in the process of establishing a new working group, Question 16/17: AI Security, and merging its Questions 3/17 and 10/17 working on digital identity for agentic AI.
An ITU workshop organized by ITU-T Study Group 17 from 30 to 31 March 2026 on trustable and interoperable digital identities will explore:
- Technical approaches for trust frameworks, trust management, security, and interoperability of humans and agentic AI
- Actionable recommendations and consolidated insights to advance standardization work in the field
A follow-up ITU workshop focused on the underlying trust management framework and governance is planned for 3 June 2026 in conjunction with the plenary meeting of ITU-T Study Group 17 from 2 to 11 June 2026.
Learn more about ITU’s work on agentic AI security, trust and digital ID.
Header image credit: ITU