Page 89 - AI Standards for Global Impact: From Governance to Action
concerns, use cases, and requirements in a normative manner, providing generic guidelines
for the industry from which the technical level can correctly scope its standards.
b) Technical: Rigorous AI security testing, hardening, and defence across an agent’s lifecycle.
c) Ecosystem: Secure communication via identity systems and decentralized mechanisms for
trust in multi-agent collaboration.
12.4 Agentic AI identity management
This session focused on the identity management of agentic AI systems, a foundational
element for secure and trustworthy deployment of autonomous AI agents across sectors such
as healthcare, finance, and creative industries. The discussion explored how identity systems
can help determine who AI agents are, whom they act on behalf of, and how to verify their
authorization or delegation.
Key issues discussed:
1) Identity binding and trust infrastructure
– Use of verifiable credentials and cryptographic methods to link an AI agent to an issuer
and to the entity it represents was proposed. It was suggested that trust, more than
just binding, is needed – trust in the credential, and trust in the issuer.
– AI agents will act on behalf of humans, so their identity mechanisms must be traceable,
verifiable, and tied to clear legal responsibilities.
– Human ID systems work relatively well due to societal enforcement and incentives.
However, in open environments, the cost of verifying identity of AI agents must be
weighed against risks such as impersonation and social engineering attacks.
– The distinction between enterprise-level identity systems and open-web agent identity
use cases was highlighted, with a caution against one-size-fits-all solutions.
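The binding and trust ideas above can be sketched in code. This is a hypothetical, simplified illustration: real verifiable credentials use asymmetric signatures (e.g., Ed25519) and standardized formats, whereas here HMAC stands in for the signature so the sketch stays dependency-free. All names (`issue_credential`, `ISSUER_KEY`, the issuer identifiers) are invented for illustration.

```python
import hmac
import hashlib
import json

# Placeholder for the issuer's signing key; a real issuer would hold an
# asymmetric private key and publish the corresponding public key.
ISSUER_KEY = b"issuer-secret-key"

def issue_credential(agent_id: str, represents: str, issuer: str) -> dict:
    """Issuer binds an agent identity to the entity it acts on behalf of."""
    claims = {"agent_id": agent_id, "represents": represents, "issuer": issuer}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credential(credential: dict, trusted_issuers: set) -> bool:
    """Verification needs trust in the issuer, not just a valid binding."""
    if credential["claims"]["issuer"] not in trusted_issuers:
        return False  # a valid signature from an untrusted issuer is not enough
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential("agent-42", "alice@example.org", "trusted-ca")
print(verify_credential(cred, trusted_issuers={"trusted-ca"}))  # True
print(verify_credential(cred, trusted_issuers={"other-ca"}))    # False
```

Note how the verifier rejects a credential from an issuer outside its trust set even when the signature is intact, reflecting the point that trust in the issuer is needed on top of cryptographic binding.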
2) Delegation and permission inheritance
On how agents act on behalf of humans or organizations:
– Legacy delegation practices (e.g., password sharing) should be replaced by scoped,
revocable authorization mechanisms to avoid security risks.
– Agentic AI amplifies all the risks that apply to traditional AI, predictive AI, and
generative AI because greater agency means more autonomy and therefore less
human interaction. These risks must be addressed through both technological means
and through human accountability for testing and outcomes. A robust operational
framework for governance and lifecycle management is required.
– Technical measures are needed to track and constrain delegation chains, particularly
as agents begin interacting with other agents recursively.
– Agent accountability should be enforceable regardless of domain – consumer or
enterprise – and each delegated action should be cryptographically traceable.
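The scoped, revocable delegation described above can be sketched as a chain in which each link may only narrow the permissions it inherited, and revoking any link invalidates everything delegated beneath it. This is a minimal illustrative model, not a protocol; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    delegator: str
    delegate: str
    scopes: frozenset
    parent: "Delegation | None" = None
    revoked: bool = False

    def delegate_to(self, delegate: str, scopes: set) -> "Delegation":
        # Permissions can only shrink: requested scopes are intersected
        # with what this link actually holds.
        narrowed = frozenset(scopes) & self.scopes
        return Delegation(self.delegate, delegate, narrowed, parent=self)

    def is_authorized(self, scope: str) -> bool:
        # Walk the chain back to the root: every ancestor must be
        # unrevoked and must itself have held the requested scope.
        link = self
        while link is not None:
            if link.revoked or scope not in link.scopes:
                return False
            link = link.parent
        return True

root = Delegation("alice", "assistant-agent", frozenset({"read", "write"}))
sub = root.delegate_to("sub-agent", {"read", "admin"})  # "admin" is dropped

print(sub.is_authorized("read"))   # True
print(sub.is_authorized("admin"))  # False: never inherited
root.revoked = True
print(sub.is_authorized("read"))   # False: an ancestor was revoked
```

Unlike password sharing, the delegate here never holds the delegator's full authority, and a single revocation at any point in the chain cuts off all downstream agents, which is what makes recursive agent-to-agent delegation auditable.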
3) Authentication mechanisms and technical protocols
On new protocols such as MCP and A2A for secure agent communication:
– An example of the EU Digital Identity Wallet’s Rulebook concept was shared, where
specific vertical ecosystems (e.g., health, education) define their acceptable credential
sharing policies.
– A need was identified for a decentralized agent registry, with “agent cards” that can
securely represent an agent’s capabilities and origin.
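An agent card could be a self-describing record that an agent publishes to a registry so peers can discover its capabilities and origin before engaging with it. The sketch below is a hypothetical data model; the field names are illustrative and are not taken from any finalized A2A specification.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AgentCard:
    name: str
    origin: str                       # who operates or issued the agent
    endpoint: str                     # where peers can reach it
    capabilities: list = field(default_factory=list)  # what it claims to do

    def to_json(self) -> str:
        # A canonical serialization is what an issuer would sign in a
        # real system so the card's contents can be verified.
        return json.dumps(asdict(self), sort_keys=True)

# Stand-in for a decentralized registry (in practice, a distributed store).
registry = {}

card = AgentCard(
    name="invoice-agent",
    origin="example.org",
    endpoint="https://example.org/agents/invoice",
    capabilities=["parse-invoice", "schedule-payment"],
)
registry[card.name] = card

print("parse-invoice" in registry["invoice-agent"].capabilities)  # True
```

In a full design, the card would carry an issuer signature over its canonical form, connecting registry lookup back to the credential and delegation mechanisms discussed earlier in this session.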