
AI Standards for Global Impact: From Governance to Action



                       human dependencies in multi-agent systems with excessive intervention requests,
                       contributing to decision fatigue or cognitive overload. This can lead to rushed
                       approvals, reduced scrutiny, and ultimately the failure of effective human oversight.
                  23)  Agentic AI amplifies four critical AI-driven risks: misaligned human agency due to
                       the delegation of broad autonomy, security threats (e.g. tool misuse, goal manipulation,
                       or communication poisoning), privacy risks from integrating multi-source data, and
                       ethical challenges (e.g. reduced accountability or compromised fairness).
                  24)  Security risks: AI agents face multiple security risks, such as intent breaking and
                       goal manipulation, memory poisoning, tool misuse, remote code execution (RCE) and
                       code attacks, hijacking of control via prompt injection, identity spoofing and
                       impersonation, and misaligned behaviour (Anthropic/OpenAI).
                  25)  Privacy risks: Due to their autonomous nature, AI agents can collect and retain more
                       data than necessary, potentially without the required consent. In addition, data access
                       controls are critical for the safe deployment of AI agents – otherwise, agents can
                       unintentionally expose sensitive data through misinterpretation of user permissions or
                       by creating unintended pathways for data exfiltration. Integrating different data
                       sources increases the inherent risk of inappropriate data exposure, so data access
                       rights should always be handled with diligence.
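The deny-by-default access control described above can be sketched as a simple permission check an agent platform might run before an agent reads a data source. This is a minimal illustration, not a reference implementation; the scope names and classes (`DataSource`, `AgentContext`, `can_access`) are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class DataSource:
    name: str
    # Permission scopes required to read this source (hypothetical labels).
    required_scopes: frozenset


@dataclass
class AgentContext:
    agent_id: str
    # Scopes explicitly granted by the adopter, never inferred by the agent.
    granted_scopes: frozenset = field(default_factory=frozenset)


def can_access(agent: AgentContext, source: DataSource) -> bool:
    """Deny by default: allow access only if every required scope
    was explicitly granted to this agent."""
    return source.required_scopes <= agent.granted_scopes


# Example: an agent with only "hr:read" cannot read a source that
# additionally requires "pii:read".
hr_records = DataSource("hr_records", frozenset({"pii:read", "hr:read"}))
agent = AgentContext("agent-042", frozenset({"hr:read"}))
assert not can_access(agent, hr_records)  # missing "pii:read" -> denied
```

Keeping the check on the platform side, rather than trusting the agent's own interpretation of user permissions, is what closes the misinterpretation pathway described above.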
                  26)  Key requirements for agentic AI safety: Considering both the platform perspective
                       and the monitoring of individual agents, a major challenge is choosing the right
                       governance design for AI agents. Because the number of operational agents is likely
                       to grow very fast, the traditional approach of registering all AI solutions/agents
                       in one global AI inventory would hardly remain implementable. Rather than registering
                       every AI agent comprehensively (as is done for traditional AI solutions), a lean yet
                       effective way to manage AI agents is needed – one that promotes safe and responsible
                       AI innovation.
                  27)  Metrics: Which data points need to be retrieved from platform providers? In-built
                       inventories of agentic platforms could collect basic data points relevant to the
                       adopter's governance, assessment, and monitoring, such as: agent ID, creation date,
                       agent name, creator, owner, triggers/autonomy, last-modified and deletion dates,
                       connectors, links to data sources, etc.
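The data points listed above can be pictured as one record in a platform's in-built agent inventory. The sketch below is an assumption about how such a record might be typed; the class name, field types, and example values are illustrative, not taken from any platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AgentInventoryRecord:
    """One entry in a platform's in-built agent inventory.

    Field names follow the data points listed in item 27;
    the types and defaults are assumptions for illustration.
    """
    agent_id: str
    agent_name: str
    creator: str
    owner: str
    creation_date: date
    triggers: list = field(default_factory=list)       # e.g. "on_schedule", "on_message"
    last_modified: Optional[date] = None
    deletion_date: Optional[date] = None               # set once the agent is retired
    connectors: list = field(default_factory=list)     # e.g. "email", "erp"
    data_sources: list = field(default_factory=list)   # links to data sources


# Hypothetical example record as a platform might expose it to the adopter.
record = AgentInventoryRecord(
    agent_id="agent-042",
    agent_name="invoice-triage",
    creator="j.doe",
    owner="finance-ops",
    creation_date=date(2025, 1, 15),
    triggers=["on_new_invoice"],
    connectors=["erp"],
    data_sources=["s3://invoices/"],
)
```

A lean schema of this kind is what makes the approach in item 26 workable: the adopter monitors a small, consistent set of fields per agent instead of registering each agent as comprehensively as a traditional AI solution.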
                  28)  Challenges: Consistency of in-built inventories / reciprocity of data sharing (adopter |
                       provider).
                  29)  Multi-agent security standards follow a three-pillar model: trust ecosystems enabled
                       by decentralized identity and consensus mechanisms, human-machine collaborative
                       governance leveraging frameworks such as the NIST AI RMF and ISO/IEC 42001, and
                       end-to-end lifecycle security aligned with policies (e.g., China’s AI content
                       regulations) and industry initiatives (e.g., Ant Group’s runtime security efforts).
                  30)  Agentic AI standards and protocols are key to responsible AI evolution: Interoperability
                       protocols (e.g., A2A, ACR and MCP) help ensure security, privacy, and error management
                       for agent governance. Despite rapid evolution and alignment with existing standards
                       (e.g., ISO/IEC 42001), these protocols need institutional/policy guidance. A
                       collaborative framework integrating fragmented regulations, governance systems, and
                       protocol ecosystems (e.g., the Open Agentic Schema Framework (OASF)) can help build a
                       secure foundation via AI trust, agent inventory, LLM guardrails, monitoring/logging,
                       and communication protocols.


                  12.3  Key takeaways

                  Multi-agent security standards should start with single-agent governance, focusing on three
                  levels:

                  a)   Governance: Human-machine collaboration for risk control, with early consideration of
                       unforeseen agent interactions. This introduces the concept of Enterprise AI Governance,
                       which restricts the scope of AI governance to a place where it is possible to create



