
AI Standards for Global Impact: From Governance to Action



                        purpose of agentification, which requires ceding some control to agents, and accepting
                        some level of inconsistency, in exchange for robustness, efficiency, and quality.
                    5)   LLMs are good at reasoning: they can generate code or API calls from intent-based
                         natural language instructions, and they can, in turn, communicate intent robustly
                         using natural language – a capability that makes multi-agent systems future-proof,
                         because human language is itself future-proof, able to express new concepts that
                         never existed before.
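                         The idea of translating intent into code-checked API calls can be sketched as
                         follows. This is a minimal illustration, not a method prescribed by the report:
                         the model client is a deterministic stub, and the tool names and JSON shape are
                         assumptions chosen for the example.

```python
import json

# Stand-in for a real model call: returns a JSON tool call for the intent.
# A real system would send the prompt to an LLM API instead.
def llm_stub(prompt: str) -> str:
    return json.dumps({"tool": "book_meeting",
                       "args": {"with": "design team", "when": "tomorrow 10:00"}})

# The coded part of the agent enforces a fixed allow-list of tools.
ALLOWED_TOOLS = {"book_meeting", "send_summary"}

def intent_to_call(intent: str) -> dict:
    """Ask the (stubbed) LLM for a tool call, then validate it in code."""
    raw = llm_stub(f"Translate this intent into a JSON tool call: {intent}")
    call = json.loads(raw)
    if call.get("tool") not in ALLOWED_TOOLS:  # predefined rule, applied consistently
        raise ValueError(f"tool not permitted: {call.get('tool')}")
    return call

call = intent_to_call("Set up a meeting with the design team tomorrow morning")
print(call["tool"])  # book_meeting
```

                         The split mirrors the duality discussed here: the LLM interprets open-ended
                         language, while deterministic code decides what is actually allowed to run.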
                    6)   An agent’s responsibilities, therefore, should be divided between those needing an
                         intelligent knowledge-worker-in-a-box and those requiring predefined rules applied
                         consistently. The former can be delegated to the agent’s LLM; the latter should be
                         programmed into the coded part of the agent. A data structure is therefore needed
                         that can be operated upon by agent tools and passed between agents through code,
                         so that it is not necessarily subject to LLM processing. Such a structure can serve
                         as a reliable means of transport for secrets, authorization tokens, agent cards,
                         encryption public keys, and other mechanisms needed to make multi-agent systems
                         secure, consistent, and trustworthy.
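                         One way such a code-only transport structure could look is sketched below. The
                         field names and the envelope class are assumptions for illustration, not a
                         standardized schema; the point is that only the payload is ever exposed to an LLM.

```python
from dataclasses import dataclass

# Illustrative sketch only: an envelope that travels between agents
# through code. Secrets and keys are handled exclusively by code;
# "payload" is the only field an LLM is allowed to see.
@dataclass
class TransportEnvelope:
    sender_agent_card: dict       # machine-readable agent identity ("agent card")
    auth_token: str               # authorization token, code-only
    encryption_public_key: bytes  # peer's public key for encrypted replies
    payload: str                  # the only LLM-visible content

    def llm_view(self) -> str:
        """Expose just the payload; secrets never enter any prompt."""
        return self.payload

env = TransportEnvelope(
    sender_agent_card={"name": "scheduler-agent", "version": "1.0"},
    auth_token="example-token",           # placeholder, not a real secret
    encryption_public_key=b"example-key", # placeholder key bytes
    payload="Please propose three meeting slots.",
)
print(env.llm_view())  # only the payload reaches the model
```

                         Keeping the security-critical fields outside every prompt is what makes the
                         envelope a reliable transport: no hallucination or prompt injection can alter
                         what the LLM never sees.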
















                   Figure 40: Duality of LLM and code
                    7)   Automated methods exist to adjust the degree of agent control dynamically, without
                         rigid rules that would restrict autonomous action entirely.
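                         One such automated method could be a feedback-driven autonomy gate; this is an
                         assumption of the author’s sketch, not a mechanism named in the text. The class
                         name, threshold values, and update steps are all illustrative.

```python
# Hedged sketch: adjust how much an agent may do on its own based on
# observed outcomes, instead of hard-coding per-action rules.
class AutonomyGate:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # risk above this needs human review

    def decide(self, risk: float) -> str:
        """Route an action by its estimated risk score (0.0 to 1.0)."""
        return "autonomous" if risk <= self.threshold else "needs_review"

    def feedback(self, succeeded: bool) -> None:
        # Loosen control after successes, tighten more sharply after failures.
        step = 0.05 if succeeded else -0.10
        self.threshold = min(0.95, max(0.05, self.threshold + step))

gate = AutonomyGate()
print(gate.decide(0.3))   # autonomous (0.3 is under the 0.5 threshold)
gate.feedback(False)      # a failure tightens control to 0.40
print(gate.decide(0.45))  # needs_review (0.45 now exceeds the threshold)
```

                         The agent is never fully blocked: low-risk actions stay autonomous, and the
                         boundary moves with evidence rather than being fixed in advance.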
                    8)   In summary, when creating multi-agent systems, the division of labour between the
                         LLM and coded parts of agents should be kept in mind, so that people retain agency
                         over their agents.
                    9)   The relationship between LLMs and AI agents is similar to that between an operating
                         system and application programs: LLMs serve as the operating system, while AI agents
                         are like programs running on it.
                   10)  Security is a make-or-break factor for the future of agent adoption. Standards can play
                        a vital role here with practical guidance and serve as the foundation for trustworthy
                        deployment.
                    11)  The challenges faced by AI agents can be analysed along four dimensions. In the
                         reliability dimension, the root cause of AI agents’ reliability issues is model
                         hallucination, and mitigating this problem is extremely challenging. In the safety
                         and security dimension, AI agents face emerging attacks (e.g., prompt injection),
                         while traditional security risks (e.g., data breaches) grow more severe as AI agents
                         proliferate. In the interoperability dimension, agent-to-agent (A2A) communication
                         demands unified interface standards, yet the development of multi-agent systems
                         still lacks normative frameworks for interoperability. In the operations and
                         maintenance dimension, interactions between multiple agents significantly increase
                         system complexity, reducing stability and greatly raising the difficulty of
                         operations and maintenance.
                    12)  To address these challenges, ITU-T has already taken action, issuing standards and
                         initiating projects such as defining general AI agent capabilities and evaluation
                         methods (ITU-T F.748.46) and systematically analysing security risks across agents’
                         perception-planning-decision-action workflows to propose lifecycle-wide protection
                         requirements (ITU-T SG17).





