ITU-T work programme

[2025-2028] : [SG 17] : [WP4/17]

Work group: Q16/17
Title: Artificial Intelligence (AI) security
Description: Artificial Intelligence (AI) and Machine Learning (ML) are rapidly transforming telecommunication and ICT systems, bringing unprecedented efficiency and capability. However, this integration creates complex and evolving security challenges, impacting system integrity, data confidentiality, operational continuity, and public trust. Misuse, unintended behaviour, and systemic vulnerabilities demand urgent attention.

Innovative AI/ML paradigms such as agentic AI, physical AI, multi-agent systems, embedded AI and embodied AI systems (robots, drones) are reshaping ICT operations and autonomous decision-making. These advances create unique threat surfaces requiring dedicated, AI-native security strategies. In particular, embodied AI introduces physical-world risks through autonomous interaction, while embedded AI presents challenges in resource-constrained environments where security must remain lightweight, resilient and context-aware.

While safeguards are essential in the use of AI and ML, they are best addressed as a derived attribute, achieved through integrated approaches to security, dependability, and risk management across the AI/ML lifecycle. This framing supports SG17's mandate and ensures that risks from AI system failures, misuse, or adversarial exploitation are systematically mitigated.

AI agents can sense and respond to their environment, taking actions that drive toward defined goals. Agentic AI systems can operate autonomously with goal-directed behaviour, perceiving their surroundings, reasoning about conditions, planning strategies, and proactively executing actions to achieve objectives. They can minimize human intervention and coordinate seamlessly across multiple tools, agents, and data sources. Agent-to-agent communication protocols can enable two or more AI agents to exchange information, coordinate actions, and negotiate decisions to accomplish individual or shared goals in distributed or multi-agent environments.
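The agent-to-agent exchange described above can be sketched in miniature. This is a hypothetical illustration only: one agent proposes an action, the other evaluates it against a local constraint and replies with an accept or reject decision. The message fields and agent roles are invented for illustration; no specific standardized protocol is implied.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    sender: str
    receiver: str
    performative: str      # e.g. "propose", "accept", "reject" (illustrative vocabulary)
    content: dict = field(default_factory=dict)

class Agent:
    def __init__(self, name: str, budget: int):
        self.name = name
        self.budget = budget   # a toy resource constraint standing in for local policy

    def propose(self, other: "Agent", cost: int) -> AgentMessage:
        # Propose an action with a given cost to another agent.
        return AgentMessage(self.name, other.name, "propose", {"cost": cost})

    def respond(self, msg: AgentMessage) -> AgentMessage:
        # Accept the proposal only if it fits within this agent's budget.
        ok = msg.content["cost"] <= self.budget
        verb = "accept" if ok else "reject"
        return AgentMessage(self.name, msg.sender, verb, msg.content)

planner, executor = Agent("planner", budget=10), Agent("executor", budget=5)
print(executor.respond(planner.propose(executor, cost=3)).performative)  # -> accept
print(executor.respond(planner.propose(executor, cost=8)).performative)  # -> reject
```

The negotiation step (counter-proposals, shared-goal reconciliation) is where real agent-to-agent protocols add the complexity this Question would need to secure.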
Open model communication protocols can standardize how AI models and agents interact with external tools, services, and data sources, ensuring interoperability across ICT ecosystems.

To further these goals, this Question studies how AI can bolster security measures, how secure AI systems and AI-based applications can be achieved in support of telecommunications/ICTs, and how to counteract the growing threat landscape fuelled by AI advances. It also guides the development of a dynamic AI/ML security roadmap, produces practical toolkits for implementation and evaluation, and promotes harmonization across ITU-T Study Groups in relation to telecommunications/ICTs, as well as alignment with external standards organizations. It also supports the specification of security controls and best practices to strengthen trustworthiness and foster innovation.

To support the secure deployment of agentic AI systems, this Question explores an OSI-like architectural model for AI, featuring a dedicated agentic AI security and trust control plane. This plane enables dynamic, context-aware authorization and governance of AI actions, helping ensure safe, transparent operation aligned with human-defined objectives and policy constraints.

This Question also considers the four complementary dimensions of AI security:
– Security of AI: protecting AI systems from threats such as model poisoning, adversarial attacks, and unauthorized access.
– Security through AI: leveraging AI technologies to enhance cybersecurity capabilities, including threat detection, response, and risk assessment.
– Security against AI misuse and abuse: addressing risks posed by adversarial or criminal exploitation of AI technologies or AI-enabled cyber-attacks.
– Security in AI-enabled applications: focusing on the emerging security risks and vulnerabilities that arise when AI technologies are integrated into specific sectors, such as healthcare, finance, transportation, and manufacturing, where domain-specific threats may be introduced or amplified.

A lifecycle-based and holistic approach is emphasized, covering stages such as model design, training, evaluation, deployment, operation, and retirement. At each phase, tailored security controls and mitigations should be applied. Roles and responsibilities of stakeholders, including AI developers, operators, service providers, and end users, should be clearly defined, particularly regarding the protection of personally identifiable information (PII) in AI environments.
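The security and trust control plane idea can be illustrated with a minimal sketch: before an agent's action executes, a policy layer checks it against human-defined constraints and the current context, and allows it only if every policy agrees. The class, policy functions, and context fields below are all hypothetical; the Question's architectural model is not yet specified in this form.

```python
from typing import Callable

# A policy takes (action, context) and returns True if the action is permitted.
Policy = Callable[[str, dict], bool]

class TrustControlPlane:
    """Gates agent actions against a set of human-defined policies (illustrative)."""
    def __init__(self):
        self.policies: list[Policy] = []

    def add_policy(self, p: Policy):
        self.policies.append(p)

    def authorize(self, action: str, context: dict) -> bool:
        # An action is allowed only if every registered policy permits it.
        return all(p(action, context) for p in self.policies)

plane = TrustControlPlane()
# Example human-defined constraints (invented for illustration):
plane.add_policy(lambda a, c: a != "delete_data" or c.get("human_approved", False))
plane.add_policy(lambda a, c: c.get("risk_score", 1.0) < 0.8)

print(plane.authorize("read_logs", {"risk_score": 0.2}))                             # True
print(plane.authorize("delete_data", {"risk_score": 0.2}))                           # False
print(plane.authorize("delete_data", {"risk_score": 0.2, "human_approved": True}))   # True
```

The "context-aware" part is the key design point: the same action can be permitted or denied depending on runtime conditions such as risk scores or explicit human approval, rather than on a static allow-list.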
Comment: -
Co-rapporteur: Mr. Oscar AVELLANEDA
Co-rapporteur: Mr. Xiongwei JIA
Co-rapporteur: Mr. Jae Hoon NAH
Associate rapporteur: Ms. Naying HU
Associate rapporteur: Mr. Keundug PARK
Associate rapporteur: Mr. Marcos TZANNES