Work item:
X.LLMCC

Subject/title:
Guidelines for large language model data security based on confidential computing

Status:
Under study

Approval process:
TAP

Type of work item:
Recommendation

Version:
New

Equivalent number:
-

Timing:
2027-12 (Medium priority)

Liaison:
ITU-T SG21, ISO/IEC JTC 1/SC 27, IETF TEEP, IETF RATS

Supporting members:
Alibaba China Co., Ltd., CAICT, Vivo Mobile Communication, Guangdong OPPO Mobile Telecommunications, ZTE Corporation

Summary:
The widespread deployment of Large Language Model (LLM) services, particularly in cloud environments, faces escalating security threats such as adversarial attacks, data leakage during computation, and model tampering. Traditional security measures inadequately address risks to "data in use", while fragmented implementations of Confidential Computing technologies lack cross-platform interoperability and verifiable trust in multi-party collaborations.
This Recommendation establishes harmonized guidelines for securing Large Language Models through Confidential Computing, focusing on system-layer isolation, encrypted model execution, and algorithm integrity. It addresses critical gaps by unifying implementations across vendors, defining attestation protocols for verifiable trust, and providing layer-specific safeguards for hardware, data, and models.
Applicable to AI system developers, model providers, and cloud service providers, the Recommendation standardizes performance-optimized protections for LLM workflows, without overlapping with general AI security standards, to enable secure LLM adoption at scale.
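To illustrate the kind of attestation-based trust the summary refers to, the sketch below shows a minimal challenge-response check in the spirit of the IETF RATS model (Liaison list above): an attester measures an enclave image and a verifier accepts it only if the evidence is fresh, authentic, and matches a known-good reference value. All names and values here are hypothetical, and a shared HMAC key stands in for the hardware-rooted signing key that real Confidential Computing platforms (e.g. TPM or TEE quotes) would use; this is not a definition of the protocols the future Recommendation would specify.

```python
# Illustrative RATS-style attestation check (hypothetical values throughout).
# Real deployments use hardware-rooted keys and signed quotes, not a shared HMAC key.
import hashlib
import hmac
import secrets

# Hypothetical reference value: measurement of an approved LLM enclave image.
REFERENCE_MEASUREMENTS = {
    hashlib.sha256(b"approved-llm-enclave-v1").hexdigest(),
}

# Hypothetical shared key standing in for a hardware-rooted attestation key.
ATTESTATION_KEY = b"demo-attestation-key"


def generate_evidence(enclave_image: bytes, nonce: bytes) -> dict:
    """Attester side: measure the enclave image and authenticate (measurement, nonce)."""
    measurement = hashlib.sha256(enclave_image).hexdigest()
    mac = hmac.new(ATTESTATION_KEY, measurement.encode() + nonce, hashlib.sha256)
    return {"measurement": measurement, "nonce": nonce, "mac": mac.hexdigest()}


def verify_evidence(evidence: dict, expected_nonce: bytes) -> bool:
    """Verifier side: check freshness, authenticity, then the reference value."""
    if evidence["nonce"] != expected_nonce:
        return False  # stale or replayed evidence
    mac = hmac.new(ATTESTATION_KEY,
                   evidence["measurement"].encode() + evidence["nonce"],
                   hashlib.sha256)
    if not hmac.compare_digest(mac.hexdigest(), evidence["mac"]):
        return False  # evidence not produced by the trusted key
    return evidence["measurement"] in REFERENCE_MEASUREMENTS


nonce = secrets.token_bytes(16)
good = generate_evidence(b"approved-llm-enclave-v1", nonce)
bad = generate_evidence(b"tampered-llm-enclave", nonce)
print(verify_evidence(good, nonce))  # True: measurement matches the reference
print(verify_evidence(bad, nonce))   # False: a tampered image is rejected
```

The nonce prevents replay of old evidence, and the reference-value set is where vendor-neutral, interoperable measurements would matter most; unifying how such values are produced and compared across platforms is one of the gaps the work item targets.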
Comment:
-

Reference(s):

Historic references:

Contact(s):

ITU-T A.5 justification(s):

First registration in the WP:
2025-12-11 13:09:44

Last update:
2025-12-11 13:13:58