Work item: X.GenAI-FT
Subject/title: Security guidelines for fine-tuning generative AI models
Status: Under study
Approval process: AAP
Type of work item: Recommendation
Version: New
Equivalent number: -
Timing: 2027-03 (Medium priority)
Liaison: -
Supporting members: -
Summary:
Generative AI (GenAI) models offer a broad range of capabilities, including question answering, content generation, code generation, and summarization. Organizations are increasingly looking to integrate fine-tuned GenAI models into their business processes to enhance performance or address specific tasks. Similarly, service providers aim to embed fine-tuned GenAI models into service workflows to improve customer experience or deliver new services.
Fine-tuning a GenAI model involves adapting it to meet the unique requirements of particular business or service contexts, thereby ensuring optimized performance and targeted results. However, this process introduces a range of security challenges. These include harmful example demonstration attacks, identity shifting attacks, removal of RLHF (reinforcement learning from human feedback) protection attacks, model supply chain attacks, fine-tuned model skewing attacks, and bias amplification attacks.
This Recommendation aims to provide security guidelines for fine-tuning GenAI models to address these security challenges and ensure the secure development and deployment of such models.
The main clauses of this Recommendation are as follows:
- Process of fine-tuning GenAI models
- Security threats to fine-tuning GenAI models
- Security requirements for fine-tuning GenAI models
- Security guidelines for fine-tuning GenAI models
Comment: -
Reference(s): -
Historic references: -
Contact(s): -
ITU-T A.5 justification(s): -
First registration in the WP: 2025-04-17 14:49:02
Last update: 2025-04-17 14:56:55