Exploring the benefits of differentially private pre-training and fine-tuning for table transformers

Authors: Xilong Wang, Pin-Yu Chen
Status: Final
Date of publication: 15 September 2025
Published in: ITU Journal on Future and Evolving Technologies, Volume 6 (2025), Issue 3, Pages 237-246
Article DOI: https://doi.org/10.52953/LPXP4923
Abstract:
For machine learning with tabular data, the table transformer (TabTransformer) is a state-of-the-art neural network model, while Differential Privacy (DP) is an essential component for ensuring data privacy. In this paper, we explore the benefits of combining the two in a transfer learning scenario: differentially private pre-training and fine-tuning of TabTransformers with a variety of Parameter-Efficient Fine-Tuning (PEFT) methods, including adapter, LoRA, and prompt tuning. Our extensive experiments on four ACS datasets with different configurations show that these PEFT methods outperform traditional approaches in terms of downstream-task accuracy and the number of trainable parameters, thus achieving an improved trade-off among parameter efficiency, privacy, and accuracy.
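
To make the setting described in the abstract concrete, the sketch below (not taken from the paper) shows one plausible way to fine-tune a frozen, pre-trained tabular encoder under DP-SGD with a LoRA-style adapter, using PyTorch and Opacus. The toy encoder, the synthetic data, and every hyperparameter (rank, noise multiplier, clipping norm, delta) are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine


class LoRALinear(nn.Module):
    """A frozen pre-trained Linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep the pre-trained weight frozen
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)        # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


# Toy "pre-trained" encoder standing in for a TabTransformer backbone (assumption).
num_columns, num_categories, emb_dim, num_classes = 4, 32, 16, 2
encoder = nn.Sequential(
    nn.Embedding(num_categories, emb_dim),
    nn.Flatten(),
    nn.Linear(num_columns * emb_dim, 64),
    nn.ReLU(),
)
for p in encoder.parameters():                    # backbone stays frozen during fine-tuning
    p.requires_grad = False

head = LoRALinear(nn.Linear(64, num_classes), rank=4)
model = nn.Sequential(encoder, head)

# Synthetic stand-in for a tabular dataset with four categorical columns (assumption).
x = torch.randint(0, num_categories, (512, num_columns))
y = torch.randint(0, num_classes, (512,))
loader = DataLoader(TensorDataset(x, y), batch_size=64)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.1
)

# DP-SGD via Opacus: per-sample gradient clipping plus Gaussian noise,
# applied only to the small set of trainable adapter parameters.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # illustrative value, not from the paper
    max_grad_norm=1.0,      # illustrative value, not from the paper
)

criterion = nn.CrossEntropyLoss()
model.train()
for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

print(f"approximate privacy spent: epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f}")

The design point this sketch illustrates is the one the abstract emphasizes: only the low-rank adapter parameters receive gradients, so DP-SGD's per-sample clipping and noise act on a small trainable subset rather than on the full backbone, which is what makes the parameter-efficiency/privacy/accuracy trade-off favorable.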

Keywords: Differential privacy, table transformer, transfer learning
Rights: © International Telecommunication Union, available under the CC BY-NC-ND 3.0 IGO license.