The SkipSponge attack: Sponge weight poisoning of deep neural networks
Authors: Jona te Lintelo, Stefanos Koffas, Stjepan Picek
Status: Final
Date of publication: 15 September 2025
Published in: ITU Journal on Future and Evolving Technologies, Volume 6 (2025), Issue 3, Pages 247-263
Article DOI: https://doi.org/10.52953/XKBU4341
Abstract: Sponge attacks aim to increase the energy consumption and computation time of neural networks. In this work, we present a novel sponge attack called SkipSponge. SkipSponge is the first sponge attack that is performed directly on the parameters of a pretrained model using only a few data samples. Our experiments show that SkipSponge can successfully increase the energy consumption of image classification models, GANs, and autoencoders, requiring fewer samples than the state-of-the-art sponge attack (Sponge Poisoning). We show that poisoning defenses are ineffective unless they are adjusted specifically to defend against SkipSponge (i.e., unless they decrease the bias values of the targeted layers), and that SkipSponge is more effective on GANs and autoencoders than Sponge Poisoning. Additionally, SkipSponge is stealthy, as it does not require significant changes to the victim model's parameters. Our experiments indicate that SkipSponge can be performed even when the attacker has access to less than 1% of the entire training dataset, and achieves an energy increase of up to 13%.
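The sketch below is an illustrative assumption, not the authors' published algorithm: it mimics the idea stated in the abstract (directly perturbing the parameters of a pretrained model using only a few samples, while keeping accuracy close to the original so the change stays stealthy) by greedily increasing layer biases, which leaves fewer activations zeroed by ReLU and thus raises the cost on sparsity-exploiting hardware. The function name, step size, layer selection, and the 2% accuracy budget are all hypothetical choices for illustration.

```python
# Illustrative sketch only; assumptions, not the paper's exact SkipSponge procedure.
import torch
import torch.nn as nn

def sponge_bias_sketch(model, samples, labels, step=0.05, max_steps=20, max_acc_drop=0.02):
    """Greedily increase biases of conv/linear layers using only a few labeled samples,
    reverting any step that drops accuracy by more than an assumed stealthiness budget."""
    model.eval()

    def accuracy():
        with torch.no_grad():
            return (model(samples).argmax(dim=1) == labels).float().mean().item()

    base_acc = accuracy()
    for layer in [m for m in model.modules()
                  if isinstance(m, (nn.Conv2d, nn.Linear)) and m.bias is not None]:
        for _ in range(max_steps):
            old_bias = layer.bias.detach().clone()
            with torch.no_grad():
                # Push biases upward by a fraction of their typical magnitude,
                # so fewer post-ReLU activations are zero (less sparsity, more energy).
                layer.bias += step * layer.bias.abs().mean().clamp(min=1e-3)
            if accuracy() < base_acc - max_acc_drop:
                with torch.no_grad():
                    layer.bias.copy_(old_bias)  # revert: accuracy budget exceeded
                break
    return model
```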
Keywords: Autoencoder, availability attack, GAN, image classification, sponge poisoning
Rights: © International Telecommunication Union, available under the CC BY-NC-ND 3.0 IGO license.
ITEM DETAIL | ARTICLE | PRICE |
---|---|---|---
ENGLISH | Full article (PDF) | Free of charge | DOWNLOAD