DOI: 10.3390/math11163613

ADDA: An Adversarial Direction-Guided Decision-Based Attack via Multiple Surrogate Models

Wanman Li, Xiaozhang Liu
  • General Mathematics
  • Engineering (miscellaneous)
  • Computer Science (miscellaneous)

Over the past decade, Convolutional Neural Networks (CNNs) have been extensively deployed in security-critical areas; however, the security of CNN models is threatened by adversarial attacks. Decision-based adversarial attacks, wherein an attacker relies solely on the final output label of the target model to craft adversarial examples, are the most challenging yet practical adversarial attacks. Existing decision-based attacks, however, generally suffer from poor query efficiency or low attack success rates, especially for targeted attacks. To address these issues, we propose a query-efficient Adversarial Direction-guided Decision-based Attack (ADDA), which combines transfer-based priors with single-query candidate evaluation. The transfer-based priors, provided by the gradients of multiple different surrogate models, suggest the most promising search directions for generating adversarial examples. Query consumption during an ADDA attack stems mainly from a single query per candidate adversarial example, which substantially reduces the total number of queries. Experimental results on several ImageNet classifiers, under both l∞ and l2 threat models, demonstrate that our proposed approach overwhelmingly outperforms existing state-of-the-art decision-based attacks in terms of both query efficiency and attack success rate. We also present case studies of ADDA against a real-world API, showing that it successfully fools the Google Cloud Vision API after only a few queries.
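The core loop described above — averaging surrogate-model gradients into a single search direction and spending exactly one hard-label query per candidate — can be illustrated with a minimal toy sketch. This is not the paper's implementation: the target here is a hypothetical hard-label linear classifier, the "surrogates" are noisy copies of it whose gradients (their weight vectors) stand in for the gradients of real surrogate CNNs, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hard-label target: the attacker observes only the predicted class.
w_target = np.array([1.0, -2.0, 0.5])

def target_label(x):
    return int(w_target @ x > 0)

# Two white-box surrogate models: noisy copies of the target weights.
# For a linear model, the gradient of the score w.r.t. the input is the
# weight vector itself, standing in for surrogate CNN gradients.
surrogates = [w_target + rng.normal(0, 0.3, 3) for _ in range(2)]

def attack(x0, step=0.05, max_queries=200):
    """Follow the averaged surrogate gradient; one target query per
    candidate, stopping as soon as the hard label flips."""
    y0 = target_label(x0)                 # 1 query for the original label
    queries = 1
    # Direction that decreases the surrogate margin of the original class.
    sign = -1.0 if y0 == 1 else 1.0
    direction = sign * np.mean(surrogates, axis=0)
    direction /= np.linalg.norm(direction)
    x = x0.copy()
    while queries < max_queries:
        x = x + step * direction          # propose a candidate
        queries += 1                      # single query to evaluate it
        if target_label(x) != y0:
            return x, queries             # adversarial example found
    return None, queries

x0 = np.array([2.0, -1.0, 0.3])           # classified as 1 by the target
adv, used = attack(x0)
print(target_label(x0), adv is not None, used)
```

Because every candidate costs exactly one query, the query budget grows linearly with the number of steps taken along the prior direction; the paper's contribution is choosing that direction well enough (via multiple surrogates) that very few steps are needed.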
