DOI: 10.1287/opre.2022.0162

Adversarial Robustness for Latent Models: Revisiting the Robust-Standard Accuracies Tradeoff

Adel Javanmard, Mohammad Mehrabi
  • Management Science and Operations Research
  • Computer Science Applications

Low-dimensional structure in the data can resolve the conflict between adversarial robustness and accuracy for machine learning systems.

Modern machine learning systems have demonstrated breakthrough performance in a multitude of applications. However, they are known to be highly vulnerable to small perturbations of the input data, known as adversarial attacks. There are many well-documented examples of such behavior; for instance, small perturbations of an image that are imperceptible to a human can significantly degrade the performance of modern classifiers. Adversarial training has been put forward as a way to improve the robustness of learning algorithms to adversarial attacks. However, this benefit often comes at the cost of decreased accuracy on natural, unperturbed inputs, pointing to a potential conflict between adversarial robustness and standard accuracy. In “Adversarial robustness for latent models: Revisiting the robust-standard accuracies tradeoff,” Adel Javanmard and Mohammad Mehrabi develop a theory showing that when the data enjoys low-dimensional structure, it is possible to train models that are nearly optimal with respect to both the standard and robust accuracies.
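To make the notion of adversarial training concrete, the following is a minimal, generic sketch of one training step that minimizes the loss on adversarially perturbed inputs, using a single gradient-sign (FGSM-style) attack within an l-infinity budget. This is an illustration of the general technique, not the authors' specific procedure; the model, optimizer, batch, and the attack budget `epsilon` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Return a copy of x perturbed by one gradient-sign step of size epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # Move each input in the direction that most increases the loss,
    # staying within an l_inf ball of radius epsilon around the original input.
    return (x_adv + epsilon * grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon):
    """One step of adversarial training: fit the model on perturbed inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on such worst-case perturbed inputs typically improves robust accuracy, while the accuracy on clean inputs can drop, which is the tradeoff the paper revisits.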
