DOI: 10.1145/3617593

Adopting Two Supervisors for Efficient Use of Large-Scale Remote Deep Neural Networks

Michael Weiss, Paolo Tonella

Recent decades have seen the rise of large-scale Deep Neural Networks (DNNs) that achieve human-competitive performance on a variety of artificial intelligence tasks. Often consisting of hundreds of millions, if not hundreds of billions, of parameters, these DNNs are too large to be deployed to, or run efficiently on, resource-constrained devices such as mobile phones or IoT microcontrollers. Systems relying on large-scale DNNs thus have to call the corresponding model over the network, leading to substantial costs for hosting and running the large-scale remote model, costs which are often charged on a per-use basis. In this paper, we propose BiSupervised, a novel architecture in which, before relying on a large remote DNN, a system attempts to make a prediction with a small-scale local model. A DNN supervisor monitors this prediction process and identifies easy inputs for which the local prediction can be trusted. For these inputs, the remote model does not have to be invoked, thus saving costs while only marginally impacting the overall system accuracy. Our architecture furthermore foresees a second supervisor that monitors the remote predictions and identifies inputs for which not even these can be trusted, allowing the system to raise an exception or run a fallback strategy instead. We evaluate the cost savings and the ability to detect incorrectly predicted inputs on four diverse case studies: IMDB movie review sentiment classification, GitHub issue triaging, ImageNet image classification, and SQuADv2 free-text question answering. In all four case studies, we find that BiSupervised reduces cost by at least 30% while maintaining similar system-level prediction performance. In two case studies (IMDB and SQuADv2), we find that BiSupervised even achieves higher system-level accuracy, at reduced cost, compared to a remote-only model. Furthermore, measurements taken on our setup indicate a large potential of BiSupervised to reduce average prediction latency.
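The decision flow this abstract describes can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the names predict_local, predict_remote, and Prediction, as well as the two thresholds, are hypothetical placeholders, and each supervisor is abstracted as a per-prediction confidence score.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Prediction:
    label: Any
    confidence: float  # supervisor score: higher means more trustworthy

def bisupervised_predict(
    x: Any,
    predict_local: Callable[[Any], Prediction],
    predict_remote: Callable[[Any], Prediction],
    local_threshold: float,
    remote_threshold: float,
) -> Optional[Any]:
    """Sketch of the BiSupervised two-supervisor decision flow."""
    # First supervisor: accept the cheap local prediction on "easy"
    # inputs, avoiding a (per-use billed) remote invocation.
    local = predict_local(x)
    if local.confidence >= local_threshold:
        return local.label

    # Otherwise fall back to the large-scale remote model.
    remote = predict_remote(x)

    # Second supervisor: if not even the remote prediction can be
    # trusted, signal the caller to raise an exception or run a
    # fallback strategy instead.
    if remote.confidence >= remote_threshold:
        return remote.label
    return None
```

In this sketch, the threshold values control the trade-off the paper evaluates: a higher local_threshold routes more inputs to the remote model (higher cost, potentially higher accuracy), while a lower one saves more remote invocations at the risk of trusting incorrect local predictions.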
