SlothSpeech: Denial-of-service Attack Against Speech Recognition Models

Abstract

Deep learning (DL) models are now widely used for speech-related tasks, including automatic speech recognition (ASR). As ASR is deployed in real-time scenarios, it is important that ASR models remain efficient under minor perturbations to the input; evaluating the efficiency robustness of ASR models is therefore essential. We show that popular ASR models such as Speech2Text and Whisper perform input-dependent dynamic computation, so their efficiency varies across inputs. In this work, we propose SlothSpeech, a denial-of-service attack against ASR models that exploits this dynamic behavior. SlothSpeech uses the probability distribution of the output text tokens to craft perturbations to the audio that decrease the efficiency of the ASR model. We find that SlothSpeech-generated inputs can increase latency by up to 40x relative to benign input.

Publication
In the 24th Interspeech Conference.