DeepPerform: An Efficient Approach for Performance Testing of Resource-Constrained Neural Networks

Abstract

Today, an increasing number of Adaptive Deep Neural Networks (AdNNs) are being used to make decisions on resource-constrained embedded devices. We observe that, similar to traditional software, redundant computations exist in AdNNs, resulting in considerable performance degradation. Because this degradation depends on the input workload, we refer to it as an input-dependent performance bottleneck (IDPB). To ensure an AdNN satisfies the performance requirements of real-time applications, it is essential to conduct performance testing that detects IDPBs in the AdNN. Existing neural network testing methods are primarily concerned with correctness testing and do not address performance testing. To fill this gap, we propose DeepPerform, a scalable approach for generating test samples that detect IDPBs in AdNNs. We first show how the problem of generating performance test samples that expose IDPBs can be formulated as an optimization problem. We then show how DeepPerform solves this optimization problem efficiently by learning and estimating the distribution of AdNNs' computational consumption. We evaluate DeepPerform on five popular AdNN models across three widely used datasets. The results show that DeepPerform generates test samples that cause more severe performance degradation than baseline methods (increasing FLOPs by up to 552%). Furthermore, DeepPerform is substantially more efficient than the baseline methods at generating test inputs (runtime overhead: only 6–10 milliseconds per input).
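To make the optimization formulation concrete, below is a minimal, self-contained sketch (not the authors' implementation) of the core idea: perturb a seed input so that an adaptive, early-exit network spends more computation on it. The toy model, the relative cost constants, and the differentiable `expected_cost` surrogate are all illustrative assumptions introduced here, not part of DeepPerform itself.

```python
# Hypothetical illustration: gradient-based search for an input that inflates
# the computational consumption of a toy early-exit network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEarlyExitNet(nn.Module):
    """Two-stage network that exits early when the first classifier is confident."""
    def __init__(self, dim=32, classes=10):
        super().__init__()
        self.stage1 = nn.Linear(dim, 64)
        self.exit1 = nn.Linear(64, classes)
        self.stage2 = nn.Linear(64, classes)  # only reached on low-confidence inputs

    def expected_cost(self, x):
        """Differentiable surrogate of computational consumption:
        stage-2 cost is weighted by the probability of NOT exiting early."""
        h = F.relu(self.stage1(x))
        p_exit = F.softmax(self.exit1(h), dim=-1).max(dim=-1).values
        cost_stage1, cost_stage2 = 1.0, 4.0  # illustrative relative FLOP costs
        return cost_stage1 + (1.0 - p_exit) * cost_stage2

torch.manual_seed(0)
model = ToyEarlyExitNet().eval()
x0 = torch.randn(1, 32)                      # seed input
delta = torch.zeros_like(x0, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
eps = 0.5                                    # perturbation budget (L-inf ball)

for _ in range(100):
    opt.zero_grad()
    cost = model.expected_cost(x0 + delta)   # maximize cost => minimize its negative
    (-cost).mean().backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)              # keep the test input near the seed

print(f"seed cost:      {model.expected_cost(x0).item():.3f}")
print(f"perturbed cost: {model.expected_cost(x0 + delta).item():.3f}")
```

In this sketch the search optimizes each input directly; DeepPerform instead learns to estimate the distribution of computational consumption, which is what lets it emit new test inputs in milliseconds rather than running a per-input optimization loop.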

Publication
In the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE 2022).