Deep Learning Systems (DLSs) have seen widespread adoption in various real-time applications. However, deploying these systems in resource-constrained environments, such as mobile and IoT devices, introduces critical efficiency challenges. To mitigate computational overhead, Dynamic Deep Learning Systems (DDLSs) adapt inference computation to input complexity. While these systems improve efficiency, they also introduce a novel attack surface: efficiency adversarial attacks, which strategically exploit dynamic mechanisms to degrade system performance. This paper systematically explores efficiency robustness in DDLSs, presenting the first comprehensive taxonomy of efficiency attacks. We categorize these attacks by the dynamic behavior they exploit: (i) attacks on dynamic computations per inference, (ii) attacks on dynamic inference iterations, and (iii) attacks on dynamic output production for downstream tasks. Through an in-depth evaluation, we analyze adversarial strategies that target DDLS efficiency and identify key challenges in securing these systems. In addition, we examine existing defense mechanisms, demonstrating their limitations and the need for improved robustness. Our findings show that while efficiency attacks are becoming increasingly prevalent, current defenses remain inadequate, necessitating novel mitigation strategies to secure future adaptive DDLSs.