Generating Adversarial Examples with Program Transformations: Practical Attacks on Machine Learners


Learning-based malware detectors may be erroneous due to two inherent limitations. First, they lack differentiability: the selected features may not reflect essential differences between malware and benign apps. Second, they lack comprehensiveness: the underlying machine learning (ML) models are usually built from prior knowledge of existing malware (i.e., the training dataset), so malware can evolve to evade detection. There is thus a strong need for an automated framework that helps security analysts detect errors in learning-based malware detection systems. Existing techniques for generating adversarial samples for learning-based systems (which take images as inputs) mutate features directly in the feature-vector space. Such techniques are infeasible for generating adversarial samples (e.g., evasive malware) for malware detection systems, because the synthesized mutations may break the inherent constraints posed by the code structures of the malware, causing either crashes or malfunctioning of the malicious payloads.
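The last point can be illustrated with a toy sketch. This is not the paper's method; the feature names and the single structural constraint below are hypothetical, chosen only to show why flipping feature-vector entries (as is done for image classifiers) can yield a vector that no runnable app satisfies:

```python
# Hypothetical sketch: a structural constraint that any real app's
# feature vector must satisfy. Here: an app that calls a camera API
# must also declare the camera permission. (Illustrative only.)
def violates_code_constraints(features):
    return features["calls_camera_api"] and not features["perm_camera"]

# A (valid) malware feature vector: the constraint holds.
malware = {"calls_camera_api": True, "perm_camera": True, "sends_sms": True}

# A naive feature-space mutation flips a "suspicious" bit to evade
# the classifier, ignoring code structure...
evasive = dict(malware)
evasive["perm_camera"] = False

# ...and the mutated vector no longer corresponds to a working app:
# the camera call would crash without the permission.
print(violates_code_constraints(malware))  # False: original is consistent
print(violates_code_constraints(evasive))  # True: mutation is infeasible
```

A program-transformation approach would instead edit the app's code with semantics-preserving changes, so every mutated sample remains a runnable app by construction.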

Bloomington, IN