Machine learning antimalware models are widely used in security to detect novel malware samples. However, these models have blind spots. Tools that systematically discover input perturbations to evade machine learning models have been repeatedly demonstrated in domains such as computer vision. Structured inputs such as portable executable (PE) files present a challenge because perturbations may break the file format or the functionality of the file.
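As an illustration of a perturbation that changes a PE file's static byte content without touching anything the loader executes, consider appending data past the end of the file (the "overlay"). This sketch is an assumption-laden example, not part of the contest rules: the function name and padding bytes are illustrative, and the stand-in header below is not a real executable.

```python
def append_overlay(pe_bytes: bytes, payload: bytes) -> bytes:
    """Return a perturbed copy of a PE file with extra trailing bytes.

    Headers, sections, and the entry point are untouched, so the binary
    still loads and runs; only raw-byte static features (file size, byte
    histograms, extracted strings) are affected.
    """
    return pe_bytes + payload

# Illustrative stand-in for a PE file (real samples start with an "MZ" stub).
original = b"MZ" + b"\x00" * 62
perturbed = append_overlay(original, b"\x20" * 1024)  # 1 KiB of padding
assert perturbed[: len(original)] == original  # loaded image is unchanged
```

Appending to the overlay is one of the simplest format-preserving edits; more aggressive perturbations (e.g., modifying headers or adding sections) must be validated against both the PE specification and the sample's run-time behavior.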
The goal of this contest is to modify malware samples so that they evade static antimalware models while preserving their functionality.
This contest involves functional malicious binaries. By participating in this contest, you agree to the terms of service.