Machine-learning antimalware models are widely used in security to detect novel malware samples. However, these models have blind spots: tools that systematically discover input perturbations to evade machine learning models have been repeatedly demonstrated in domains such as computer vision. Structured inputs such as portable executable (PE) files present an additional challenge, because a perturbation may break the file format or the functionality of the file.
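One classic example of a perturbation that changes the bytes a static model sees without affecting behavior is appending "overlay" data past the end of the PE image, which the Windows loader ignores. The sketch below is illustrative only: the `MZ` check and the in-memory `sample` buffer are stand-ins, not real contest samples or tooling.

```python
def add_overlay(pe_bytes: bytes, payload: bytes) -> bytes:
    """Append overlay bytes after the end of a PE image.

    Data past the last section is ignored by the Windows loader, so
    this perturbs the raw byte stream seen by a static model while
    leaving the program's runtime behavior unchanged.
    """
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ magic)")
    return pe_bytes + payload

# Hypothetical stand-in for a real sample: a buffer with the DOS magic.
sample = b"MZ" + b"\x00" * 62
perturbed = add_overlay(sample, b"\x41" * 1024)
assert perturbed[: len(sample)] == sample  # original image is untouched
```

More invasive edits (adding sections, rewriting headers, packing) require keeping every offset and size field in the PE headers consistent, which is why the overlay trick is a common starting point.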

This contest challenges you to modify malware samples so that they evade static antimalware models while preserving their functionality.

Getting started

This contest involves functional malicious binaries. By participating in this contest you agree to the terms of service.

  1. Download the malware samples (log in to see the files).
  2. Analyze the available models.
  3. Modify the malware samples to evade the machine learning models while preserving functionality. Because uploads are rate limited, we highly recommend detonating samples offline in a Windows 10 x64 VM prior to submission.
  4. Submit your samples against the hosted models. Instructions here.
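The modify-then-score loop in the steps above can be sketched as follows. The scoring function and detection threshold here are hypothetical stand-ins for the hosted models (which score real PE features), and the "edit" is the functionality-preserving padding trick described earlier:

```python
import random

THRESHOLD = 0.8  # assumed detection cutoff for the toy model below

def toy_score(data: bytes) -> float:
    """Hypothetical stand-in for a hosted static model: scores the
    fraction of non-zero bytes. Real models extract PE features;
    this exists only to make the loop runnable."""
    return sum(b != 0 for b in data) / len(data)

def evade(sample: bytes, max_tries: int = 50) -> bytes:
    """Repeatedly apply a functionality-preserving edit until the
    model's score falls below the detection threshold."""
    rng = random.Random(0)
    candidate = sample
    for _ in range(max_tries):
        if toy_score(candidate) < THRESHOLD:
            break
        # Functionality-preserving edit: append zero-byte padding
        # (overlay data the loader ignores).
        candidate = candidate + bytes(rng.randrange(64, 256))
    return candidate

sample = bytes(range(1, 256)) * 4   # toy "malware" bytes, all non-zero
evaded = evade(sample)
assert toy_score(evaded) < THRESHOLD
assert evaded[: len(sample)] == sample  # original bytes preserved
```

In the actual contest the scoring step is an upload to the hosted models, which is why iterating against an offline surrogate first, and detonating in a VM to confirm functionality, saves your rate-limited submissions.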
For eligible participants, the highest-scoring entry with a published solution will be awarded a GPU.


  • opens: August 9, 2019
  • closes: October 18, 2019 anywhere on earth (AoE)
