Update

Sep 2, 15:51 CEST: After manual validation of submitted samples and timestamps, we have a winner! Congratulations to Will for achieving a perfect score, and congratulations to all who participated. We have been overwhelmed by the number of high-quality malware evasion submissions. If you participated, please summarize your approach and provide feedback using this Google form.

Aug 26, 17:00 CEST: Although the scoreboard previously showed a perfect winning score, as of August 26, 17:00 CEST that is no longer the case. A bug in the scoring calculation awarded extra points for some samples; those invalid points have been removed from the system. The competition is still on!

Overview

Machine learning antimalware models are widely used in security to detect novel malware samples. However, these models have blind spots. Tools that systematically discover input perturbations to evade machine learning models have been demonstrated repeatedly in domains such as computer vision. Structured inputs such as portable executable (PE) files present an additional challenge, because perturbations may break the file format or the functionality of the file.

This contest: modify malware samples to evade static antimalware models while preserving functionality.
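
To make the challenge concrete, here is a minimal sketch of one functionality-preserving perturbation: appending bytes after the end of a PE file (the overlay), which the Windows loader ignores at runtime. This is illustrative only and not part of the official tooling; the file paths and zero-byte padding below are placeholders.

    # Append arbitrary bytes to a PE's overlay; the loader ignores data
    # past the declared sections, so execution is typically unaffected.
    def append_overlay(in_path: str, out_path: str, payload: bytes) -> None:
        with open(in_path, "rb") as f:
            data = f.read()
        with open(out_path, "wb") as f:
            f.write(data + payload)

    # Placeholder paths; pad the sample with 4 KiB of zero bytes.
    append_overlay("sample.exe", "sample_modified.exe", b"\x00" * 4096)

Whether a change like this actually flips a model's verdict depends on the features the model uses; always confirm that the modified sample still runs before counting it as an evasion.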

Getting started

This contest involves functional malicious binaries. By participating in this contest you agree to the terms of service.

  1. Download the malware samples - log in to see the files
  2. Analyze available models
  3. Modify the malware samples to evade the machine learning models while preserving functionality. Due to upload rate limiting, we highly recommend detonating samples offline in a Windows 10 x64 VM prior to submission.
  4. Submit your samples against the hosted models. Instructions here. A hedged submission sketch follows this list.
For eligible participants, the highest-scoring entry with a published solution will be awarded a GPU.
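
For step 4, the sketch below shows what a submission loop against the hosted models might look like, assuming an HTTP API that accepts raw binaries. The endpoint URL, API token, model names, and response format are placeholders rather than the official interface; the linked instructions define the real API.

    import requests

    # Placeholders: the real endpoint, token, and model names come from
    # the official submission instructions.
    API_URL = "https://example-competition-host/api/models/{model}/submit"
    API_TOKEN = "YOUR_API_TOKEN"
    MODELS = ["model_a", "model_b", "model_c"]

    def submit(sample_path: str, model: str) -> dict:
        # POST the modified binary to one hosted model and return its verdict.
        with open(sample_path, "rb") as f:
            resp = requests.post(
                API_URL.format(model=model),
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                data=f.read(),
                timeout=60,
            )
        resp.raise_for_status()
        return resp.json()

    for model in MODELS:
        print(model, submit("sample_modified.exe", model))

Because submissions are rate limited, do your iteration offline and submit only candidates you have already verified in a VM.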

Timeline

  • opens: August 9, 2019
  • closes: October 18, 2019, anywhere on Earth (AoE)

Special thanks

Powered by: