The defensive capabilities of machine learning (ML) systems will be stretched to the limit at a Microsoft security event this summer.
Along with various industry partners, the company is sponsoring a Machine Learning Security Evasion Competition involving both ML experts and cybersecurity professionals.
The event is based on a similar competition held at AI Village at DEF CON 27 last summer, where contestants took part in a white-box attack against static malware machine learning models.
Several participants discovered approaches that completely and simultaneously bypassed three different machine learning anti-malware models.
“The 2020 Machine Learning Security Evasion Competition is similarly designed to surface countermeasures to adversarial behavior and raise awareness about the variety of ways ML systems may be evaded by malware, in order to better defend against these techniques,” says Hyrum Anderson, Microsoft’s principal architect for enterprise protection and detection.
Attack and defense
The competition will consist of two different challenges. A ‘Defender Challenge’ will run from June 15 through July 23, with the aim of identifying new defenses to counter cyber-attacks.
The winning defensive technique will need to be able to detect real-world malware with moderate false-positive rates, says the team.
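The two criteria named above, detection of real malware versus false positives on benign files, pull in opposite directions, which is why both are scored. As a purely illustrative sketch (these function and variable names are hypothetical, not the competition's actual evaluation code), a defense might be graded along these lines:

```python
# Illustrative sketch only: how a defense could be scored on detection
# versus false positives. Names are hypothetical, not the competition's
# real evaluation harness.

def score_defense(predictions, labels):
    """predictions/labels: 1 = flagged/actual malware, 0 = benign."""
    malware = [p for p, y in zip(predictions, labels) if y == 1]
    benign = [p for p, y in zip(predictions, labels) if y == 0]
    detection_rate = sum(malware) / len(malware)      # malware caught
    false_positive_rate = sum(benign) / len(benign)   # benign flagged
    return detection_rate, false_positive_rate

# Example: 3 of 4 malware samples caught, 1 of 4 benign files flagged
dr, fpr = score_defense([1, 1, 1, 0, 0, 1, 0, 0], [1, 1, 1, 1, 0, 0, 0, 0])
```

A defense that flags everything scores a perfect detection rate but a useless false-positive rate, so a winning entry has to balance the two.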
Next, an ‘Attacker Challenge’ running from August 6 through September 18 provides a black-box threat model.
Participants will be given API access to hosted anti-malware models, including those developed in the Defender Challenge.
Contestants will attempt to evade the defenses using only ‘hard-label’ query results – a malicious-or-benign verdict with no confidence score – with samples from final submissions detonated in a sandbox to verify they remain functional.
The final ranking will depend on the total number of API queries required by a contestant, as well as evasion rates, says the team.
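The attacker's loop in this setting can be sketched in a few lines. This is a minimal illustration of the hard-label, query-counting threat model described above, not the competition's API: the classifier, the mutation step, and the query budget are all stand-ins.

```python
# Hypothetical sketch of a hard-label black-box evasion loop: the
# attacker sees only a malicious/benign verdict per query and is scored
# on both evasion success and total queries used. The classifier and
# mutation function here are stand-ins, not the competition's API.

def evade(classify, mutate, sample, max_queries=100):
    """classify: returns True if the sample is flagged as malware.
    mutate: returns a functionality-preserving variant of the sample.
    Returns (evasive_sample_or_None, queries_used)."""
    queries = 0
    candidate = sample
    while queries < max_queries:
        queries += 1
        if not classify(candidate):      # benign verdict: evaded
            return candidate, queries
        candidate = mutate(candidate)    # try another variant
    return None, queries
```

Because the ranking counts queries as well as evasion rate, a naive mutate-and-retry loop like this one is exactly what a clever contestant would try to improve on.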
Each challenge will net the winner $2,500 in Azure credits, with the runner-up getting $500 in Azure credits.
To win, researchers must publish their detection or evasion strategies. Individuals or teams can register on the MLSec website.
“Companies investing heavily in machine learning are being subjected to various degrees of adversarial behavior, and most organizations are not well-positioned to adapt,” says Anderson.
“It is our goal that through our internal research and external partnerships and engagements – including this competition – we’ll collectively begin to change that.”