Framework aimed at helping security pros detect and remediate threats against ML systems
Amid a sharp rise in adversarial attacks against machine learning systems, Microsoft has released a new framework that it says will empower security analysts in their battle to protect AI-powered technology.
Machine learning has become an integral part of many of the applications we use every day – from the facial recognition lock on iPhones to Alexa’s voice recognition function and the spam filters in our emails.
But the pervasiveness of machine learning has also given rise to adversarial attacks, a breed of exploits that manipulate the behavior of algorithms by providing them with carefully crafted input data.
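The core idea can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), one of the best-known adversarial techniques: nudge each input feature a tiny amount in the direction that increases the model's loss, yielding an input that looks almost unchanged but is classified with far less confidence. The toy linear classifier, its weights, and the input below are all illustrative assumptions, not drawn from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "classifier": weights, bias, and a benign
# flattened input. All values here are stand-ins for illustration.
w = rng.normal(size=64)
b = 0.0
x = rng.normal(size=64)

def predict(x):
    """Probability the classifier assigns to the true class (y = 1)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# For logistic loss with true label y = 1, the gradient of the loss
# with respect to the input is (sigmoid(w·x + b) - y) * w.
y = 1.0
grad = (predict(x) - y) * w

# FGSM step: move every feature by a small epsilon in the sign of the
# gradient, producing a nearly identical but adversarial input.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad)

print(predict(x))      # confidence on the clean input
print(predict(x_adv))  # confidence on the perturbed input (lower)
```

Although each feature changes by at most 0.1, the perturbations all push the decision score the same way, so their effect compounds across the input; the same principle, applied to pixels, underlies the stop-sign and eyewear attacks described below.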
As previously reported by The Daily Swig, researchers have demonstrated how it is possible to confound machine learning algorithms with potentially devastating consequences.
Proof-of-concept exploits have included adding stickers to a stop sign to fool the computer vision system of a self-driving car into mistaking it for a speed limit sign, and hoodwinking facial recognition systems with specially crafted glasses.
Researchers tricked self-driving systems into misidentifying a stop sign as a speed limit sign
Enter the Matrix
Microsoft said it had seen a “notable increase” in attacks against commercial machine learning systems over the last four years.
This, combined with research (PDF) indicating that organizations are gravely unprepared to detect or respond to attacks against their machine learning systems, resulted in the creation of the Adversarial ML Threat Matrix.
Developed in partnership with Mitre Corporation, which maintains the widely used ATT&CK framework, along with several tech firms including IBM, Nvidia, and Bosch, the tool is pegged as an industry-focused, open framework that “systematically organizes the techniques employed by malicious adversaries in subverting ML systems”.
New and upcoming threats
As with the ATT&CK framework, information security professionals can use the tabulated tactics and techniques found in the Adversarial ML Threat Matrix to improve monitoring strategies around their organization’s machine learning systems.
“The goal of the Adversarial ML Threat Matrix is to position attacks on [machine learning] systems in a framework [in which] security analysts can orient themselves [to] these new and upcoming threats,” said Microsoft’s Ram Shankar Siva Kumar and Ann Johnson in a joint blog post yesterday (October 22).
Mikel Rodriguez, director of machine learning research at Mitre, added: “This framework is a first step in helping to bring communities together to enable organizations to think about the emerging challenges in securing machine learning systems more holistically.”
The Adversarial ML Threat Matrix follows Microsoft’s launch earlier this year of another ATT&CK-inspired matrix – one that focuses on the identification of weaknesses in Kubernetes, the open source container orchestration platform.
The Kubernetes attack matrix features nine principal tactics used by attackers looking to gain a foothold in organizations’ cloud container infrastructure.
To learn more about this latest project, visit the Adversarial ML Threat Matrix GitHub repository.