Tool aims to help developers secure their neural networks

IBM has launched an open source toolkit designed to make artificial intelligence systems more robust.

The security toolbox, which was launched at the RSA Conference last week, aims to help developers protect their systems from malicious threats, including the manipulation of Deep Neural Networks (DNNs).

DNNs are machine learning models built from many hidden layers, which lets them spot patterns and capture non-linear relationships in data.

As IBM explained in a blog post, a DNN can be exploited using ‘adversarial examples’ – inputs that have been deliberately modified to produce the output an attacker wants.

If a malicious actor wants to confuse an AI system, they can subtly alter the input data to trick the network into giving a different response.
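To make that concrete, here is a toy sketch (illustrative only, not code from IBM's toolbox) showing how a small, carefully aimed perturbation can flip a simple linear classifier's decision; the model and numbers are made up for demonstration:

```python
import numpy as np

# Toy linear classifier standing in for a trained network: predicts class 1
# when w.x + b > 0, class 0 otherwise. Weights and input are made up.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

# Start from an input the model places in class 0.
x = rng.normal(size=20)
if predict(x) == 1:
    x = -x  # flip it so the clean prediction is class 0

# Fast-gradient-style perturbation: nudge every feature by a small, equal
# amount in the direction that pushes the score towards class 1.
margin = -(w @ x + b)                    # distance below the decision boundary
eps = margin / np.abs(w).sum() + 0.01    # smallest uniform step that crosses it
x_adv = x + eps * np.sign(w)

print("clean prediction:      ", predict(x))      # 0
print("adversarial prediction:", predict(x_adv))  # 1
print("max per-feature change:", round(eps, 3))   # small relative to the data
```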

IBM noted that this poses a serious threat to the future of AI and explained how its Adversarial Robustness Toolbox aims to block this kind of attack.

IBM also hopes that developers will build on the tool to create software that further guards against this vulnerability.

The toolbox, written in Python, implements algorithms for generating adversarial examples, along with defenses that help protect models against them.
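For a sense of what that looks like in practice, the sketch below generates adversarial examples with the toolbox against a toy scikit-learn model. It assumes the module layout of recent releases of the adversarial-robustness-toolbox package, which may differ from the version available at launch, and the toy model and data are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a simple model on toy data and wrap it for the toolbox.
rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 10)).astype(np.float32)
y_train = (x_train.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(x_train, y_train)
classifier = SklearnClassifier(model=model)

# Generate adversarial examples with the Fast Gradient Method attack.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_adv = attack.generate(x=x_train)

# Accuracy typically drops sharply on the perturbed inputs.
clean_acc = (classifier.predict(x_train).argmax(axis=1) == y_train).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_train).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```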

IBM CTO Sridhar Muppidi said: “So far, most libraries that have attempted to test or harden AI systems have only offered collections of attacks.

“While useful, developers and researchers still need to apply the appropriate defenses to actually improve their systems.”