First firm to develop strong artificial intelligence likely to be hacked senseless by rivals

The bad guys may not have abused artificial intelligence (AI) and machine learning yet, but the use of these technologies in malware or phishing attacks is imminent, according to F-Secure’s chief research officer Mikko Hypponen.

Machine learning technology would, for example, allow crooks to release self-modifying code that mutates in order to evade detection.
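As a purely illustrative sketch of that idea (our toy example, not anything Hypponen demonstrated), evasion can be framed as a feedback loop: a detector scores a sample, and a mutation routine keeps random changes that lower the score. Here the detector, the EVIL_MARKER signature, and the byte-flipping mutator are all invented, harmless stand-ins:

```python
import random

SIGNATURE = b"EVIL_MARKER"  # hypothetical pattern a naive scanner looks for

def detector_score(sample: bytes) -> float:
    """Toy stand-in for a detection model: 1.0 if the known
    signature is present in the sample, 0.0 otherwise."""
    return 1.0 if SIGNATURE in sample else 0.0

def mutate(sample: bytes) -> bytes:
    """Replace one random byte: a stand-in for the transformations
    self-modifying code might apply to itself."""
    i = random.randrange(len(sample))
    return sample[:i] + bytes([random.randrange(256)]) + sample[i + 1:]

# Hill-climb: keep any mutation that does not raise the detector's score
sample = b"header " + SIGNATURE + b" payload"
for _ in range(1000):
    candidate = mutate(sample)
    if detector_score(candidate) <= detector_score(sample):
        sample = candidate
    if detector_score(sample) == 0.0:
        break

print(detector_score(sample))  # 0.0 once the signature has been broken
```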

Security firms have been using machine learning techniques for years on the defensive side.
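For a sense of what that defensive use can look like in practice, here is a minimal sketch of file classification with scikit-learn; the per-file features and training labels below are invented for illustration, not drawn from any real product:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-file features: [size_kb, entropy, num_imports, is_packed]
X_train = [
    [120, 4.2, 35, 0],   # benign samples (all values invented)
    [340, 5.1, 80, 0],
    [95,  7.8,  3, 1],   # malicious samples
    [210, 7.5,  5, 1],
]
y_train = [0, 0, 1, 1]   # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new, unseen file's feature vector
print(clf.predict_proba([[180, 7.6, 4, 1]]))  # class probabilities [benign, malicious]
```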

Its absence on the offensive side so far is likely down to a “skills gap”, according to Hypponen.

“If you’re an AI expert, you make a great living selling services to legitimate businesses without resorting to criminality,” Hypponen explained, adding that this situation is likely to change as the technology becomes more developed and accessible.

Initiatives like TensorFlow – the open source machine learning framework – are likely to accelerate developments in AI, Hypponen said.
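To illustrate the accessibility point, a small neural network classifier takes only a few lines in TensorFlow’s Keras API; the toy architecture below is our own arbitrary choice:

```python
import tensorflow as tf

# A tiny binary classifier over four input features; the layer sizes are arbitrary
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10)  # trained on labeled data, as in any ML workflow
```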

But developments in AI and machine learning could increase rather than decrease the likelihood of conflict, Hypponen warned.

The security expert did not fear the rise of Terminator-style killer robots, but was more concerned about the possibility that rival companies, or perhaps countries, would attack enterprises on the cusp of releasing technology based on strong AI.

Hypponen mused: “What would happen if a company got close? Other companies would want to either steal or destroy it.

“AI will most likely increase the level of conflict,” he concluded.

Hypponen made his comments at an F-Secure press event in London this week.


RELATED The next arms race: Cyber threats pulled into stark focus at Black Hat Asia