Not just a buzzword
Artificial intelligence (AI) may be the tech buzzword of the year, but more and more companies are turning to AI-driven products and services to help secure their networks.
Streamlining security processes is among the main reasons for AI adoption, as these new technologies look to counteract the human errors that are often ground zero for data breaches and BEC (business email compromise) scams.
Despite the Skynet doomsday warnings that inevitably come up in any conversation about AI and machine learning (ML), most think that these technologies will help – not hinder – the workflow of security teams, automating mundane tasks such as routine pen testing while improving accuracy.
A study of over 3,000 IT security leaders last year, for example, found that a quarter of organizations were already leveraging AI and ML to help fill the defense gap that an increasing prevalence of digital devices has started to create.
And if not already using an AI or ML solution in their blue team toolkit, a quarter of organizations were planning to do so within a 12-month period, according to the research conducted by the Ponemon Institute on behalf of cybersecurity firm Aruba.
Almost a year since security leaders were asked about their use of AI and ML technologies, some think that the uptake and value of the tools have been severely understated, for both attack and defense scenarios.
“Well over 50% were either using or intending to use AI in their security and defense capability at that time,” said Larry Lunetta, vice president of marketing at Aruba, speaking to The Daily Swig over the phone.
“Based on our product sales and soundings in the industry, that’s probably low in terms of how people are implementing AI.”
Lunetta gives an example of a ransomware attack, where a malicious actor will, in part, hide within legitimate network traffic in order to stay off an organization’s blacklist.
ML and AI combined can help stop such activity through behavioral analysis – combing through the massive volume of traffic in order to spot when there’s something on the network that shouldn’t be there.
“You need to know what normal looks like, and what ‘not normal’ looks like in the phase of the attack,” Lunetta explained.
“Each of these [malware] families have specific profiles of how they do encryption, for instance, and we can understand that and build models.”
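The behavioral approach Lunetta describes – learn what "normal" looks like, then flag deviations – can be sketched as a simple baseline-and-deviation check. This is a toy illustration using made-up traffic volumes, not Aruba's actual models:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn what 'normal' traffic volume looks like from history."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations
    from the baseline (a z-score test)."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical hourly byte counts (in KB) for one host
normal_traffic = [980, 1020, 1005, 990, 1010, 1000, 995, 1015]
baseline = build_baseline(normal_traffic)

print(is_anomalous(1008, baseline))  # typical volume: False
print(is_anomalous(9500, baseline))  # ransomware-like spike: True
```

Production systems model far richer features (encryption profiles, destinations, timing) than a single volume statistic, but the baseline-then-deviate logic is the same.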
Companies like Aruba, which supply AI-led tools to Fortune 500 corporations and school districts alike, pair security researchers with data scientists in order to produce an effective algorithm for something like malware detection.
These algorithms are trained on data that the company describes as “millions of examples of right and wrong answers”, which can eventually point to malicious properties and identify patterns – approaches known as supervised and unsupervised machine learning, respectively.
The broader the data, the better the algorithm will operate, Lunetta said.
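Training on labeled “right and wrong answers” can be illustrated with a minimal supervised classifier – here a nearest-centroid model over two invented features (the feature names and values are assumptions for illustration, not Aruba's pipeline):

```python
# Toy supervised learning: known-benign and known-malicious samples
# define class centroids; new samples take the nearer centroid's label.

def centroid(vectors):
    """Element-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, benign_centroid, malicious_centroid):
    """Assign the label of the nearer centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return ("malicious"
            if dist(sample, malicious_centroid) < dist(sample, benign_centroid)
            else "benign")

# Hypothetical labeled training data:
# [payload entropy, outbound connections per minute]
benign = [[3.1, 2.0], [2.8, 1.5], [3.4, 2.2]]
malicious = [[7.6, 40.0], [7.9, 55.0], [7.2, 48.0]]

b_c, m_c = centroid(benign), centroid(malicious)
print(classify([7.5, 50.0], b_c, m_c))  # lands near the malicious cluster
```

The "broader the data, the better" point maps directly onto this sketch: centroids estimated from millions of labeled samples generalize far better than ones built from three.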
Malwarebytes is another firm that has been utilizing the technologies in its products for some time.
“I think we’ve definitely seen more and more vendors utilizing it, or at least claiming that they’ve utilized it,” Adam Kujawa, director of Malwarebytes Labs, told The Daily Swig.
“I don’t hear a lot about people being overly concerned about utilization and machine learning and AI because we’ve had the time to train these things to be more accurate and less likely to create false positives.”
False positives – benign activity flagged as an attack – are one of the most noteworthy problems with machine learning and AI, along with false negatives, where a real attack remains hidden.
“The dangers of having AI that is not deployed, or configured, correctly is that it can be manipulated by third-parties to create a bias that says an attack is not actually an attack,” Kujawa said.
“Which is why we [Malwarebytes] do most of our stuff locked down.”
Machines never sleep
Consumers are warming up to the idea of machine defenders, as well, with more than a quarter (26%) of individuals preferring cybersecurity to be managed by AI rather than a human, according to a recent study by Palo Alto Networks.
But it’s not one or the other, Lunetta said.
“Look at it as a complement,” he said.
“You’re combining the two techniques – ML and AI can cut a whole lot of white noise. By using ML as a filter you can get much less false positives, and then, conversely, you are more likely to find an attack coming from the security ecosystem with ML.”
For all the good these technologies are set to do, attackers, too, are using AI and ML to create more sophisticated means of compromising an organization’s infrastructure – anything from machine learning that auto-crafts convincing spear-phishing emails to anticipated developments in ML-powered malware. As offensive techniques sharpen, defenders need to be on guard.
“Organizations swim in a 24/7 attack environment,” Lunetta said.
“Simple attacks, advanced attacks – the security team doesn’t see a majority of attacks, so you want to put the humans where they do the most good.
“That’s the goal of the AI approach.”