Businesses that fail to plan for a cyber incident can pay a heavy price, but more examples of good practice are emerging
ANALYSIS According to many within the security industry, it’s not a question of if, but when you will be hacked. Disheartening as this may be, an organization’s continued prosperity – or even its survival – will depend on its actions post-breach.
Increasingly, large businesses and public sector bodies do this by turning to a cyber incident response plan, or playbook. This will cover the threats they face and how the organization should react in the face of a cyber-attack.
Ideally, these scenarios will be tested through table-top exercises and simulations, and potentially red team exercises, with pen testers acting as a hostile group, breaking through the target organization’s defenses and analyzing how it responds.
These cyber ‘war games’ are effective, but they are also expensive and disruptive to an organization’s core business. This is especially the case if production systems have to be taken down or the playbook suggests emulating a live breach.
There have been cases where a test has been mistaken for a live incident, with considerable unintended consequences. But the consequences of failing to plan can be greater still.
“A cyber incident response plan (IRP) prepares an organization so that a coherent and coordinated response can limit the impact and help take the sting out of a cyber incident,” says John Higginson, head of incident preparedness at security consultants Context Information Security.
“Having a plan avoids panic if an incident occurs. [It also] enhances an organization’s ability to notify and triage incidents sooner and, more importantly, deal with them expeditiously.
“Any direct losses should be minimized, whilst the indirect impacts or risks of potential regulatory fines and brand reputational damage or customer dissatisfaction are also reduced.”
This, though, requires developing an effective plan, testing it, and keeping it up to date.
Just as businesses have found with disaster recovery planning – and cyber incidents can play out in similar ways – plans need to be revised frequently, to account for changes in the organization’s operations and changes in the threat landscape.
And the plan needs testing. To borrow a phrase from the military, ‘no plan survives contact with the enemy’. The organization needs to be able to respond to a dynamic and often fast-developing situation.
Testing a cyber incident response is less about the specifics of what the organization needs to do than about how effectively people and systems operate under pressure, whether the command and control systems work, and how quickly the situation returns to normal.
“Organizations that respond constructively to cyber incidents tend to have more prepared and less chaotic responses to incidents,” says Dr Sanjana Mehta, head of market research strategy for EMEA at security certification organization (ISC)2.
“This is because they have previously invested time in understanding vulnerabilities, conducting simulated exercises and developing response plans. Planning and preparation are common attributes of successful organizations. This includes planning for communication following an incident.”
In the eye of the storm
Examples abound of businesses that failed to plan for a cyber incident and paid a heavy financial and reputational price.
But more examples of good practice are emerging.
Norsk Hydro, which suffered a ransomware attack in March this year, gained plaudits for its openness about the incident – even though the attack could end up costing the business up to $75 million.
The company even released a series of fly-on-the-wall videos showing how it responded to the incident in real-time.
Web infrastructure and security company Cloudflare also took steps to reassure its customers following an incident in July that was initially rumored to be a cyber-attack, but transpired to be a networking issue.
“At the very beginning we were concerned it might be some sort of attack, but our DDoS team was quickly able to dismiss that idea and our own monitoring identified the WAF (web application firewall) as the component at fault,” John Graham-Cumming, Cloudflare’s CTO, told The Daily Swig over email.
“We got the situation under control in 27 minutes, and as we haven’t had a global outage for many years this felt fast at the time. But as we looked into the actual incident, we realized that we were rusty in our major incident process,” he concedes.
“A lot could be done to tighten up procedures, leading to a much shorter outage should something like this occur many years into the future.”
According to the CTO, although being open is not always easy, it is essential.
“The only way to build trust is to be honest and not obfuscate,” he said.