Proof that bombarding users with too much – and sometimes conflicting – security advice doesn’t always work
ANALYSIS A systematic evaluation of security and privacy advice on the web has faulted the industry for failing to prioritize its defensive guidance.
A comprehensive and rigorous study by academics identified 374 unique recommended behaviors contained within 1,264 documents of online security and privacy advice.
These 374 tips – billed as the first comprehensive taxonomy – were evaluated by 1,586 users and 41 professional security experts. Analysis revealed that although the advice was reasonably practical and understandable, it still failed to cut through due to the lack of prioritization.
“The majority of advice is perceived by most users to be at least somewhat actionable and somewhat comprehensible,” a paper (PDF) by the researchers states. “Yet, both users and experts struggle to prioritize this advice.”
“For example, experts perceive 89% of the hundreds of studied behaviors as being effective and identify 118 of them as being among the ‘top five’ things users should do, leaving end users on their own to prioritize and take action to protect themselves,” it adds.
The research – put together by a combined team from the University of Maryland; University of California, San Diego; and Rutgers University – was presented at the recent Usenix conference, where it won a distinguished paper award.
Dr Elissa Redmiles, a researcher at Microsoft Research and the University of Maryland who led the study, told The Daily Swig that a failure to account for context was a clear shortcoming of current security advice.
“None of the advice we looked at made clear the threat model that it protected against,” Redmiles explained.
Measure for measure
Despite advances in security best practices, end-user decisions still lead to significant security risks from phishing and other threats.
The research also considered the cumulative ecosystem of security behavior messaging and its effect on users.
“Experts should rigorously measure the impact of suggested behaviors on users’ risk and ruthlessly identify only the minimal set of highest impact, most practical advice to recommend,” the researchers argue.
Rather than attempting to make experts agree, there should be a focus on creating a “community norm to follow best practice guidelines,” according to Redmiles (being explicit about threat models and degree of risk, for example).
This could lead to the creation of a resource where experts can check what they want to recommend against a centralized system.
In healthcare, there are organizations responsible for evaluating the evidence and making recommendations, such as the US Centers for Disease Control and Prevention (CDC). No comparable body exists for cybersecurity hygiene.
“NIST [the National Institute of Standards and Technology] attempts to play this role currently, but does not focus heavily on consumers,” according to Redmiles.
Too complicated, already
Richard Meeus, director of security technology and strategy at Akamai, said that the report “clarifies a lot of current beliefs around internet and computer security messaging to the general populace – it is confusing, it is vast, and it sometimes contradicts itself”.
For example, some experts advise changing passwords frequently, while others advise against it, because users who are obliged to switch passwords often are more likely to choose simpler, and therefore more predictable, login credentials.
Boris Cipot, senior security engineer at Synopsys, said that practices, advice, and policies need to be “easy to follow and easy to apply”.
“Automation and simplicity are the best ways for users to follow advice,” Cipot said. “For everything else there has to be a clear understanding around why certain tasks have to be carried out or why certain actions are not allowed (like clicking on links or attachments in emails, opening spam messages).”
David Shrier, program director for Oxford Cyber Futures at Saïd Business School, told The Daily Swig that the study illustrated the “broader issue of data and security literacy”.
“Most people don’t have the necessary context to judge and apply the advice being given to them,” he said.
Shrier compared security advice to the terms and conditions of software usage that users are often confronted with – and almost invariably ignore.
“People don’t know how to evaluate what is being presented, nor do they want to spend the time,” Shrier observed.
Security experts, he argued, should focus on reducing the “cognitive burden on people when they attempt to prioritise”.
One approach to addressing this gap in understanding might involve developing a standardized visual taxonomy – a “set of four or five icons that represent different critical concepts, that could be presented alongside the actual recommendations”, Shrier concluded.