An upcoming US Supreme Court ruling may bring some much-needed clarity to the Computer Fraud and Abuse Act, but some security pros are arguing for more sweeping reforms
ANALYSIS Progress in reducing the hostility that security researchers face when they break into networks, applications, and devices for the benefit of users and businesses has been uneven.
On the one hand, awareness of the existence and merits of white hat hackers has grown in the nearly 10 years since US bug bounty platforms HackerOne and Bugcrowd were founded.
On the other, US courts are still wrangling over how to interpret anti-hacking legislation passed three years before the world’s first commercial provider of dial-up internet access emerged.
As The Daily Swig explored recently, a forthcoming Supreme Court ruling could settle that argument for a generation, but this is unlikely to satisfy advocates of more fundamental reform.
The US’s highest court agreed in April to rule on the case of a Georgia State police officer, Nathan Van Buren, who ran a license plate search allegedly in return for payment.
Rebecca Jeschke, media relations director and digital rights analyst at the Electronic Frontier Foundation (EFF), had previously urged the Supreme Court to “stop dangerous overbroad interpretations of” the Computer Fraud and Abuse Act (CFAA), which prohibits accessing a computer “without authorization” or exceeding “authorized access”.
“Logging into your spouse’s bank account, checking your personal email on your work computer, or sharing a social media password […] should not result in criminal penalties,” she added.
Terms of service
A district judge made a similar argument in 2009 in throwing out the case against Lori Drew, who had been convicted under the CFAA over cyberbullying linked to the suicide of 13-year-old Megan Meier.
“It basically leaves it up to a website owner to determine what is a crime, and therefore it criminalizes what would be a breach of contract,” said Judge George Wu in dismissing the government’s argument that Drew had violated Myspace’s terms of service in setting up a fake account.
The EFF has also decried the law’s “disproportionately harsh penalty scheme”, with even first-time offenses attracting prison terms of up to five years. Repeat offenses can be punished by up to 10 years’ imprisonment.
In the worst-case scenario, CFAA violations can result in a life sentence.
The only notable case in which a security researcher has been convicted under the CFAA was also the law’s very first prosecution, for the havoc unleashed in 1988 by the Morris Worm.
None of the three security researchers The Daily Swig spoke to had encountered any notable legal issues during their legitimate work.
The ‘unauthorized access’ grey area
Dan Tentler, founder of computer security outfit Phobos Group, says his firm’s paperwork “doesn’t leave any wiggle room in terms of legality” when it comes to probing clients' systems and networks.
Similarly, Ryan Barnes, principal security engineer at the Crypsis Group, says “a few good habits” have helped him navigate the “grey area in penetration testing”. These include clearly defining the scope of engagement, “staying strictly within that scope”, and clear communication with clients “if there is ever any doubt” that he might stray “outside of the agreed-upon actions”.
However, he says the risk remains of clients misinterpreting “security testing actions they don’t understand as unlawful. For example, if it is agreed that a penetration test will be conducted ‘to test if a vulnerability is exploitable’, that statement alone is vague and has the potential to be misinterpreted.”
For instance, in certain scenarios the “best or only way” to prove the client is vulnerable after exploiting a vulnerability is to “exfiltrate a sample of data stored on the target. This is an area of risk,” adds Barnes.
“The best way to avoid these situations is to ensure clarity of scope at the outset and provide ongoing communication of actions.”
Casey Ellis, chairman, founder, and CTO of Bugcrowd, says the CFAA predates “the concept of a ‘digital locksmith’” and “criminalize[s] hacking done in good faith, which discourages white hats from finding vulnerabilities before the black hats.”
Abusing the CFAA’s ambiguity
Three in five (60%) ethical hackers have told Bugcrowd that they sometimes don’t report vulnerabilities due to the fear of prosecution.
However, Dan Tentler identifies another chilling effect on legitimate research.
“The only times the CFAA ever comes into play is when someone attempts to publish work, or notify a company that they have some kind of security gap or vulnerability.”
But, he adds: “I’ve largely stopped posting security research because the security community has decided that it is unable to police itself, and is running wild with plagiarism and theft.”
For Tentler, even a narrow interpretation of the CFAA, or minor reforms like Aaron’s Law – which has twice stalled in Congress – won’t redeem the legislation.
“It was written close to 40 years ago, long before the internet existed as it exists today,” he explains, adding that “lawmakers, businesspeople, [and] huge corporations” abuse its ambiguity.
The CFAA, says Tentler, must be “scrapped” and lawmakers must “start from scratch” in consultation with the business and security communities.
In the meantime, says Ellis, businesses and bug bounty platforms must “provide an obvious and easy way for external parties to report these vulnerabilities without fear of legal prosecution.”
If demand for crowdsourced security is any barometer – it “has increased astronomically over the last couple years”, says Ellis – then Corporate America is heeding this advice.
“Crowdsourced security is no longer just for the early adopter tech giants, [it’s] for organizations of any size and level of security maturity,” he adds. “Verticals including financial services, retail, and healthcare showed the biggest upticks.”
The global pen testing market is also burgeoning.
But demand isn’t the only important variable, says Tentler.
“Businesses have started treating bug bounty like pentests,” he argues. “They spin up a bug bounty, and then say everything that was turned in was out of scope, and never pay out any rewards at all.
“It saves them from having to deal with lots of legal implications and scrutiny” while continuing “to ignore security outright”.
Safe harbor required
Jonathan Leitschuh, a software engineer at Gradle and security researcher, expressed frustration with Zoom last year for failing to handle vulnerabilities he discovered, or to fix the problem “in a timely manner”.
Leitschuh, who has also recently found a CRLF injection bug in Micronaut, believes users are regularly short-changed by a lack of transparency from companies.
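As a rough illustration of the class of bug Leitschuh reported (this sketch is not Micronaut code and makes no assumptions about that framework’s internals), CRLF injection arises when attacker-controlled input containing carriage-return/line-feed sequences is interpolated into an HTTP response header, letting the attacker smuggle in extra headers:

```python
def build_header(location: str) -> str:
    # Naive: user input is interpolated directly into a response header,
    # so a payload containing "\r\n" can start a new, attacker-chosen header
    return "Location: " + location + "\r\n"

def build_header_safe(location: str) -> str:
    # Common mitigation: strip CR and LF so the input cannot terminate
    # the current header line and begin another
    sanitized = location.replace("\r", "").replace("\n", "")
    return "Location: " + sanitized + "\r\n"
```

With a payload such as `"https://example.com\r\nSet-Cookie: session=evil"`, the naive version emits a second, injected `Set-Cookie` header, while the sanitized version collapses it into a single (harmless) `Location` value.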
Safe harbor policies are often “predicated on security researchers staying quiet about their vulnerabilities”, he told The Daily Swig, adding that this can result in bugs not receiving a CVE or being disclosed to customers.
Whether it’s the wording of anti-hacking laws or of public disclosure policies, ambiguity, along with a lack of transparency, seems to be the biggest deterrent to good-faith research.
Ellis says Safe Harbor initiatives like Disclose.io help “organizations feel safe” from “extortion or reputational damage”, while security researchers can avoid “facing legal repercussions”.
And running invite-only programs gives customers “time to clear that trust hurdle” before subjecting their applications to testing by the wider ethical hacking community.
Whatever progress is made on attitudes to white hat activities, the fact that lawmakers – like wider society – are at least seeing beyond the ‘hacker in a hoodie’ archetype can only help reform efforts.