The biggest threat is a consumer mindlessly downloading apps

ANALYSIS Healthcare is increasingly going mobile, as hospitals and medical practitioners look to reduce waiting room times by harnessing the benefits of treatment on the go. But patients often place too much trust in these apps, which can expose them to fresh security and privacy risks.

The rapid growth of the mobile healthcare app market was born more out of necessity than any medical advancement, in the view of Adam Piper, a software developer working in the UK.

“If I want to get a doctor’s appointment, it has to be today, and by 8.01am all the appointments are gone,” Piper told The Daily Swig.

“A mobile app is the non-insane version of booking an appointment.”

Thousands of apps, doctor-prescribed or not, are now available for download on Google Play or the Apple App Store, where convenience has typically outweighed security in the rush to treat as many people as possible.

When navigating the iOS and Android app ecosystems, most of the risk falls firmly on the consumer, who is liable to have their healthcare data siphoned off as soon as an application is downloaded.

“More health data is going through mobile applications today than it ever has, and it’s continuing to increase,” Rusty Carter, vice president of product management at Arxan Technologies, told The Daily Swig.

“The ability to access that information is also very easy.”

Self-care apps that collect and share user data without adequate transparency have been thrown into the spotlight with the rise of mobile healthcare services.



A study of 36 of these apps, published last April, found that 81% were disclosing consumer information to Google and Facebook. Only around half of these apps were transparent about their data-sharing practices (43% for data shared with Google, 50% for Facebook).

When faced with a pressing healthcare issue, a consumer is unlikely to consider the future implications of the connected infrastructure involved – in this case, the app itself and the multiple entities it could be interacting with.

“If it [the data] was a Unix terminal in a clinical study, it’s much harder [for an attacker] to gain access to,” Carter said.

“Whereas I can download a mobile application and get to work [as an attacker] in figuring out the APIs, figuring out how the application works, and figuring out which vulnerabilities I can capitalize on.”

Security-first software development

Encrypting data both at rest and in transit, checking that a device isn’t jailbroken (which could open it up to vulnerabilities), and guaranteeing that the application is functioning normally with proper authentication are among the issues that don’t necessarily make it into a developer’s schedule, Carter explained.

“Similarly, through the server, it [the server] needs to know that the application in my mobile device hasn’t been tampered with, that the data itself is uncorrupted, and running in a safe environment,” he said.
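The measures Carter describes can be sketched in code. The snippet below is an illustrative example only, assuming an Android client written in Kotlin that uses the OkHttp library for networking; the hostname and certificate pin are hypothetical placeholders, not endpoints used by any real healthcare app.

```kotlin
import java.io.File
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Heuristic root/jailbreak-style check: look for common su binary locations
// and debug-signed build tags. Real apps usually combine several signals
// (e.g. platform attestation APIs) rather than relying on a single check.
fun deviceLooksRooted(): Boolean {
    val suPaths = listOf(
        "/system/bin/su", "/system/xbin/su",
        "/sbin/su", "/system/sd/xbin/su"
    )
    val hasSuBinary = suPaths.any { File(it).exists() }
    val debugSigned = android.os.Build.TAGS?.contains("test-keys") == true
    return hasSuBinary || debugSigned
}

// Protect data in transit: pin the server's certificate so the app refuses
// TLS connections that don't present the expected public key.
// "api.health-example.org" and the sha256 pin below are placeholders.
val pinnedClient: OkHttpClient = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add("api.health-example.org",
                 "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build()
    )
    .build()
```

Client-side checks like these complement, rather than replace, the server-side verification Carter describes that the app itself hasn’t been tampered with.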

According to the latest Verizon Mobile Security report, 25% of healthcare organizations reported a data breach involving a mobile device in 2018, compared with 42% in a sector like financial services.

Security needs to be built into the development process to counteract this, Carter said, but continuous checks are needed as well.

“Protecting mobile apps is far less costly than alternatives such as specialized hardware, and definitely less expensive than the cost of a data breach or hack,” Carter said.

“The cost of protecting mobile apps in human effort is typically one person a day, or two per platform (iOS, Android), of total effort depending on the number of pen tests and iterative updates and the complexity of the application.”


RELATED Latest NIST project aims to secure ‘telehealth’ ecosystem


But consumers downloading applications onto their devices remains the biggest security threat to mobile, Carter said.

“At this point, you need to look for official apps only for connected medical devices,” Carter said.

“This provides the highest likelihood of a secure application and protection of data.

“Sharing data with any app creates potential risk of exposure, so limiting to the ‘need’ based on use/application is important,” he added.

In January of last year, the UK’s National Health Service (NHS) publicly rolled out an app for booking medical appointments, accessing medical records, and ordering prescriptions.

“We see the app as the digital front door into the NHS, for those who want to use it, and once rolled out we will continue to develop and enhance its offer to patients, making it the must have health app for everyone in England,” said Matthew Swindells, deputy chief executive of NHS England upon making the NHS App publicly available.

The app, which recently open-sourced part of its code to allow developers to create a biometric sign-on, is currently used by 95% of GP surgeries in England.
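For illustration, biometric sign-on on Android typically goes through the AndroidX BiometricPrompt API. The Kotlin sketch below is a generic example of that pattern, not code from the NHS App’s open-sourced repository; the prompt strings and callback are placeholders.

```kotlin
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

// Generic biometric sign-on flow using AndroidX BiometricPrompt.
fun promptForBiometricSignOn(activity: FragmentActivity, onSuccess: () -> Unit) {
    val executor = ContextCompat.getMainExecutor(activity)

    val callback = object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            // e.g. unlock a locally stored, encrypted session token
            onSuccess()
        }
    }

    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Sign in")
        .setSubtitle("Use your fingerprint or face to continue")
        .setNegativeButtonText("Use password instead")
        .build()

    BiometricPrompt(activity, executor, callback).authenticate(promptInfo)
}
```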

Using NetGuard, a tool that shows users which servers an app on their device is communicating with, The Daily Swig found that the NHS App made no suspicious connections.


YOU MIGHT ALSO LIKE Healthcare security report: Organizations face ‘uphill battle’ against cybercriminals