How hackers are manipulating vulnerable bots to steal data, impersonate employees, and carry out phishing scams

Chatbots are fast becoming a standard customer service tool for companies worldwide – a survey by Oracle last year found that 80% of marketers in France, the Netherlands, South Africa, and the UK are already using chatbots, or plan to do so by 2020.

However, like any technology, they are potentially vulnerable to hackers – and a compromise can have dire consequences.

In June, for example, Ticketmaster admitted that it had suffered a security breach caused by a single piece of JavaScript code that had been customised by third-party customer support supplier Inbenta.

It's believed that the names, addresses, email addresses, telephone numbers, payment details, and Ticketmaster login details of as many as 40,000 UK customers were accessed – but it's still not known for sure how the attackers gained access to Inbenta’s systems.

Fundamentally, though, chatbots are vulnerable to the same sorts of attacks as any other technology.

“If an attacker gains access to a network, the data that the chatbot has access to can be compromised,” Randy Abrams, senior security analyst at internet security firm Webroot, tells The Daily Swig.

“It’s like any other data on the network. If I can modify the data, then I can feed a chatbot misinformation for fun and profit.”

However, chatbots can also have vulnerabilities all of their own. In 2016, for example, Microsoft’s Tay bot was manipulated into spouting anti-Semitic and racist abuse.

In this case, the problem was that Microsoft allowed Tay to learn from the people it interacted with – and those people, it seems, weren’t very nice.

In another example, on the dating app Tinder, cybercriminals used a chatbot posing as a woman to ask victims to enter their payment card details in order to become ‘verified’ on the platform.

And this is where psychological factors come into play. Even when users know they’re dealing with a chatbot rather than a real company representative, they tend to place a surprising degree of blind trust in its requests.

“A commandeered chatbot can be used for phishing. Chatbots essentially act as company representatives,” says Abrams.

“When a representative from a trusted organisation says ‘follow this link and log in...’, it is not suspicious to the victim.

“This can be effective against even very skilled security experts. Most users will have no chance against this attack vector.”

Unreported

According to business risk intelligence firm Flashpoint, many attacks go unreported, meaning that awareness of the potential risks remains low.

Standard security measures such as multi-factor authentication and keeping software and security patches up to date can go a long way towards mitigating risks, as can encrypting conversations between the user and the chatbot.

“Companies may also consider breaking messages into smaller bits and encrypting those bits individually rather than the whole message.

“This approach makes offline decryption in the case of a memory leak attack much more difficult for an attacker,” suggest Flashpoint’s Amina Bashir and Mike Mimoso.
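As a rough illustration of what that could look like in practice, the Python sketch below splits a chat message into fixed-size pieces and encrypts each piece separately using the open source cryptography library’s Fernet interface. The chunk size and function names are illustrative assumptions, not Flashpoint’s implementation:

```python
# A minimal sketch of per-chunk message encryption, assuming the
# 'cryptography' package's Fernet interface. Chunk size is arbitrary.
from cryptography.fernet import Fernet

CHUNK_SIZE = 64  # bytes per chunk; an illustrative choice, not a recommendation


def encrypt_in_chunks(message: str, key: bytes) -> list[bytes]:
    """Split a chat message into small pieces and encrypt each one
    separately, so a leaked fragment exposes at most one chunk."""
    f = Fernet(key)
    data = message.encode("utf-8")
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [f.encrypt(chunk) for chunk in chunks]


def decrypt_chunks(tokens: list[bytes], key: bytes) -> str:
    """Reassemble the original message from its encrypted chunks."""
    f = Fernet(key)
    return b"".join(f.decrypt(t) for t in tokens).decode("utf-8")


key = Fernet.generate_key()
tokens = encrypt_in_chunks("Hi, I'd like to change the address on my order.", key)
print(decrypt_chunks(tokens, key))
```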

“Additionally, appropriately storing and securing the data collected by chatbots is crucial. Companies can encrypt any stored data, and rules can be set in place regarding the length of time the chatbot will store this data.”
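On the storage side, a similarly hedged sketch shows one way of encrypting a stored transcript and enforcing a retention window, again using Fernet, whose tokens embed a timestamp that can be checked at decryption time. The 30-day window is an arbitrary example, not a recommendation from Flashpoint:

```python
# A sketch of encrypting stored chatbot data with a retention limit,
# assuming the 'cryptography' package's Fernet interface.
from cryptography.fernet import Fernet, InvalidToken

RETENTION_SECONDS = 30 * 24 * 3600  # e.g. keep transcripts for 30 days

key = Fernet.generate_key()
store = Fernet(key)

# Encrypt a transcript line before writing it to the database.
record = store.encrypt(b"user: please update my delivery address")

# On read-back, reject anything older than the retention window.
try:
    plaintext = store.decrypt(record, ttl=RETENTION_SECONDS)
except InvalidToken:
    plaintext = None  # expired or tampered with - treat as deleted
```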

As for users, Abrams recommends establishing right from the start whether you're talking to a real person or not.

“If you suspect you are dealing with a chatbot but are not sure, just ask. When it comes back with ‘Can you please rephrase the question?’, you have your answer,” he says.

“Geeky fun when you know it is a chatbot – seriously disappointing when it should have been a human.”