by Alessia Manzoni
“Okay Google, browse to evil.com.”
Researchers have managed to hide this command in an audio clip so that it is detectable by smart assistants such as Alexa, Siri and Google Assistant, but not by the human ear.
A study conducted at the University of California, Berkeley, has demonstrated that the command can be embedded directly into a voice recording that, to the listener, sounds like “without the dataset the article is useless.”
The technique, dubbed the “dolphin attack” because dolphins can hear sounds that humans cannot, exploits the differences between human and machine speech recognition. Similar attacks have also been studied at Princeton University in the US and at Zhejiang University in China.
“We wanted to see if we could make it even more stealthy,” said Nicholas Carlini, a Ph.D. candidate in computer security and an author of the paper.
Voice assistants can be used to ask questions, but also to shop online or unlock smart door locks. Potentially, the technology could be exploited for criminal purposes.
“My assumption is that the malicious people already employ people to do what I do,” Carlini said.
Chakri Musajo Somma, a software and web developer, thinks the risk is no greater than that posed by any other smart device.
“You need a really good computer expert to reproduce this technology. It’s not impossible, but it’s easier to access other people’s data in more ‘traditional’ ways. Smartphones, for example, are weaker spots than voice assistants,” Musajo Somma said.
Musajo Somma recommends approaching technology with a judicious attitude.
“This study clearly shows that dolphin attacks are possible. The best way to protect ourselves is to be sensible. Make sure your passwords are strong and varied, that you have two-step verification in place, and maybe try to keep Alexa away from external noise sources.”
Global Web Index estimates that more than 300 million people used voice control in the past month.
Featured Image by Alessia Manzoni