Voice control
Hackers could send inaudible commands to hijack speech recognition systems. Kaufdex/Pixabay

Hackers could use inaudible commands transmitted via ultrasound to compromise speech recognition systems and popular voice assistants, security researchers in China have discovered.

The technique, dubbed “DolphinAttack” by researchers from China’s Zhejiang University, could allow an attacker to hijack voice-controlled assistants such as Apple’s Siri, Amazon’s Alexa or Google Assistant.

DolphinAttack works by transmitting commands at ultrasonic frequencies above 20,000 Hz—well beyond the range humans can hear, but still within reach of the microphone in a smart device like the Google Home or the Amazon Echo and Echo Dot.
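
According to the researchers’ paper, the inaudible commands are produced by amplitude-modulating an ordinary recorded voice command onto a carrier above 20 kHz; nonlinearities in the device’s microphone then recover an audible copy that the speech recognizer accepts. The Python sketch below illustrates only that signal-generation step, with a synthetic tone standing in for a real command recording; the sample rate, carrier frequency and modulation depth are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of the signal-generation idea behind an ultrasonic voice attack.
# A spoken command is amplitude-modulated onto a carrier above 20 kHz; the
# microphone's nonlinearity is what demodulates it back into the audible band.
# The "command" here is just a synthetic tone standing in for recorded speech.

import numpy as np

SAMPLE_RATE = 192_000   # high sample rate needed to represent a >20 kHz carrier
CARRIER_HZ = 25_000     # ultrasonic carrier, above the ~20 kHz limit of human hearing
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# Stand-in for a recorded voice command, band-limited to the speech range.
baseband = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Classic amplitude modulation: carrier scaled by (1 + m * baseband).
m = 0.8  # modulation depth (illustrative)
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
ultrasonic_signal = (1.0 + m * baseband) * carrier

# Normalize to [-1, 1] so the waveform could be played through an ultrasonic transducer.
ultrasonic_signal /= np.max(np.abs(ultrasonic_signal))
```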

By transmitting commands at those undetectable frequencies, the researchers were able to direct a number of devices with built-in microphones and speech recognition to carry out actions with potentially harmful results.

The researchers were able to direct a device to visit a malicious website, which could then push malware or use a zero-day exploit to take further control of the device.

They were also able to make the device initiate outgoing phone and video calls. Such an attack could be used for surveillance, monitoring a user’s activity through the microphone of the manipulated device.

Another command involved sending text messages and emails from the device owner’s accounts. The attack could also be used to publish unauthorized posts on social media or add fake events to the user’s calendar, depending on which applications the user has connected to the voice assistant.

Finally, the researchers were able to force devices linked to the smart speaker to switch on airplane mode, effectively cutting the user off from the internet and other communications.

The hypothetical attacks were carried out against a number of devices with built-in speech recognition and voice assistants, including those manufactured by Amazon, Apple, Google, Microsoft and Huawei.

While such an attack sounds concerning, owners of devices with voice-powered assistants can rest easy knowing the researchers were demonstrating a proof of concept; the attack has not been observed in the wild.

Even to carry out the test attacks using ultrasonic frequencies, the researchers had to be within six feet of the device for its microphone to pick up the commands.

That isn’t to say a version of the attack couldn’t be carried out in the future, or that refinements couldn’t make it more powerful, but for the time being device owners have little to worry about.

For those who want extra assurance that their device can’t be manipulated by inaudible commands, simply disabling or changing the device’s wake-up phrase can mitigate the attack.