South Park Alexa ‘hack’ shows scary IoT future

Saturday, 16 September, 2017

Digital assistants like Amazon’s Alexa are finding their way into more and more homes but as their popularity increases, so do the potential dangers.

While the voice-activated obelisks may be harmless on their own, they are vulnerable both to pranks and to malicious hacks.

Alexa trolled by Cartman

It’s not only hackers who can hijack your virtual assistant, as viewers of South Park recently found out.

Their Amazon devices picked up some dubious additions to their shopping lists after the season 21 premiere.

The episode featured a storyline about the uncouth youths squeezing much mirth from an Amazon Echo.

Virtual assistants are listening to everything, even when you least expect it

Cartman and co. asked Alexa to repeat ever more obscene and scatological phrases, setting Twitter alight with complaints as real-life Alexas promptly got to work, following the commands of the animated troublemakers.

After watching the episode, some viewers found items such as “big hairy b*lls” – and a number of other items unfit for publication – added to their Amazon shopping lists.

This is not what Silicon Valley had in mind when it developed the virtual assistant. Perhaps humour-recognition software could be the next big thing.

How the ‘hack’ was done

Devices such as the Amazon Echo, Google Home and Apple's Siri are linked to everyday household items including phones, computers, even fridges and thermostats.

Typically these devices require an activation phrase before they respond to requests.

Phrases such as “Okay, Google” or “Alexa…” will wake the device and cause its virtual ears to prick up before following any orders.

‘Dolphin’ attacks

Researchers have now found ways to compromise the likes of Alexa and Siri with non-verbal commands known as ‘dog whistle’ or ‘dolphin’ attacks.

Spoken commands were converted into ultrasonic frequencies, which cannot be heard by human ears, then aimed at the target devices.

This can make devices behave in worrying ways, such as installing malware on an iPhone or feeding instructions to a car's navigation system.

Researchers were able to activate Apple and Android devices from a distance of several feet, leaving users unaware of the dangers until it was too late.

The findings were described as a “wake-up call to reconsider what functionality and levels of human interaction [should be] supported in voice controllable systems.”

Attacks like this work because the in-built microphones mistake the ultrasonic frequencies for human speech, according to similar research carried out in the US.
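To give a feel for the trick, here is a minimal sketch, in Python with NumPy, of the amplitude-modulation step the researchers describe: a baseband "voice" signal is shifted up onto an ultrasonic carrier, so the broadcast is inaudible to humans while a microphone's nonlinear response can demodulate it back into the speech band. The sample rate, the 30 kHz carrier and the toy tone standing in for speech are illustrative assumptions, not the researchers' exact parameters.

```python
import numpy as np

SAMPLE_RATE = 192_000  # assumed: a rate high enough to represent ultrasonic frequencies
CARRIER_HZ = 30_000    # assumed carrier, above human hearing (~20 kHz)

def ultrasonic_modulate(voice, sample_rate=SAMPLE_RATE, carrier_hz=CARRIER_HZ):
    """Amplitude-modulate a voice waveform onto an ultrasonic carrier.

    The output contains energy only around the carrier and its sidebands,
    so people hear nothing; a nonlinear microphone can recover the voice.
    """
    t = np.arange(len(voice)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: (1 + message) * carrier shifts the message up around carrier_hz.
    return (1.0 + voice) * carrier

# Toy "voice": one second of a 440 Hz tone standing in for a spoken command.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
voice = 0.5 * np.sin(2 * np.pi * 440 * t)
modulated = ultrasonic_modulate(voice)
```

A quick spectrum check of `modulated` shows its dominant frequency sitting at the ultrasonic carrier rather than in the audible band, which is the whole point of the attack.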

Internet of Insecure Things 

So-called Internet of Things devices are notoriously vulnerable to hacking.

In August 2017 the Chief Constable of Durham Police suggested a security rating for WiFi-enabled smart devices, along the lines of energy efficiency ratings for consumers.

A spokesperson for Google said “we take user privacy and security very seriously…and we’re reviewing the claims.”

Aran Burton


Aran is a technology journalist with an interest in consumer issues.