Your Alexa speaker can be hacked with harmful audio tracks. And lasers.




In Stanley Kubrick's 1968 film 2001: A Space Odyssey, a self-aware artificial intelligence system called HAL 9000 turns murderous and tries to kill the crew during a space mission. In 2019, A.I. assistants like Apple's Siri, Amazon's Alexa, and Google Assistant have yet to display HAL's willful cruelty toward humans. That doesn't mean, however, that they can't be used against us.

Some recent reports estimate that around one-third of American adults own a smart speaker. And those speakers have expanded their capabilities well beyond picking music or setting timers: they do everything from dispensing medication information to controlling our smart homes.

So what can go wrong? Two recent pieces of research offer examples of the ways malicious actors (or, in this case, researchers hypothetically playing the part of malicious actors) can exploit fundamental weaknesses in today's smart assistants. The results aren't pretty.

Okay, so it's not exactly HAL 9000 gone rogue, but it's a reminder that there's plenty to worry about when it comes to smart speaker security, and that smart assistants can turn out to be not so smart in some cases.

Adversarial Attacks on Alexa

The first example concerns what are called adversarial example attacks. You may remember these strange research attacks from a few years ago, when they first reared their heads in the context of image recognition systems.

Those early adversarial attacks exploited an odd weakness of image recognition systems, which classify images by searching for familiar elements that help them make sense of what they're seeing. Taking advantage of this weakness, researchers showed that a state-of-the-art image classifier could be fooled into confusing a 3D-printed turtle with a rifle. Another demo showed that adding a tiny patch of visual noise to the corner of an image of a lifeboat led the classifier to label it, with near-total confidence, a Scottish Terrier.

Both attacks demonstrated unusual strategies that wouldn't fool a human for a second, yet were able to deeply confuse an A.I. Now, researchers at Carnegie Mellon University have shown that the same trick works on audio.
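The researchers' exact attack isn't spelled out here, but the general mechanism behind adversarial examples is well documented. Below is a minimal, hypothetical sketch of the fast gradient sign method using PyTorch; the model, input, and label are placeholders rather than any of the systems mentioned above, and the real attacks are considerably more sophisticated.

```python
# Minimal fast-gradient-sign-method (FGSM) sketch: nudge an input just
# enough to push it across the model's decision boundary. The classifier,
# input tensor, and label here are placeholders, not the actual
# turtle/rifle or wake-word attack code.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of x that the model is more
    likely to misclassify."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Step in the direction that increases the loss, scaled by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Usage (hypothetical): x_adv = fgsm_example(classifier, image, label)
# To a human, x_adv looks virtually identical to the original image,
# but the classifier's prediction can flip entirely.
```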

"The majority of models used in commercial speech recognition products [neural network] are the same as those used for image recognition," said Juncheng Li, one of the project's researchers, to Digital Trends. "We were motivated to ask if there are the same weaknesses of these models in the audio industry. We wanted to find out if we could compute a similar style of an opposing example to exploit the weakness of the decision boundary of a language-trained neural network and a watch word model. "

Focusing on the on-device neural network whose sole purpose in its artificial life is to listen for the wake word in an Amazon Echo, Li and his colleagues were able to develop a special audio cue that stops Alexa from activating. While this particular piece of music is playing, Alexa can't recognize its own name: during playback, Alexa responded to its name only 11% of the time. That's significantly lower than the 80% of the time it recognizes its name while other songs are playing, or the 93% of the time when no audio is playing at all. Li believes the same approach would apply to other A.I. assistants, too.
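To make the reported figures concrete, here is a hypothetical measurement harness showing how detection rates like 11%, 80%, and 93% might be gathered; the detector function and audio clips are stand-ins, not the researchers' or Amazon's actual code.

```python
# Hypothetical harness for measuring how often a wake-word detector still
# fires while different background audio is mixed in. Everything here
# (detector, clips, gain) is a placeholder for illustration only.
import numpy as np

def mix(voice: np.ndarray, background: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Overlay a background track onto a voice clip of the same length."""
    n = min(len(voice), len(background))
    return voice[:n] + gain * background[:n]

def detection_rate(detector, voice_clips, background, gain=0.5):
    """Fraction of wake-word clips the detector still recognizes."""
    hits = sum(1 for clip in voice_clips if detector(mix(clip, background, gain)))
    return hits / len(voice_clips)

# Usage (hypothetical):
# rate_adv   = detection_rate(wake_model, test_clips, adversarial_track)
# rate_music = detection_rate(wake_model, test_clips, ordinary_song)
# rate_quiet = detection_rate(wake_model, test_clips, np.zeros_like(test_clips[0]))
# A successful denial-of-service track drives rate_adv far below the others.
```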

Stopping your Amazon Echo from hearing your voice might amount to little more than an irritation. However, Li points out that the discovery could have other, more malicious applications. "What we did [with our initial work] was a denial-of-service attack, which means we're exploiting the false negatives of the Alexa model," Li said. "We fool it into believing that something positive is actually negative. But there's a reverse approach that we're still working on: we try to get Alexa to generate a false positive. That is, when there is no Alexa wake word, we want to wake it up by mistake. That could potentially be more malicious."

Frickin' Laser Attacks

While the Carnegie Mellon researchers focused on mysterious audio cues, another recent project took a different approach to seizing control of your smart speaker: lasers. Researchers from Japan and the University of Michigan have shown, in a paper funded in part by the U.S. Defense Advanced Research Projects Agency (DARPA), that they can hack smart speakers without saying a word (or singing a note), so long as they have a laser on hand.

"The idea is that an attacker can use a flickering laser to make smart speakers and voice assistants recognize voice commands," Benjamin Cyr, a researcher at the University of Michigan, told Digital Trends. "A microphone normally works by picking up changes in air pressure caused by sound. But we found that you can make a microphone respond to a laser if you change the intensity of the laser beam's light in the same pattern as the air pressure of the sound. The microphone then reacts as if it were 'hearing' the sound."

To give an example of how this might work, an attacker could record a specific command such as "Okay Google, turn off the lights." By encoding that audio signal onto a laser beam and aiming it at a smart speaker, the attacker can make the device react as if someone had actually spoken the command. In tests, the researchers showed they could command a variety of A.I. assistants from up to 100 meters away by focusing the laser with a telephoto lens. While an attacker would still need a clear line of sight to the target smart speaker, the fact that the hack could be carried out from outside a home makes it a potential security risk.
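Conceptually, the injection boils down to amplitude modulation: the command's waveform is mapped onto the brightness of a laser. The sketch below is a rough illustration rather than the researchers' actual tooling; the recording file name and the laser-driver interface are assumptions.

```python
# Rough sketch of the amplitude-modulation idea behind light-based command
# injection: the recorded command's waveform shapes the intensity of a
# laser so a MEMS microphone responds as if it were hearing sound.
# The file name and driver interface are hypothetical.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("okay_google_turn_off_the_lights.wav")  # hypothetical clip
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio[:, 0]                    # use a single channel
audio /= np.max(np.abs(audio))             # normalize to [-1, 1]

bias = 0.5                                 # DC operating point of the laser driver
depth = 0.4                                # modulation depth; keep bias + depth <= 1
intensity = bias + depth * audio           # laser brightness envelope over time

# In a real setup, 'intensity' would be sent through a DAC to a laser
# current driver so the beam's brightness tracks the air-pressure pattern
# of the original spoken command.
```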

"It depends on what you do enable or enable Can you only run the performance with your voice, and what other devices are connected to your smart speaker? said Sara Rampazzi of the University of Michigan. "If an opponent can play music on your behalf by inserting voice commands into your speaker, this is not a threat. On the other hand, we show in our work that in [cases where] it is possible for a tech savvy user who has connected the loudspeaker to many systems to unlock [to] intelligent locks that are connected to a smart speaker to power the engine Launching Cars An app that connects to your phone or buys things online without permission. "

Vulnerabilities to fix

Every device is open to attack, of course. There's malware that lets attackers hijack other people's computers and smartphones, and it can prove incredibly harmful in its own way. In other words, smart speakers are hardly unique. And if people weren't willing to give up their speakers when they learned that companies were listening in on a number of user recordings, they probably won't do so because of two (admittedly clever) research projects.

A.I. assistant technology isn't going anywhere. In the years to come it will only become more widespread, and, for its part, more useful. But by highlighting some of its less robust security features, the researchers behind these two projects have shown that there's plenty more users need to be aware of when it comes to possible attacks. More importantly, these are vulnerabilities that companies like Amazon and Google need to work hard to eliminate.

"As we drive home automation and new ways of interacting with systems, we need to think about such gaps and clear them out carefully," he told Daniel Genkin, one of the researchers at the AI ​​Wizard Laser Hack Project. "Otherwise, such problems will occur again and again."

Getting people to share their secrets with a conversational assistant requires a lot of trust. If the technology is ever to realize its enormous potential, it's crucial that users are given every reason to trust it. Clearly, there's still some way to go.
