
The terrible truth about Amazon Alexa and privacy

Illustration: Gizmodo / Amazon

This week I read a log of everything I've ever said to Alexa, and it felt like reading an old diary. Then I remembered that the things I told Alexa in private are stored on an Amazon server and may have been read by an Amazon employee. That's what it takes to make Alexa better, the company says. For many people, though, it is not obvious that strangers reviewing their seemingly private voice commands is anything other than surveillance. Alexa, these people say, is a spy hiding in a speaker.

The debate over whether Alexa, or any voice assistant, is spying on us is years old at this point and is not going away. Privacy lawyers have filed a complaint with the Federal Trade Commission (FTC) alleging that these devices violate federal wiretap law. Journalists have examined the dangers of always-on microphones and artificially intelligent speech assistants. Skeptical tech bloggers like me have argued that these devices give companies more power over users than people already weary of data breaches may realize. Recent news about Amazon staff reviewing certain Alexa commands suggests the situation is worse than we thought.

It feels like Alexa and other voice assistants are spying on us because that is how the system works. These systems rely on machine learning and artificial intelligence to improve over time. The young technology underpinning them is still prone to error, and even if it were perfect, the data-hungry companies that built it are constantly thinking of new ways to profit from users. And where imperfect technology and powerful corporations collide, the government often struggles so badly to understand what is going on that regulation seems an impossible solution.

The situation is not hopeless, though. This technology could be really cool, if we pay more attention to privacy. Which is pretty complicated.

Endless Errors

A fundamental problem with Alexa and other voice assistants is that the technology tends to fail. Devices like the Echo are equipped with always-on microphones that are supposed to record only when you want them to listen. While some devices require pressing a physical button before Alexa listens, many are designed to start recording after you say the wake word. Anyone who has used Alexa for a while knows that it does not always work that way. Sometimes the software hears a random noise, decides it was the wake word, and starts recording.

The moment I read through my history of Alexa commands on Amazon's website, it became clear just how much of a problem false positives are. Most entries are boring: "Hey Alexa." "Show me an omelette recipe." "What's happening?" But scattered among the everyday drivel was an unsettling series of entries that read, "Text not available – audio was not intended for Alexa." Every time I saw it, I did a double take and read it again in my head: "Audio was not intended for Alexa." These are the things Alexa heard that it should not have heard: recordings that were sent to Amazon's servers and then flagged because the machine decided, after the fact, that the wake word had not actually been spoken. Alexa recorded audio when the user issued no command. In other words, it's a bug.

Alexa really had trouble last August.
Inevitably Flawed Technology

Behind the very sophisticated computer program that can understand everything you say hides a very simple program trained to do one thing: listen for a wake word, then send everything that follows to the smarter computer. The problem is that the simple computer often gets it wrong, and people do not always know that a recording device is in the room. So we get Echo-driven nightmares like the Oregon couple whose device accidentally sent a recording of an entire conversation to a friend. Amazon has been working on improvements to reduce its assistants' error rate, but it is hard to imagine the system will ever be flawless.
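
That two-stage design, a cheap always-on detector gating a smarter cloud service, can be sketched in a few lines of Python. This is a toy illustration, not Amazon's actual code; the function names and the naive substring matching are invented for the example:

```python
def local_detector(transcript: str) -> bool:
    """Cheap on-device check standing in for the acoustic wake-word model.
    Its naive matching misfires: 'Alexander' contains 'alexa'."""
    return "alexa" in transcript.lower()

def cloud_verify(transcript: str) -> str:
    """Smarter server-side model that re-checks the audio after it arrives."""
    words = transcript.lower().replace(",", "").split()
    if words and words[0] == "alexa":
        return "COMMAND: " + transcript
    # Flagged after the fact, but the audio was already uploaded.
    return "Text not available - audio was not intended for Alexa"

def pipeline(transcript: str):
    if not local_detector(transcript):
        return None  # device stays quiet, nothing is uploaded
    # Everything past this point has already left the room.
    return cloud_verify(transcript)

print(pipeline("Alexa, show me an omelette recipe"))
print(pipeline("tell Alexander I said hi"))  # false positive: uploaded, then flagged
```

The point of the sketch is the ordering: the upload happens on the simple model's say-so, and the smarter model can only flag the mistake after the recording already exists on a server.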

"That's the scary thing: there's a microphone in your house, and you do not have final control over when it gets activated," said Dr. Jeremy Gillula, Director of Technical Projects at the Electronic Frontier Foundation (EFF). "From my perspective, that's problematic from a privacy standpoint."

Incidents like that are bad luck, though more common than most people would like. Perhaps worse than the glitches is the deliberate, behind-the-scenes workflow that exposes users' voice-assistant interactions to strangers. Bloomberg recently reported that a team of Amazon employees has access to the geographic coordinates of Alexa users, data collected to improve the assistant's capabilities. That revelation came just weeks after Bloomberg also reported that thousands of Amazon employees around the world analyze users' Alexa commands to train the software. They can overhear compromising situations, and in some cases, Amazon employees make fun of what people say.

Amazon has pushed back hard against these reports. A company spokesperson told me that Amazon only reviews "an extremely small number of random customer interactions" to improve the customer experience, and that these recordings are stored in a proprietary system that uses multi-factor authentication to limit access to the data. Bloomberg, again, suggests the team numbers in the thousands.

But Alexa and other artificially intelligent voice assistants require some human validation to prevent future mistakes and improve functionality, and Amazon is not the only company that uses people to review voice commands. Google and Apple also employ teams to check what users say to their voice assistants, and to train the software to better understand people and support new features. Sure, the human element of these seemingly computer-driven services is creepy, but it is also an integral part of developing these technologies.

"In the end, for really difficult cases, you need a human to tell you what's going on," said Dr. Alex Rudnicky, a computer scientist at Carnegie Mellon University, in an interview. Rudnicky has been developing speech recognition software since the 1980s and has led teams in the Alexa Prize, an Amazon-sponsored artificial intelligence competition. While maintaining that humans are needed to improve natural language processing, Rudnicky also believes it is unlikely that a voice command could be traced back to a particular person.

"You're one in ten million," Rudnicky said. "It's pretty hard to argue that someone will find it, trace it back to you, and find out things about you that you do not want them to know."

That idea does not make the thought of a stranger reading your daily musings, or knowing your location history, feel any less scary. It may be rare for a voice assistant to record me by accident, but the systems still do not seem smart enough to wake up with anything close to 100 percent accuracy. The fact that Amazon catalogs all of Alexa's recordings, accidental or otherwise, and makes them available makes me feel awful.

The privacy issue that nobody wants to fix

In recent conversations, half a dozen technology and privacy experts told me that we need stricter privacy laws to solve some of these issues with Alexa. How much of your personal information your Echo collects is governed by Amazon's terms and conditions, and the United States has no strict federal privacy law comparable to Europe's General Data Protection Regulation (GDPR). In other words, the companies that build voice assistants more or less make the rules.

Which brings me back to some basic questions. Who is looking out for users? Why can't I simply opt out of Amazon recording my commands, instead of digging through privacy settings in search of ways to stop my data from being sent to Amazon? And why are my opt-out options so limited?

In Alexa's privacy settings, you can prevent Amazon from using your recordings to develop new features and to improve transcriptions. You cannot opt out of Amazon keeping your recordings for other purposes.

Even with those toggles switched off, Amazon still retains my Alexa recordings. Screenshot: Gizmodo

Settings like these put the burden of privacy on the user. Which raises the question: why can't these companies fully anonymize my interactions with their voice assistants?

Apple seems to be trying. When you talk to Siri, your commands are encrypted before being sent to the company along with a random Siri identifier that is not linked to your Apple ID. That is why you cannot open your privacy settings on an iPhone and see what you have said to Siri. Not all Siri features require your device to send information to Apple's servers at all. Apple does use records of Siri commands to train the software, since artificially intelligent software must be trained to improve. The fact that Apple does not associate specific commands with specific users may help explain why so many people consider Siri terrible. Then again, it may make Siri the best choice if privacy is what you want in a voice assistant.
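
Apple has not published its implementation, but the decoupling it describes, a random identifier generated independently of the account, can be sketched like this (all names here are hypothetical, not Apple's API):

```python
import uuid

def make_account_id(email: str) -> str:
    # The account identifier is directly tied to who you are.
    return "user:" + email

def make_assistant_id() -> str:
    # A random, pseudonymous identifier in the style Apple describes for Siri:
    # generated independently of the account, so voice records keyed by it
    # cannot be joined back to the account database.
    return uuid.uuid4().hex

account = make_account_id("alice@example.com")
device = make_assistant_id()

# The server stores (device, command) pairs; nothing links `device` to `account`,
# which is why there is no per-user history page to show you, and no record
# for an employee to trace back to a person.
voice_log = [(device, "what's the weather"), (device, "set a timer")]
```

The trade-off follows directly from the design: without the join between the two identifiers, the company cannot personalize per account, but a leaked voice log identifies nobody.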

This is the point in the debate where Tim Cook would like to remind you that Apple is not a data company. Companies like Google and Amazon, he says, turn your personal information into products they can sell to advertisers or use to sell you more goods. The Apple CEO made the same argument earlier this year, when he wrote a magazine column calling for federal data protection legislation.

The idea is slowly gaining ground. In January, the Government Accountability Office issued a report calling on Congress to pass comprehensive internet privacy legislation. The report joined a chorus of privacy advocates who have long argued that the United States needs its own version of the GDPR. In March, the Senate Judiciary Committee heard testimony from several people who support federal data protection laws. It is far from clear, however, whether Congress will act on the idea.

"Speech technology has gotten so good that it's important to be concerned about privacy," said Dr. Mari Ostendorf, a professor of electrical engineering who studies speech technology at the University of Washington. "And I think companies are probably more worried about it than the US government."

One can hope that Amazon will at least overhaul its approach to privacy and voice assistants. At the moment, the general public is only beginning to discover the many ways devices like the Echo can record our lives without our permission or share our personal information with strangers. The recent Alexa controversies have only scratched the surface of how a world full of always-on microphones could become a privacy nightmare.

The problem is that companies with data-driven business models have every incentive to collect as much information about their users as possible. Every time you use Alexa, for example, Amazon gets a sharper picture of your interests and behavior. When I asked Amazon how it uses this data, the company gave me a strange example:

"When a customer uses Alexa to make a purchase or interacts with other Amazon services like Amazon Music," an Amazon spokesperson said, "we can use the fact that the customer has taken this action in the same way we would if the customer had done so through our website or one of our apps – for example, to make product recommendations."

There is evidence that these kinds of recommendations could become more sophisticated in the future. Amazon has patented technology that can interpret your emotions based on the tone and volume of your voice. According to the patent, this hypothetical version of an Alexa-like technology could determine whether you are happy or sad and serve "highly targeted audio content, such as audio advertisements or promotions." One could argue that the only thing holding Amazon back from releasing an ad-supported Alexa is the potential for blowback from Echo owners. The government probably will not stop it.

The Terrifying Future

A future without more oversight could turn Philip K. Dickian very quickly. I recently spoke with Dr. Norman Sadeh, a professor of computer science at Carnegie Mellon, who paints a bleak picture of what a future without better privacy protections could look like.

"At the end of the day, all these speakers will be tied into a single system," Sadeh said. "So Amazon could use voice recognition to identify you, and as a result there could be extremely extensive profiles of who you are, what you do, what your habits are, and any other attributes that you would not necessarily want to disclose to them."

Sadeh suggests Amazon could build a business out of knowing who you are and what you like from the mere sound of your voice. Unlike facial recognition, which powers most dystopian scenarios, voice recognition could work without your ever seeing it. It could work over phone lines. In a future where internet-connected microphones occupy ever more rooms, such a system could always be listening. Several of the researchers I talked to raised this dystopian idea and worried about its imminent arrival.

Such a system is hypothetical for now, but when you think about it, all the parts are there. Tens of millions of devices with always-on microphones sit in private homes and public spaces across the country. They are permitted to listen to and record what we say at certain times. These artificially intelligent machines are also error-prone and only get better by listening to us more, sometimes with humans correcting their mistakes. Without any government oversight, who knows how the system will evolve from here.

We wanted a better future than this, right? Talking to computers seemed really cool in the '90s, an integral part of the Jetsons lifestyle we were promised. But for now, it seems an unavoidable truth that Alexa and other assistants will spy on us, whether we like it or not. In a way, the technology is designed so that spying cannot be avoided, and the accidents will probably only get worse.

Maybe it's foolish to believe, but perhaps Amazon and the other companies that make voice assistants actually do care about privacy. Perhaps they are working to fix the problems caused by error-prone technology, and perhaps they are also working to address the unease people feel when they see devices like the Echo recording them, sometimes without their noticing. Heck, maybe Congress is working on laws that would hold these companies accountable.

The future of voice-driven computing does not have to be so dystopian. If everyone got this right, talking to our gadgets could change the way we interact with technology in the most profound way. At the moment, that does not seem to be where we are headed. And ironically, the fewer people are involved in developing technology like Alexa, the worse Alexa will be.
