
Why Google believes that machine learning is its future



Sundar Pichai, CEO of Google, speaks during the Google I/O Developers Conference on May 7, 2019.

David Paul Morris / Bloomberg via Getty Images

One of the most interesting demos at this week's Google I/O keynote featured a new version of Google's voice assistant that is due out later this year. A Google employee asked the Google Assistant to bring up her photos and show the ones with animals. She tapped one and said, "Send it to Justin." The photo was dropped into the messaging app.

From there, things got more impressive.

"Hello Google, send an e-mail to Jessica," she said. "Hello Jessica, I just came back from Yellowstone and totally fell in love with her." The phone rewrote her words and put "Hi Jessica" on a separate line.

"Take on the Yellowstone adventures," she said. The assistant understood that "Yellowstone Adventure" should be included in the subject line and not in the body of the message.

Then, without any explicit instruction to switch modes, the woman dictated the rest of the message. Finally she said "send it," and Google's assistant did.

Google is also working to expand the assistant's understanding of personal references, the company said. If a user says, "Hey Google, what's the weather like at Mom's house?", Google can work out that "Mom's house" refers to the home of the user's mother, look up her address, and provide a weather forecast for her city.

Google says the next-generation assistant is coming to "new Pixel phones" – the phones that come after the current Pixel 3 line – later this year.

Obviously, there's a big difference between a canned demo and a shipping product. We'll have to wait and see whether typical interactions with the new assistant work this well. But Google seems to be making steady progress toward the dream of a virtual assistant that can competently handle even complex tasks by voice.

Many of the announcements at I/O followed this pattern: not the launch of major new products, but the use of machine learning techniques to make a range of Google products gradually more sophisticated and helpful. Google also touted a number of under-the-hood improvements to its machine learning software, which could benefit both Google-built and third-party machine learning applications.

In particular, Google is working hard to move machine learning out of the cloud and onto people's mobile devices. This should allow ML-powered applications to be faster, more private, and able to work offline.

Google has invested heavily in machine learning.

A circuit board with Google's Tensor Processing Unit.

Machine learning experts trace the start of the current deep learning boom to a paper published in 2012, known as "AlexNet" after its lead author, Alex Krizhevsky. The authors, a trio of researchers from the University of Toronto, entered the ImageNet competition, which challenged entrants to classify images into one of a thousand categories.

The ImageNet organizers supplied more than a million labeled sample images to train the networks. AlexNet achieved unprecedented accuracy using a deep neural network with eight trainable layers and 650,000 neurons. The team was able to train such a large network on so much data because it figured out how to harness consumer-grade GPUs, which are designed for large-scale parallel processing.
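For a sense of what "eight trainable layers" means in practice, here is a minimal PyTorch sketch of an AlexNet-style layer stack. It is an illustration, not the authors' code: dropout, local response normalization, and the original split of the model across two GPUs are omitted.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """AlexNet-style network: eight trainable layers (five conv, three fully connected)."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),  # conv1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),           # conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),          # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),          # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),          # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096),  # fc6
            nn.ReLU(inplace=True),
            nn.Linear(4096, 4096),         # fc7
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),  # fc8: one output per ImageNet category
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # 224x224 input -> 256 x 6 x 6 feature maps
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Training a network of this size on a million images was only practical on GPUs.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AlexNetSketch().to(device)
```

Even in this simplified form, the model has roughly 60 million parameters, which is why training it on a million images only became feasible once the work was moved onto GPUs.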

AlexNet demonstrated the importance of what you might call the three-legged stool of deep learning: better algorithms, more training data, and more computing power. Over the last seven years, companies have raced to build up their capabilities on all three fronts, yielding better and better performance.

Google has led this charge almost from the beginning. Two years after AlexNet won ImageNet in 2012, Google entered the competition with an even deeper neural network and took the top prize. The company has hired dozens of top machine learning experts, and its 2014 acquisition of the deep learning startup DeepMind has kept it at the forefront of neural network design.

The company also has unrivaled access to large datasets. A 2013 paper described how Google used deep neural networks to recognize address numbers in tens of millions of images captured by Google Street View.

Google has also invested heavily in hardware. In 2016, the company announced that it had built a custom chip called the Tensor Processing Unit, designed specifically to accelerate the computations used by neural networks. "Although we had considered building a custom chip as early as 2006, the situation became urgent in 2013," Google wrote in 2017. "That's when we realized that the fast-growing computing demands of neural networks could require us to double the number of data centers we operate."

This is why Google I/O has focused so heavily on machine learning for the past three years: the company believes that these assets – a small army of machine learning experts, vast amounts of data, and proprietary silicon – put it in an ideal position to exploit the opportunities machine learning affords.

This year's Google I/O didn't actually feature many major new ML-related product announcements, because the company has already incorporated machine learning into many of its key products. Android has had voice recognition and the Google Assistant for years. Google Photos has a long history of impressive ML-based search features. And last year Google introduced Google Duplex, which makes reservations on a user's behalf with a strikingly realistic human voice generated by software.

Instead, the I/O presentations focused on two areas of machine learning: shifting more machine learning activity onto smartphones, and using machine learning to help disadvantaged people – including people who are deaf, illiterate, or suffering from cancer.

Squeezing machine learning onto smartphones

Justin Sullivan / Getty Images

Early efforts to improve the accuracy of neural networks involved making them deeper and more complicated. This approach produced impressive results, but it has a major drawback: the networks often become too complex to run on smartphones.

Engineers have mostly dealt with this by moving the computation to the cloud. Earlier versions of Google's and Apple's voice assistants recorded audio and uploaded it to company servers for processing. That worked well enough, but it had three significant drawbacks: higher latency, weaker privacy protection, and the fact that the feature only worked online.

So Google has been working to shift more and more of this computation onto the device itself. Current Android devices already have basic on-device speech recognition, but Google's virtual assistant has required an Internet connection. Google says that will change this year with a new offline mode for the Google Assistant.
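Google hasn't spelled out how the offline Assistant is built, but the general recipe for running a trained model on a phone rather than a server is well established. Below is a minimal sketch using TensorFlow Lite; the `./speech_model` directory, the output file name, and the placeholder input are hypothetical stand-ins for illustration, not Google's actual Assistant pipeline.

```python
import numpy as np
import tensorflow as tf

# Convert a trained model for on-device use. "./speech_model" is a
# hypothetical SavedModel directory, not Google's real Assistant model.
converter = tf.lite.TFLiteConverter.from_saved_model("./speech_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("speech_model.tflite", "wb") as f:
    f.write(tflite_model)

# On the device, the compact model runs locally -- no server round trip.
interpreter = tf.lite.Interpreter(model_path="speech_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Placeholder input with the model's expected shape and dtype.
audio_features = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], audio_features)
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

Post-training quantization typically shrinks weights from 32-bit floats to 8-bit integers, cutting model size roughly fourfold – part of what makes fast, private, offline inference practical on a phone.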

This new capability is a big reason for the lightning-fast response times demonstrated in this week's demo. Google says the new assistant will be "up to ten times faster" for certain tasks.

