You probably did not expect machine learning here, but here we are anyway.
Google announced "Move Mirror" in a blog post on July 19th. It's a machine learning experiment that matches your pose with photos of other people striking the same pose.
The reason for its existence? Fun, mainly. Google also wanted to "make machine learning more accessible to programmers and decision makers," while encouraging them to leverage the technology in their own applications.
The "mirror" uses PoseNet, an open-source pose estimation model from Google that can detect body poses, together with TensorFlow.js, a JavaScript library for running machine learning models in the browser.
In the search for a matching image, the experiment compares your "pose information" against its library of photos.
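One common way to compare two poses, and likely close in spirit to what such a matcher does, is to treat each pose's keypoint coordinates as a vector and score candidates by cosine similarity. The sketch below is purely illustrative: the function names and toy pose data are my own, not Google's implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length pose vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query_pose, candidate_poses):
    """Return the index of the candidate pose most similar to the query."""
    return max(range(len(candidate_poses)),
               key=lambda i: cosine_similarity(query_pose, candidate_poses[i]))

# Toy pose vectors: flattened (x, y) pairs for a few keypoints.
query = [0.5, 0.2, 0.4, 0.6, 0.6, 0.6]
candidates = [
    [0.9, 0.1, 0.1, 0.9, 0.2, 0.3],       # a very different pose
    [0.5, 0.25, 0.42, 0.58, 0.61, 0.62],  # a near-identical pose
]
print(best_match(query, candidates))  # → 1 (the near-identical pose wins)
```

A real system would first normalize each pose (position and scale) so that the same pose photographed at a different spot in the frame still scores as similar.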
I gave it a real challenge by dancing like a fool, and it responded with a young lady in a white dress.
Of course, using computers to detect poses is nothing new: motion capture technology has been used for decades to record real human movement for blockbusters, and video games have used it too, as Microsoft's Kinect, a 3D imaging device, shows. But those methods require expensive hardware. The triumph here is that everything happens in the browser, with nothing more than a webcam.
Google does not send any of your images to its servers; all of the image recognition happens locally in the browser. The technology also cannot recognize who is in the picture, because there is "no personally identifiable information related to the pose estimate."
If you're interested in the incredible amount of work that went into creating Move Mirror, the TensorFlow blog has a comprehensive overview of the challenges and coding hurdles the team overcame.
You can try it for yourself on the Google experiment page if you have a webcam.