Researchers at the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology, also in Moscow, described the feat in a paper posted this week to arXiv, an online academic pre-print service. They said they could animate one or more photos of a person by first training an AI system on a large set of celebrity videos so that it learns key facial landmarks. The system can then combine that knowledge with one or more images of a person to create a convincing "talking head"-style video of them.
A video the researchers posted to YouTube this week showed several examples of how compelling the results can look, as well as how much work remains to be done. Impressively animated versions of the physicist Albert Einstein, the actress Marilyn Monroe and the surrealist painter Salvador Dali were generated from iconic images of them.
But each lacked something: Einstein's voluminous hairstyle did not quite move with his head, Dali's matchstick-thin mustache was cut short, and Monroe's famous mole was missing from her cheek.
The work is similar to deepfakes, a combination of the terms "deep learning" and "fake" that refers to convincing fake videos and audio files created with cutting-edge and relatively accessible AI technology. The research
uses the same AI technique that underlies deepfakes: a machine learning method called GANs, or generative adversarial networks. But it is different, as deepfakes are typically produced using video of a target along with video of another person acting the way the target will in the finished clip, as when actor and comedian Jordan Peele put words in former President Barack Obama's mouth.
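The core idea behind a GAN is two models trained against each other: a generator that produces fake samples and a discriminator that tries to tell real from fake. The snippet below is a toy, hand-rolled illustration of that adversarial loop on one-dimensional data, written in plain NumPy. It is not the researchers' system or the deepfake pipeline; every variable, learning rate, and distribution in it is invented purely for illustration.

```python
import numpy as np

# Toy GAN sketch: a linear generator learns to mimic data drawn from
# N(4, 1), while a logistic-regression discriminator learns to tell
# real samples from generated ones. All numbers are illustrative.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: G(z) = a*z + c  (starts far from the data distribution)
a, c = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + b)
w, b = 0.1, 0.0
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # real samples
    z = rng.normal(size=32)                # noise fed to the generator
    fake = a * z + c                       # generated samples

    # Discriminator step: minimize binary cross-entropy
    # with labels real=1, fake=0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: minimize -log D(fake) (non-saturating loss).
    # Chain rule: dL/dfake = -(1 - D(fake)) * w; dfake/da = z; dfake/dc = 1.
    d_fake = sigmoid(w * fake + b)
    g_common = -(1.0 - d_fake) * w
    a -= lr * np.mean(g_common * z)
    c -= lr * np.mean(g_common)

print(f"generator offset c after training: {c:.2f}")
```

As training proceeds, the generator's offset `c` drifts from 0 toward the real data's mean of 4, because fooling the discriminator requires producing samples that look like the real distribution. Real systems replace these two linear models with deep neural networks, but the adversarial structure is the same.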
The spread of doctored videos has raised concern among everyone from political leaders to the US intelligence community, with fears that they could be used to mislead voters. Such videos do not need to be made with the latest technology to be effective: a manipulated video of House Speaker Nancy Pelosi that went viral this week had simply been slowed down to make her appear to slur her words after a meeting with President Donald Trump.
The researchers' work is still in its infancy: the AI system was trained only to model a person's head, neck, and part of the shoulders. And while a clip made from a single reference photo of a woman looked plausible (albeit in low resolution), clips made with eight and 32 images of her looked increasingly realistic.
Siwei Lyu, who studies and detects deepfakes as director of the Computer Vision and Machine Learning Laboratory at the University at Albany, SUNY, told CNN Business that the research could make it easier to create deepfakes with less data than is currently needed. Today, that typically means more than 30 seconds of video of the person you want to manipulate, plus footage of another person performing the movements you want.
"The disadvantage is, without sufficient data, the quality of the synthesis is limited," he said.
Case in point: he, too, noticed Monroe's missing mole.