In the meantime, the research team used Unreal Engine 4 to generate a so-called "semantic map" of a scene, which essentially labels every pixel on the screen. Some pixels were sorted into the "car" bucket, others into categories like "trees" or "buildings" -- you get the idea. These clumps of pixels were also given clearly defined edges, so Unreal Engine effectively produced a "sketch" of the scene that was then fed into NVIDIA's neural model. From there, the AI applied the car imagery it had learned to the clump of pixels labeled "car," and repeated the same process for every other classified object in the scene. That may sound tedious, but it worked faster than you might think.
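To make the idea concrete, here is a minimal sketch (not NVIDIA's actual pipeline; the class names and function are purely illustrative) of what a semantic map looks like to a conditional generator: a grid of per-pixel class IDs, expanded into a one-hot tensor that the network can consume.

```python
import numpy as np

# Illustrative class labels for a street scene (not NVIDIA's label set)
CLASSES = ["road", "car", "tree", "building"]

def one_hot_semantic_map(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Turn an HxW map of per-pixel class IDs into an HxWxC one-hot tensor,
    the kind of 'sketch' a conditional image generator is trained on."""
    h, w = label_map.shape
    one_hot = np.zeros((h, w, num_classes), dtype=np.float32)
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], label_map] = 1.0
    return one_hot

# A tiny 2x3 "scene": a car, a tree and a building above a row of road
label_map = np.array([[1, 2, 3],
                      [0, 0, 0]])
sketch = one_hot_semantic_map(label_map, len(CLASSES))
print(sketch.shape)  # (2, 3, 4)
print(sketch[0, 0])  # the pixel labeled "car" -> [0. 1. 0. 0.]
```

A real system works on full-resolution video frames with many more classes, but the principle is the same: the generator never sees raw pixels from the game engine, only this labeled layout.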
The NVIDIA team also used this new video-to-video synthesis technique to make a team member dance like PSY, digitally at least. Producing that model took the same kind of work as the driving simulation, except this time the AI's job was to figure out the dancer's poses, turn them into rudimentary stick figures, and render another person's likeness on top of them.
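The stick figure plays the same role as the semantic map above: a stripped-down conditioning input. A minimal sketch (again illustrative, with made-up joint names and a toy renderer, not NVIDIA's code) of rasterizing detected joints into such a figure:

```python
import numpy as np

# Hypothetical 2D joint positions (x, y) from a pose detector
POSE = {"head": (5, 1), "neck": (5, 3), "hip": (5, 6),
        "l_foot": (3, 9), "r_foot": (7, 9)}
BONES = [("head", "neck"), ("neck", "hip"),
         ("hip", "l_foot"), ("hip", "r_foot")]

def draw_stick_figure(pose, bones, size=12):
    """Rasterize bones onto a blank canvas: the rudimentary stick figure
    that conditions the generator in place of a semantic map."""
    canvas = np.zeros((size, size), dtype=np.uint8)
    for a, b in bones:
        (x0, y0), (x1, y1) = pose[a], pose[b]
        steps = max(abs(x1 - x0), abs(y1 - y0))
        # Sample evenly along the bone and light up each pixel it crosses
        for t in np.linspace(0.0, 1.0, steps + 1):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            canvas[y, x] = 1
    return canvas

figure = draw_stick_figure(POSE, BONES)
print(figure.sum())  # count of lit pixels forming the figure
```

Repeat that for every frame of the source dance, feed the resulting frames to the generator, and the model paints a different person onto the same motion.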
For now, the company's results speak for themselves. They're not as rich or detailed as a typical AAA video game scene, but NVIDIA's sample videos offer glimpses of digital cities filled with objects that, stylistically, look real. Emphasis on "stylistically." The tongue-in-cheek Gangnam Style body swap worked a little better.
Although NVIDIA has released all the underlying code, it will probably be a while before developers use these tools to build the next wave of VR experiences. That's a good thing, frankly, as the company was quick to point out the neural network's limitations: while the virtual cars moving through the simulated cityscape look surprisingly lifelike, NVIDIA says the model isn't good at rendering vehicles as they turn, because the label maps don't contain enough information. For would-be VR creators, there's also the worry that certain objects, like those troublesome cars, don't always look the same as a scene progresses. According to NVIDIA, these objects can subtly change color over time. All of them are clearly representations of real objects, but they're far from photorealistic.
These technical defects are one thing; unfortunately, it's also not hard to see how these techniques could be put to unsavory use. Just look at deepfakes: it's getting harder and harder to distinguish those artificially generated videos from reality. As NVIDIA proved with its Gangnam Style test, the neural model could be used to put real people into uncomfortable situations. Not surprisingly, Catanzaro is looking on the bright side.
"People really like virtual experiences," he told Engadget. "They're mostly used for good things, and we're focusing on the good applications." Later, though, he acknowledged that people will use tools like these to do things he doesn't approve of, calling it "the nature of technology," and pointed out that Stalin was having people doctored out of photos in the '50s, before Photoshop even existed.

It's undeniable that NVIDIA's research represents a significant advance in digital imaging, one that could, in time, help change the way we create and interact with virtual worlds. For commerce, for art, for innovation and more, that's a good thing. Still, the existence of these tools also means the line between real events and fabricated ones is getting thinner, and pretty soon we'll really have to reckon with what they're capable of.