
Nvidia AI turns sketches into photorealistic landscapes within seconds – TechCrunch



Today at Nvidia GTC 2019, the company unveiled an impressive image creator. Using generative adversarial networks (GANs), the software lets users sketch nearly photorealistic images with just a few clicks. It turns a few lines into a stunning sunset on a mountaintop. This is MS Paint for the AI age.

GauGAN, the software, is just a demonstration of what's possible with Nvidia's neural network platforms. It is designed to take a sketch a person draws and turn it into a photorealistic image in seconds. In an early demo, it seemed to work as advertised.

GauGAN has three tools: a paint bucket, a pen, and a pencil. Along the bottom of the screen sits a row of object types. Select the cloud object and draw a line with the pen, and the software produces a wisp of photorealistic clouds. These are not image stamps, though; GauGAN produces unique results for every input. Draw a circle and fill it with the paint bucket, and the software generates puffy summer clouds.
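Under the hood, this kind of sketching interface amounts to painting a semantic label map: each pixel carries a class id (sky, cloud, mountain, and so on) that a conditional generator then translates into imagery. A minimal sketch of the "paint bucket on a label map" idea, with hypothetical class ids (GauGAN's real label set and network are not described in the article):

```python
import numpy as np

# Hypothetical class ids for illustration only.
SKY, CLOUD, MOUNTAIN = 0, 1, 2

def blank_canvas(h=256, w=256, background=SKY):
    """Start from a canvas filled with one class, like a default sky."""
    return np.full((h, w), background, dtype=np.uint8)

def paint_bucket(label_map, cy, cx, radius, class_id):
    """Fill a circular region with a class id - the 'paint bucket' gesture."""
    h, w = label_map.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    label_map[mask] = class_id
    return label_map

canvas = blank_canvas()
canvas = paint_bucket(canvas, cy=64, cx=128, radius=30, class_id=CLOUD)
```

The generator network would consume `canvas` (the label map), not the finished pixels, which is why two visually different strokes that mark the same region as "cloud" can still yield coherent, distinct cloudscapes.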

The input tools let users sketch the shape of a tree and have the software generate one. Draw a straight line and it produces a bare trunk; add a bulbous shape on top and the software fills it in with leaves, making a complete tree.

GauGAN is also multimodal: if two users create the same sketch with the same settings, random numbers built into the project ensure that the software produces different results.
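That multimodality is the standard conditional-GAN trick of feeding a random noise vector alongside the label map, so the same sketch maps to many plausible images. A toy, assumption-laden sketch of the idea (the `toy_generator` below is a stand-in invented for illustration, not GauGAN's network):

```python
import numpy as np

def toy_generator(label_map, z):
    """Toy stand-in for a conditional generator: combines the label map
    with a per-class 'style' value drawn from the noise vector z.
    Illustrative only - the real network is a deep GAN."""
    styles = z[label_map % len(z)]      # pick one style value per pixel class
    return label_map.astype(float) + styles

label_map = np.zeros((4, 4), dtype=np.uint8)
label_map[:2] = 1                       # two semantic regions in the sketch

# Same sketch, two different noise draws -> two different images.
z_a = np.random.default_rng(seed=1).normal(size=8)
z_b = np.random.default_rng(seed=2).normal(size=8)
img_a = toy_generator(label_map, z_a)
img_b = toy_generator(label_map, z_b)
```

Because the randomness lives in `z`, re-running with the same seed reproduces an image exactly, while a fresh draw gives a new interpretation of the same sketch.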

To get real-time results, GauGAN has to run on a Tensor-core computing platform. Nvidia demonstrated the software on an RTX Titan GPU, which delivered results in real time: the demo's operator could draw a line and the software responded immediately. Bryan Catanzaro, vice president of applied deep learning research, said that with some modifications GauGAN can run on almost any platform, including CPUs, though rendering can then take a few seconds.

In the demo, the boundaries between objects are not perfect, and the team behind the project says they will improve; a light line appears where two objects touch. Nvidia describes the results as photorealistic, but under close inspection they are not quite there yet. Neural networks still struggle with the gap between the objects they were trained on and what they are asked to generate, and this project aims to narrow that gap.

Nvidia used one million images from Flickr to train the neural network. Most came from Flickr's Creative Commons pool, and Catanzaro said the company only uses images it has permission to use. The company says the program can synthesize hundreds of thousands of objects and their relationships to other objects in the real world. In GauGAN, change the season and the leaves disappear from the branches; put a pond in front of a tree and the tree is reflected in the water.

Nvidia is releasing a white paper on the research today. Catanzaro noted that it has already been accepted for CVPR 2019.

Catanzaro hopes the software will eventually be available on Nvidia's new AI Playground, but says the company still has some work to do to make that possible. He can see tools like this being used in video games to create more immersive environments, though he notes that Nvidia does not build that software directly.

It is easy to see how this software could be used to create misleading images for nefarious purposes. Catanzaro agrees that this is an important issue, one bigger than any single project or company. "This is very important to us because we want to make the world a better place," he said, adding that this is a question of trust rather than technology, and one that we as a society need to address.

Even in this limited demo, it's clear that software built on these capabilities will appeal to everyone from video game designers to architects to casual gamers. The company does not intend to release it commercially, but could soon offer a public trial so anyone can try the software.

