New Tech Can Create Photorealistic Scenes From Basic Doodles in Seconds

In another giant step forward for AI technologies, NVIDIA’s new software GauGAN can create lifelike photographs of scenery from basic doodles. The company, which also released tech capable of creating portraits of people who don’t actually exist, has unveiled the first look at what this AI is capable of.

At NVIDIA’s annual GPU Technology Conference, the firm unveiled what it calls a “deep learning model” that can produce photorealistic composites from a user’s sketch.

The team used generative adversarial networks (which is where the “GAN” in GauGAN’s name comes from). A generator network converts a segmentation map, the rough colored-in doodle, into an image and presents it to a discriminator network, which has learned from real images what real-world objects look like and guides the generator with pixel-by-pixel feedback.
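For readers curious what that generator/discriminator loop looks like in practice, here is a minimal sketch in PyTorch. The tiny networks, tensor shapes, and label count are illustrative placeholders, not NVIDIA’s actual GauGAN architecture; the point is simply how a generator turns a segmentation map into an image while a discriminator supplies per-pixel real-versus-fake feedback.

```python
# Minimal conditional-GAN training step (illustrative only, not GauGAN itself).
import torch
import torch.nn as nn

NUM_CLASSES = 4          # e.g. sky, water, grass, rock (hypothetical labels)
IMG_CHANNELS = 3

# Generator: turns a one-hot segmentation map into an RGB image.
generator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, IMG_CHANNELS, 3, padding=1), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks, given its segmentation map.
discriminator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES + IMG_CHANNELS, 64, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, padding=1),   # per-pixel real/fake score
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(seg_map, real_image):
    """One adversarial update. seg_map: (B, NUM_CLASSES, H, W) one-hot;
    real_image: (B, 3, H, W) scaled to [-1, 1]."""
    # Discriminator: push real images toward 1, generated images toward 0.
    fake_image = generator(seg_map)
    d_real = discriminator(torch.cat([seg_map, real_image], dim=1))
    d_fake = discriminator(torch.cat([seg_map, fake_image.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to fool the discriminator, using its per-pixel feedback.
    g_score = discriminator(torch.cat([seg_map, fake_image], dim=1))
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In a real system the networks are far deeper and trained on huge photo collections, but the back-and-forth between the two networks is the same idea.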

Writing on its blog, NVIDIA Research claims the tool could be a great asset for architects and landscape designers, noting that sketches are much easier to work with when brainstorming, and that converting those drawings into highly realistic representations of the finished project could speed up the design process.

The technology can allegedly adapt to how real-world scenery behaves. Draw a pond, and nearby elements such as trees and rocks appear as reflections in the water, because the model has learned from images containing water that water reflects its surroundings. Swap a segment label from “grass” to “snow,” and the entire image changes to reflect winter conditions.
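That “grass to snow” swap is just an edit to the label map the user paints. A rough illustration, with a made-up label-to-ID mapping, might look like this:

```python
# Hypothetical illustration of relabeling a segmentation map before rendering.
import numpy as np

LABELS = {"sky": 0, "water": 1, "grass": 2, "snow": 3}  # assumed mapping

seg_map = np.full((256, 256), LABELS["grass"], dtype=np.int64)  # a grassy field
seg_map[:80, :] = LABELS["sky"]                                  # sky at the top

# Relabel every "grass" pixel as "snow"; the doodle itself barely changes,
# but a model like GauGAN would now render the whole scene as a winter landscape.
seg_map[seg_map == LABELS["grass"]] = LABELS["snow"]
```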

What does this mean for the future of digital photography?
