
How AI is changing photography



If you're wondering how good your next phone's camera is going to be, it's worth paying attention to what its maker is saying about AI. Beyond the hype and bluster, the technology has enabled staggering advances in photography over the past couple of years, and there's no reason to think that progress will slow down.

There are plenty of gimmicks around, to be sure. But the most impressive recent advancements have come not from new sensors or lenses, but from the software and silicon that process what the sensor and lens capture.

Google Photos provided a clear demonstration of how powerful a combination of AI and photography could be when the app launched in 2015. Prior to then, the search giant had been using machine learning to categorize images in Google+ for years, but the launch of its Photos app brought consumer-facing AI features that had been unimaginable to most. Users' disorganized libraries of thousands of untagged photos were transformed into searchable databases overnight.

Suddenly, or so it seemed, Google knew what your cat looked like.


Photo by James Bareham / The Verge

Google did this by drawing on an earlier acquisition, DNNresearch, and setting up a deep neural network trained on data that had been labeled by humans. This is called supervised learning: the process involves training the network on a vast set of images that people have already labeled. Over time, the algorithm gets better and better at recognizing, say, a panda, because it has absorbed the patterns used to correctly identify pandas in the past. It learns where the black fur and the white fur tend to sit in relation to one another, and how that differs from the hide of a Holstein cow, for example.
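To make the mechanics a little more concrete, here's a minimal sketch of that supervised learning loop in PyTorch. The folder of labeled photos, the tiny network, and the training settings are all illustrative assumptions rather than Google's actual system; the point is the pattern of showing the network labeled examples and nudging its weights whenever it guesses wrong.

```python
# Minimal supervised-learning sketch (illustrative only, not Google's pipeline).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: labeled_photos/panda/*.jpg, labeled_photos/cow/*.jpg, ...
train_set = datasets.ImageFolder("labeled_photos", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(train_set.classes)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:       # human-labeled examples
        logits = model(images)          # the network's current guesses
        loss = loss_fn(logits, labels)  # how wrong those guesses are
        optimizer.zero_grad()
        loss.backward()                 # which direction to adjust each weight
        optimizer.step()
```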

It takes a lot of time and processing power to train a network like this, but once the training is done, the model can run without much trouble. The heavy-lifting work has already been completed, so once your photos are uploaded to the cloud, Google can use its model to analyze and label the whole library. About a year after Google Photos launched, Apple announced a photo search feature trained in a similar way, but one that performs its categorization on the device itself rather than in the cloud.
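In other words, the expensive part is the training; applying the finished model to a library is comparatively cheap. Here's a rough sketch of what that labeling pass could look like, using an off-the-shelf pretrained network from torchvision as a stand-in for Google's model and a hypothetical my_photos folder for the untagged library.

```python
# Labeling a photo library with an already-trained model (illustrative stand-in).
from pathlib import Path
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()   # the heavy lifting happened elsewhere
preprocess = weights.transforms()
categories = weights.meta["categories"]

library = {}
for photo in Path("my_photos").glob("*.jpg"):     # hypothetical untagged library
    with Image.open(photo) as img:
        batch = preprocess(img.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    library[photo.name] = categories[int(probs.argmax())]

# library now maps each file name to a searchable label, e.g. {"IMG_0412.jpg": "tabby"}
```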

Intelligent photo management software is one thing, but AI and machine learning are arguably having a bigger impact on how images are captured in the first place. Yes, lenses and sensors keep improving incrementally, but we're already pushing at the limits of physics when it comes to cramming optical systems into slim mobile devices. Nevertheless, it is not uncommon these days for phones to take better photos in some situations than a lot of dedicated camera gear, at least before post-processing. That's because traditional cameras can't compete on another category of hardware: the system-on-chip that combines a CPU, an image signal processor, and, increasingly, a neural processing unit (NPU).


This is the hardware being leveraged in what's known as computational photography. Not all computational photography involves AI, but AI is a major component of it.

Apple makes use of this tech to drive its dual-camera phones' portrait mode. The iPhone's image signal processor uses machine learning to recognize the subject, while a depth map built from the two cameras helps isolate it and blur the background. The ability to recognize people through machine learning wasn't new when this feature debuted in 2016, as it's what photo organization software was already doing; the trick was doing it at capture time.
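Stripped of the real-time constraints and the machine learning segmentation, the core trick is simple to sketch: use a depth map to decide what counts as background, then blend a blurred copy of the photo back in. The file names and threshold below are made up for illustration; real portrait modes feather the blur by depth rather than using a single cutoff.

```python
# Toy depth-based background blur (not Apple's pipeline, just the masking idea).
import cv2
import numpy as np

photo = cv2.imread("portrait.jpg")                          # hypothetical input photo
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)   # hypothetical depth map

blurred = cv2.GaussianBlur(photo, (31, 31), 0)              # heavily blurred copy
background = (depth > 128).astype(np.float32)               # 1 where "far", 0 where "near"
mask = cv2.GaussianBlur(background, (15, 15), 0)[..., None]  # soften the edge

# Blend: subject comes from the sharp image, background from the blurred one.
result = (photo * (1 - mask) + blurred * mask).astype(np.uint8)
cv2.imwrite("portrait_bokeh.jpg", result)
```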

Google remains the obvious leader in this field, however, with the superb results produced by the Pixel as the most compelling evidence. As Pixel camera lead Marc Levoy has told The Verge, machine learning means the system only gets better with time. Google has trained its camera on huge sets of labeled photos, much as it did for Google Photos, and that training further aids the camera with exposure.

Google's Night Sight is a stunning advertisement for the role of software in photography.

But Google's advantage has never seemed as strong as it did a couple of months ago with the launch of Night Sight. The new Pixel feature stitches long exposures together and uses a machine learning algorithm to calculate more accurate white balance and colors, with frankly astonishing results. It's a demonstration of how software is now more important than camera hardware when it comes to mobile photography.
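Night Sight's actual pipeline is far more sophisticated, but the two underlying ideas, merging a burst of frames to suppress noise and then correcting the color cast, can be sketched in a few lines. The gray-world white balance below is a classic stand-in for Google's learned model, and the burst here is just synthetic data.

```python
# Simplified burst-merge plus white-balance sketch (not Google's algorithm).
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames; noise shrinks roughly with sqrt(N)."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

def gray_world_white_balance(image):
    """Scale each channel so the average of the scene comes out neutral gray."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(image * gains, 0, 255)

# Hypothetical burst of dark, already-aligned frames (HxWx3 arrays).
burst = [np.random.randint(0, 40, (480, 640, 3), dtype=np.uint8) for _ in range(8)]
merged = merge_burst(burst)
balanced = gray_world_white_balance(merged).astype(np.uint8)
```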


That said, there is still room for hardware to make a difference, especially when it's backed by AI. Honor's new View 20 phone and parent company Huawei's Nova 4 are the first to use the Sony IMX586 image sensor. It's a larger sensor than most competitors and, at 48 megapixels, represents the highest resolution yet seen on any phone. But that still means cramming a lot of tiny pixels into a tiny space, which tends to be problematic for image quality. In my View 20 tests, however, Honor's "AI Ultra Clarity" mode excels at making the most of the resolution, descrambling the sensor's unusual color filter to unlock extra detail.
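For a sense of why all those tiny pixels are normally combined, here's a toy illustration of quad-Bayer binning, the standard readout mode for sensors like the IMX586: each 2x2 block of same-colored pixels is averaged into one larger effective pixel, trading resolution for light sensitivity. The full remosaic that a high-resolution mode such as AI Ultra Clarity performs to recover 48 megapixels of detail is considerably more involved and isn't shown here.

```python
# Toy quad-Bayer binning: 48 MP raw readout reduced to a 12 MP image.
import numpy as np

def bin_quad_bayer(raw):
    """Average each 2x2 block of the raw mosaic, halving width and height."""
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2).astype(np.float32)
    return blocks.mean(axis=(1, 3))

# Hypothetical 8000x6000 (48 MP) raw frame, binned down to 4000x3000 (12 MP).
raw = np.random.randint(0, 1024, (6000, 8000), dtype=np.uint16)
binned = bin_quad_bayer(raw)
print(binned.shape)   # (3000, 4000)
```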

NPUs will take on a larger role as computational photography advances. Huawei was the first company to announce a system-on-chip with dedicated AI hardware, the Kirin 970, although Apple's A11 Bionic ended up reaching consumers first. Qualcomm, the biggest supplier of processors for Android devices worldwide, hasn't made machine learning a major focus yet, while Google has developed its own chip, the Pixel Visual Core, to help with AI-related imaging tasks. Apple's latest A12 Bionic, meanwhile, has a neural engine that runs tasks in Core ML, Apple's machine learning framework, up to nine times faster than the A11, and for the first time it's directly linked to the image processor. Apple says this gives the camera a better understanding of the focal plane.

