If we’re talking about the new iPhone Pro, artificial intelligence is part of the conversation.
Cameras are a key feature of modern smartphones, and with the unveiling of three new iPhone models yesterday, Apple made it very clear that this year’s updates were focused primarily on photography. In addition to showing off new camera hardware (new lenses and a third lens on the Pro models), software enhancements received quite a bit of stage time.
As expected, artificial intelligence (AI) is a key player in these enhancements.
Hardware Enables AI Software
Modern AI software is enabled by modern hardware. One of the key reasons we can now do things with photography AI that weren’t possible several years ago is that, until recently, the computing hardware wasn’t powerful enough to make those operations practical on our smartphones, tablets, and computers.
Traditional CPUs aren’t well suited to AI workloads, which is why iPhones (and other high-end smartphones) also include a Neural Processing Unit (NPU) to handle the intense computation behind many of the AI features we know and love.
Apple has announced new generations of its homegrown A-series chips annually with the new iPhones and this year was no exception. The new A13 Bionic chip has multiple cores that are 20% faster than the previous generation, and various aspects of the chip require 30% less power and operate 40% more efficiently. All of this boils down to powerful computing inside the phone, which enables new photography-related software features to become possible.
Refinement, Not Revolution
For the past few years we’ve seen steady refinement in iPhone photography, and this trend continues. Some might wonder whether these are compelling reasons to upgrade to a new phone, but if you’re serious about making great images with your iPhone, I don’t think you should hesitate. Last year’s “Smart HDR” was a nice leap forward for the average iPhone photo, and this year we see additional upgrades.
Last year Google garnered quite a bit of (well earned) attention for the “Night Sight” feature on its Pixel phones. Apple is now introducing “Night Mode,” its take on the concept, which allows for “seeing in the dark” style images such as this sample provided by Apple:
Unlike Android’s Night Sight, which is a manually chosen camera mode, Night Mode on the iPhone will be an automatic feature that enables itself when the scene would benefit from it. But like Night Sight, it is made possible through advances in multi-image composition: a composite is built from many captures, piecing together the best bits of each to produce a pleasing final image.
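Apple hasn’t published how Night Mode works under the hood, but the core idea behind multi-image composition in low light is easy to illustrate. Here’s a minimal toy sketch (not Apple’s algorithm) assuming the frames are already perfectly aligned: sensor noise is random from frame to frame while the scene is not, so averaging a burst of captures lets the noise cancel out while the image stays put.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "scene": a smooth gradient standing in for a dim subject.
scene = np.tile(np.linspace(0.05, 0.25, 64), (64, 1))

def capture(scene, noise_sigma=0.05):
    """Simulate one noisy low-light exposure of the scene."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# Capture a burst of frames, as a night mode might.
frames = [capture(scene) for _ in range(9)]

# Naive composite: average the (assumed pre-aligned) frames.
# Averaging N frames shrinks random noise by roughly sqrt(N).
composite = np.mean(frames, axis=0)

single_err = np.abs(frames[0] - scene).mean()
burst_err = np.abs(composite - scene).mean()
print(f"single-frame error: {single_err:.4f}, 9-frame composite error: {burst_err:.4f}")
```

The hard part in a real phone, of course, is everything this sketch assumes away: aligning handheld frames, rejecting moving subjects, and doing it all in a fraction of a second, which is exactly where the NPU earns its keep.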
This leads nicely into something new that Apple’s Senior Marketing VP Phil Schiller called “computational photography mad science”: Deep Fusion.
Deep Fusion? What does it mean? A lot is unknown at this point; it was only a “sneak peek” at technology scheduled to be released later this fall. When introduced onstage, it was described as a very advanced form of image composition, similar to HDR, built from a set of nine source images (a mixture of long and short exposures) that bracket the moment the shutter button is pressed. These captures are then combined to create a crisp image with minimal noise.
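We don’t yet know how Deep Fusion actually merges its nine frames, but the value of mixing long and short exposures can be sketched with a toy HDR-style merge (an illustration, not Apple’s method): the long exposure gives clean shadows but blows out highlights, the short exposure keeps highlights intact, and the merge trusts each frame where it’s reliable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scene radiance with both deep shadows and bright highlights.
scene = np.concatenate([np.full(50, 0.02), np.full(50, 3.0)])

def expose(scene, exposure_time, noise_sigma=0.01):
    """Simulate a capture: scale by exposure time, add read noise,
    clip at the sensor's saturation level (1.0)."""
    raw = scene * exposure_time + rng.normal(0.0, noise_sigma, scene.shape)
    return np.clip(raw, 0.0, 1.0)

long_exp, short_exp = 1.0, 0.125
long_frame = expose(scene, long_exp)    # clean shadows, blown highlights
short_frame = expose(scene, short_exp)  # intact highlights, noisier shadows

# Merge back into linear radiance: trust the long frame except where it clipped,
# and fall back to the (rescaled) short frame there.
clipped = long_frame >= 0.99
merged = np.where(clipped, short_frame / short_exp, long_frame / long_exp)
```

A real pipeline would blend many frames per pixel with smooth weights rather than a hard cutoff, but the principle is the same: bracketed exposures let the final image keep detail at both ends of the tonal range.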
You’ll see that the theme with many of these new features is not simply recording the image that comes from the iPhone’s image sensor, but significant in-camera image enhancement, performed so quickly that you won’t really know it’s happening, thanks to the advanced hardware we now find in these devices. iPhone Pro artificial intelligence won’t always be in your face, but it’ll always be working to refine your face in those selfies.
There’s quite a bit of information about smartphone AI technology in my upcoming book on AI & photography; if this is a topic of interest, you ought to hit that link so you can stay in the loop as it reaches publication.