Let’s talk terminology.
I talk a lot about artificial intelligence and photography (such as in my book), and there are a bunch of terms we hear all the time… but do we even know what they really mean? That’s a great starting point to explore. What is computational photography? Check out the video I’ve published here, where I explore a bunch of common terms and what they mean to you as a modern photographer:
Speaking of AI, here’s the AI-generated transcript:
Artificial intelligence? Computational photography? What does it even mean?
I’m Aaron from techphotoguy.com, and as somebody who’s been deep in the world of artificial intelligence and photography for the last few years (heck, I even wrote a book on it), let’s talk a little bit about what these terms even mean, as a basic starting point for photographers who want to keep up with current technology and understand where we’re headed.
So let’s start with the broadest term: artificial intelligence. What is that? Well, artificial intelligence, at a high level, is the concept of a computer or a machine making decisions that we once thought were only possible for humans. With that definition you might realize that what counts as AI is going to change over time. As we look at it, there are really three broad fields of artificial intelligence, two of which don’t really matter for us right now as photographers (I’ll gloss over those very quickly).
The first is artificial super intelligence (ASI): the idea of a computer that could be even smarter than the world’s brightest minds. This is a computer that is smarter than any human, and it’s the setting for a lot of sci-fi novels where computers take over the world. The second type is artificial general intelligence (AGI): a computer that is as smart as a human and can make similar decisions. This would be a computer or a robot that could go to a college class and pass that course without any prior knowledge, make you coffee in an unfamiliar environment, or pass the Turing test, for example. But let’s talk about artificial intelligence as it comes to photography today: our cameras, our software, how we work. That’s the field of artificial narrow intelligence (ANI), and the idea here is a computer, a system, an algorithm that can be as good as a human (maybe smarter) in a narrow field, for a specific purpose. Photography is not the only place we see this. Self-driving cars are probably another great example we’re all familiar with; in that case the computer needs to be as smart as or better than a human driver. When it comes to photography, we can look at artificial intelligence helping us make decisions in the camera as well as or better than a human can. So that’s the field of artificial intelligence we’re looking at: artificial narrow intelligence, for a specific purpose.
Well, how does the computer get smart? How does the computer learn? This is where the term machine learning comes into play. Machine learning is how AI learns. It’s where a computer with algorithms can develop the skill to make decisions in situations it hasn’t explicitly been trained on. Instead of having a computer programmer write the answers to every possible thing the computer would ever need to know, the software can learn on its own based on feedback it gets from decisions it’s made previously.

So how does a machine learn? It learns through training data. If we look at a photography application, maybe we’re looking at face detection or object detection: we’re trying to understand, is this a photo of a cat or a dog? Is this a photo of an apple or an orange? The way these algorithms work is by being fed large quantities of training data: thousands, tens of thousands, or millions of photographs that help them refine the decisions they can make about those images. One thing about training data that’s really important when we talk about artificial intelligence: you may have heard the phrase “garbage in, garbage out.” It means the results you get from a computer system are only as good as the quality of the data that went into it. One area where artificial intelligence in the photographic world has had some issues is that, by and large, most facial-detection algorithms have been trained primarily on Caucasian data sets, so they’re not as effective for people with darker skin. This is an area where the algorithms aren’t as good for certain populations, and things need to catch up, because if we’re going to apply AI for various purposes, we need to make sure we’re doing that equitably.
That was a little diversion, but it’s a good example of how important the training data is. The more training data an algorithm has, the more accurate it’s going to be in the AI-based decisions it makes, whether that’s facial detection, autofocus subject tracking, finding eyes in a picture for autofocus, or things like that. The more data it has, the better decisions it can make.
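If you’re curious what “learning from labeled examples” looks like in practice, here’s a deliberately tiny sketch in Python. The feature names and numbers are made up for illustration (real object detectors learn from pixels, not two hand-picked features), but the principle is the same: the program contains no explicit “cat rule,” it just compares a new input to the labeled examples it was given.

```python
import math

# Hypothetical training data: (features, label) pairs a human has labeled.
# The two made-up features are (ear_pointiness, snout_length), each 0..1.
training_data = [
    ((0.9, 0.2), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.3, 0.8), "dog"),
]

def classify(features):
    """Label a new input with the label of its closest training example."""
    def distance(example):
        example_features, _label = example
        return math.dist(features, example_features)

    _features, label = min(training_data, key=distance)
    return label

print(classify((0.85, 0.25)))  # near the cat examples, so: cat
```

Notice that adding more (and more varied) labeled examples to `training_data` is the only way this toy gets better, which is exactly the “more data, better decisions” point above.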
So how does it make those decisions? This is where we get to the term neural network. A neural network in the analog world, like in the human brain, is a case where massive quantities of neurons work together, each making little decisions one at a time, and all those decisions are synchronized to come to an answer. That’s an analog, wetware neural network. We also hear about neural networks in the artificial intelligence world, and there the term refers to the computational power, the processing power, that makes AI decisions in your computer, your smartphone, or your camera. When it comes to that neural network in your computer, we’ve probably all heard of the CPU, the central processing unit that’s been in our computers for decades. Well, it turns out that a traditional CPU isn’t really great at artificial intelligence processing: it uses a lot of energy and isn’t very efficient at this kind of work. This is why companies like Apple have started putting NPUs (neural processing units) into their iPhones. An NPU is a chip specifically designed to handle the machine learning and artificial intelligence computations on your smartphone more efficiently, using less battery power than a traditional processor could.
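To make the “lots of little decisions combined into one answer” idea concrete, here’s a toy neural network in plain Python. It’s a sketch only: the weights are hand-picked rather than learned from data, and real networks have millions of units, not three.

```python
def neuron(inputs, weights, bias):
    # One unit: a weighted sum of its inputs, then a simple on/off threshold.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

def tiny_network(x):
    # Two hidden units feed one output unit. Together they answer
    # "is exactly one input on?", a decision no single unit here makes alone.
    h1 = neuron(x, [1, 1], -0.5)   # fires if either input is on
    h2 = neuron(x, [1, 1], -1.5)   # fires only if both inputs are on
    return neuron([h1, h2], [1, -1], -0.5)  # "either, but not both"

print(tiny_network([1, 0]))  # one input on: fires
print(tiny_network([1, 1]))  # both on: does not fire
```

In a real system, machine learning is what sets those weights, and hardware like an NPU is what runs billions of these little weighted sums quickly and efficiently.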
Let’s bring it all together and wrap it up. We talked about AI and the types of AI. We talked about how AI learns with machine learning. We talked about neural networks and how the hardware makes those decisions. Where it all comes together is in the term computational photography. Computational photography simply means bringing all of this together to the point where the computer in our smartphone or camera is making those decisions to help us capture an image, using that computer brain to compensate for, or make up for, a lack of analog information at times, or to supplement the analog information we have. This is how your smartphone, with a tiny image sensor and a tiny lens, can make images that look as good as ones that came from bigger cameras a few years ago. Even though the physics aren’t there to support that level of image capture, the software, the AI, the algorithms are good enough to make it look great. So, computational photography: don’t get weirded out by it. It simply means computer-assisted photography. It’s what happens when we use both our brains and the brains of our camera or smartphone to make images, and it really is the future of photography.
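Here’s one concrete example of what “making up for a lack of analog information” can mean: many smartphones capture a burst of frames and merge them to fight the noise of a tiny sensor. The one-pixel Python toy below averages simulated noisy readings; the noise level and frame count are made up for illustration, and real pipelines also align and weight the frames before merging.

```python
import random

random.seed(42)
true_brightness = 100.0  # the "real" scene value our toy sensor is measuring

def capture_frame():
    # Each frame is the true scene plus random sensor noise.
    return true_brightness + random.gauss(0, 10)

one_shot = capture_frame()                                # a single noisy frame
stacked = sum(capture_frame() for _ in range(16)) / 16    # average of 16 frames

# Averaging N frames shrinks the random noise by roughly the square root of N,
# so the stacked estimate is usually much closer to the true value.
print(abs(one_shot - true_brightness), abs(stacked - true_brightness))
```

That light-for-time trade, done in software, is a big part of how a phone with physics working against it still produces a clean image.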
If you’d like to learn more about this, as I mentioned, I’ve got my book; I’ll drop a link in the description down below so you can check that out. Otherwise, head over to techphotoguy.com, where you can subscribe to keep up with what’s going on. And as always, on YouTube you can subscribe right down below, hit that like button, and turn on notifications, and I’ll be back with a new and interesting video here again soon. Take care.