It’s a problem, but not fatal.
When generative image AI tools entered the zeitgeist in 2022, there were three broad reactions. The first was widespread awe at the compelling work produced by tools such as Midjourney, Stable Diffusion, and DALL-E. The second was fear that these tools could replace humans working in creative fields such as photography, illustration, and animation. The third came from visual artists, many of whom were outraged when it became clear that the machine learning algorithms were trained on images from various sources that were never explicitly licensed for that purpose.
First, let me be clear: I completely agree that there are problematic copyright issues with using unlicensed work to train algorithms. The use doesn’t fit neatly within the existing tests for Fair Use, but it also doesn’t clearly appear to be infringement. This is a new use for visual art, and our laws don’t explicitly address it. As a photographer myself, I’m not fully comfortable with my work being used to train algorithms whose output may compete with my own. But my uneasiness won’t make the technology world stand still.
The copyright outrage is a temporary distraction in the story of generative AI imagery. Ten years from now, when someone writes about this moment, they’ll note that for the first year or two after generative AI tools hit the market, many folks were upset about the copyright issues. But those issues will be non-events in the bigger picture (pun intended). Generative AI is here to stay.
While some of the models I mentioned earlier have questionably sourced training images, others do not. Adobe Firefly is probably the first of these, with Adobe having trained its algorithms on images that are in the public domain, freely licensed, or part of Adobe Stock (where Adobe’s license includes the right to use them for this purpose). We already see Firefly integrated into Photoshop and Illustrator in the Creative Cloud beta software, and I expect it’ll be part of the non-beta public release in October at Adobe MAX. Meta is also working on its own generative AI tool, and reports indicate that it’s pretty good and that it was trained entirely on licensed images.
When multiple AI platforms exist without copyright concerns, the fact that a couple of platforms have those concerns becomes a platform-specific issue. It doesn’t prevent the other platforms from moving forward, and it doesn’t prevent the technology from gaining a strong presence in the world. It sucks for the photographers whose work was (arguably) infringed, but regardless of those missteps by some companies, the technology is here, and we now see that it’s viable when properly sourced images are used.
If most of your views on AI are still focused on the copyright problems with some platforms’ training data, you’re going to be stuck there while your colleagues and clients move forward.