Generative AI for Digital Artists: Miracle Cure or Poisoned Chalice?

Generative AI has come a long way in a short time, but what does it mean for someone trying to make a living as a digital artist?

AI image generation has become ubiquitous. It populates social media posts and illustrates editorials, both in print and online.

The rise of AI image generation has been fast. Terrifyingly fast. Seemingly appearing out of nowhere, in an astonishingly short space of time the technology has progressed from the absurdly implausible to the utterly convincing.

Figure 1 shows five results from the same image prompt using successive versions of the AI image tool Midjourney. The speed of progress is shocking: the first image was generated on March 14, 2022; the last, using Midjourney version 5, appeared on March 16, 2023. In almost exactly a year, AI image creation had gone from a useless assembly of shapes to a result that’s indistinguishable from traditional photography.

Figure 1. Midjourney-created images, from version 1 to version 5, just one year later.

It’s not just Midjourney. Dozens of online resources, from ChatGPT to Freepik, allow anyone to generate any image they can imagine, on demand. The skill of the photomontage artist has been replaced by the skill of the prompt writer. But does this mean there’s now no room for traditional photomontage?
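For the curious, that craft of prompt writing can be sketched in a few lines of code. This is purely illustrative (the function and field names are my own invention, not any real tool’s API), but it shows how a prompt is built up from subject, style, lighting, and composition rather than from pixels:

```python
# Illustrative sketch of structured prompt-building. The craft lies in
# specifying subject, style, lighting, and composition, not arranging pixels.

def build_prompt(subject, style=None, lighting=None, composition=None):
    """Assemble a comma-separated image prompt from optional components."""
    parts = [subject]
    for label, value in (("style", style),
                         ("lighting", lighting),
                         ("composition", composition)):
        if value:
            parts.append(f"{label}: {value}")
    return ", ".join(parts)

prompt = build_prompt(
    "a falconer holding a hawk on a gloved hand",
    style="editorial photograph, 85mm lens",
    lighting="soft overcast daylight",
    composition="shallow depth of field, crowd blurred behind",
)
print(prompt)
```

The point is that the artistry has moved upstream: the decisions an art editor once briefed to a photomontage artist are now encoded in those few descriptive fields.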

A Lifetime of Photomontage

I’ve been a photomontage artist since 1989, when I was commissioned to create an image of Queen Elizabeth II for the now-defunct satirical magazine Punch (Figure 2).

Figure 2. My first commissioned photomontage image, from 1989.

It was created in Image Studio, in the year before Photoshop first appeared on the Mac platform. The original photograph was digitized with a video camera (there were no desktop scanners then). Image Studio had no layers, and only basic image transformation tools. This commission led to a regular photomontage cartoon strip in Punch, followed by a weekly strip in The Guardian newspaper. Other newspaper and magazine work quickly followed.

In 1993 my second son was born, in the same week that I attended a press briefing at which Photoshop version 3 introduced the concept of layers. At the time, I wondered which would have a greater impact on my life. The jury’s still out.

If I were a photomontage artist today, struggling to get a foothold in the industry, I’d be despondent. What’s the point of developing my skill set, of painstakingly choosing, creating and arranging image elements, when an art editor can simply type a prompt and get multiple variations in seconds? Is there still a place for traditional workmanship in an increasingly AI-generated field? I would say yes.

AI isn’t always best

Images generated by AI can be perfect. Realistic human figures, cars and hamburgers, dogs and spaceships: it seems there’s nothing that AI can’t serve up in an instant.

But the main thing AI images lack is soul. They can be hugely complex, with far more image elements than even the most assiduous digital artist would have the time or patience to include. And yet they so often miss out the human element, the sense that an artist with a guiding hand has placed all the components to produce the best emotional response in the viewer.

The purpose of images in editorial is to tell a story. Usually, it’s to entice the reader to read the article to which they’re attached. Editorial images aren’t works of art, they’re advertisements for the copy which follows. The skill of a digital artist lies in interpreting a story in a visually appealing way, in conveying the essence of a feature so that it can be taken in at a glance. AI images, for all their gloss and perfection, frequently fail to engage.

AI works from reference images

Write a prompt for an AI image engine and it will draw on the vast body of images it was trained on to match the words you use. But if you stray from the real to the imaginary, these tools can struggle to find a match.

Some years ago I was commissioned to create an image for Reader’s Digest magazine to illustrate a story about placebos. The point of placebos, of course, is that they contain no medication. So how do you illustrate something that isn’t there? The solution I came up with was a medicine bottle pouring out a handful of pills made of glass: pills that clearly contained no content (Figure 3).

Figure 3. How to illustrate placebos? For Reader’s Digest magazine.

It was a difficult image to make: I modeled the pills in Illustrator and then imported them into Photoshop (in the days when Photoshop still incorporated 3D tools), so I could rotate them to different angles. I then manually added refracted views through the pills, as well as shine and highlights.

Out of interest, I wrote a prompt for both ChatGPT and Photoshop’s Firefly AI generation tool, to see how close they could come to my original. The best Photoshop could manage was reasonably close to my version—except that the pills aren’t made of glass, the hand holding the bottle isn’t anatomically possible, and the open hand looks more like an illustration than a photograph (Figure 4).

Figure 4. Placebos, AI-generated in Photoshop.

ChatGPT fared better, with a truly realistic hand. The pills are beautiful, with perfect shine and shadows; even the light source is projected through them onto the hand (Figure 5).

Figure 5. Placebos, AI-generated by ChatGPT.

And yet… these aren’t pills. They’re glass spheres. Why? Because nowhere in the training data that fuels AI’s imagination are there images of glass pills. They don’t exist in the real world, and AI doesn’t understand how to create them.

AI image creation may be quick, but it isn’t perfect. It lacks the visual imagination that marks a competent digital artist.

Photoshop Artists Need AI

Photoshop introduced its AI tools gradually, with image recognition capabilities that began modestly but have since matured into a technology of genuine benefit. And nowhere is this more apparent than in object selection.

One of the most laborious, time-consuming, and fiddly jobs the Photoshop artist used to have to master was cutting out images from their background. Whether you used the Pen Tool, Quick Mask or the Lasso Tool, tracing the outline of complex objects was a large part of the photomontage process. Now, that part of the job has become vastly easier. Photoshop is capable of understanding the components of an image with consummate skill, even when at first glance the task seems impossible.
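For readers who like to peek under the hood, the old Magic Wand approach boils down to a color-distance threshold. Here is a simplified sketch (my own, using plain per-pixel distance with no flood fill and no AI, and made-up values standing in for an image) of why it works against a clean background and fails against a busy one:

```python
# Rough sketch of a Magic Wand-style selection: pixels within a tolerance
# of a sampled background color are masked out. Simple per-pixel distance
# only; real tools add contiguity and anti-aliasing.

def color_select(image, sample, tolerance):
    """Return a mask: True where a pixel is close to the sampled color."""
    def distance(a, b):
        return sum((ca - cb) ** 2 for ca, cb in zip(a, b)) ** 0.5
    return [[distance(px, sample) <= tolerance for px in row] for row in image]

sky = (60, 130, 220)   # clear blue background
bird = (90, 70, 40)    # brown foreground subject
image = [[sky, sky, bird],
         [sky, bird, bird]]
mask = color_select(image, sample=sky, tolerance=30)
print(mask)   # True = background, ready to delete
```

Against a uniform sky, a single threshold cleanly splits foreground from background. Against a crowd of onlookers, no threshold exists, and that is exactly the case where AI-based selection earns its keep.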

A complex background

Here’s an image of a falconer at a French theme park (Figure 6).

Figure 6. A tricky cutout…

Rather than shooting him against a clear blue sky, which would have been a simple enough job for the Magic Wand Tool, I photographed him against a background of hundreds of onlookers. Even a year ago, this would have defeated Photoshop. And yet now, it’s able to separate the foreground from the background in seconds (Figure 7).

Figure 7. …achieved in seconds in Photoshop.

It makes sense; Adobe has trained its Firefly engine on thousands of pictures of people and animals, so it knows what it’s looking at. It understands clothing, and hats, and gloves, and even correctly interprets the bird’s talon that isn’t quite perched on the man’s hand.

But what happens when you give Photoshop an image of something it’s never seen before? Firefly can never have come across this steampunk bicycle (Figure 8).

Figure 8. An impossible cutout by traditional means.

And yet, remarkably—astonishingly, even—Photoshop has been able to discern what’s bicycle and what isn’t. A cutout such as this, if done manually, would have taken even the most adept Photoshop artist hours to complete. And yet Photoshop gives us a near-perfect result in seconds.

The Uncrop Tool

One of the most valuable AI features in Photoshop is its ability to “uncrop” an image. Here, I wanted to use an image of a pizza, and found this one in a royalty-free image library (Figure 9). It had all the juiciness I was looking for, but it was only half a pizza.

Figure 9. Half a pizza…

The Crop Tool, using its Generative Expand option, was able to reveal the entire pizza, almost as if it had been there in the original photograph (Figure 10).

Figure 10. …cooked up by Photoshop’s Firefly engine.

The cheese, crust and toppings are interpreted, rather than simply copied from elsewhere in the image. The lighting is perfect, and the pizza casts a shadow on the generated objects to its right. Creating this image using traditional Photoshop techniques (copy and paste, clone, patch) would never have produced such a convincing result.
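To see why, consider what the traditional route amounts to. Here is a toy sketch (my own simplification, not Photoshop’s algorithm) of extending a canvas by mirroring edge pixels: it can only repeat content that already exists, which is exactly why a cloned or patched extension never looks invented:

```python
# Naive "uncrop" by mirroring edge content, the traditional copy/clone
# analogue. It extends the canvas but only reflects what is already
# there; it cannot conjure the missing half of a pizza.

def mirror_expand_right(rows, extra):
    """Extend each row rightward by reflecting its last `extra` pixels."""
    return [row + row[-1:-extra - 1:-1] for row in rows]

rows = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
print(mirror_expand_right(rows, 2))   # reflected, not invented, content
```

Generative Expand does something categorically different: it synthesizes plausible new pixels from what the model has learned, rather than recycling the ones already in frame.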

Remove distractions

Photoshop’s ability to find and remove unwanted image elements is a real time-saver. But it’s not perfect (yet). I photographed this temple in Egypt, but didn’t want all those people in the way (Figure 11).

Figure 11. Can we get rid of the tourists?

It was easy enough to use the Quick Selection Tool to find and remove all the people (Figure 12).

Figure 12. Yes, but their shadows remain.

But the shadows? That’s another matter. Here, not only did Photoshop fail to remove them on the first pass, Firefly proved unable to remove the shadows even when they were selected individually. Traditional Photoshop skills would still be required to complete this image.
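A toy example suggests why shadows are so stubborn. Suppose we fill selected pixels with the average of their unselected neighbors, a crude stand-in for traditional patching (this is my own simplification, not Firefly’s method). A single stray pixel vanishes neatly, but a shadow covering a large soft-edged area leaves nothing clean nearby to average from:

```python
# Toy patch-fill: masked cells are replaced by the mean of their unmasked
# orthogonal neighbors. Works for small blemishes; smears on large regions.

def neighbor_fill(grid, mask):
    """Fill masked cells with the average of unmasked 4-neighbors."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                vals = [grid[ny][nx]
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]]
                out[y][x] = sum(vals) / len(vals) if vals else grid[y][x]
    return out

grid = [[10, 10, 10],
        [10,  0, 10],
        [10, 10, 10]]
mask = [[False, False, False],
        [False, True,  False],
        [False, False, False]]
print(neighbor_fill(grid, mask))   # the dark "shadow" pixel is averaged away
```

Scale the masked region up to the size of a tourist’s shadow across textured stone, and the neighbors themselves are part of the problem, which is roughly where both the naive approach and, for now, Firefly run aground.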

Generating people

I was commissioned recently to create a cover for a novel in which a key moment is the discovery of a body in the bottom of a swimming pool in rural France. I made the (probably rash) decision to view the scene from the bottom of the pool, looking up at the pool’s owner standing next to a French policeman (Figure 13).

Figure 13. Bottom-up views of people, generated in Photoshop.

None of my extensive image library searches were able to produce anything approaching the views of the people I wanted. But Photoshop, with some trial and error, was able to create the ground-up views of the two human elements.

This isn’t fully automatic image creation. This is using the technology to make the components I needed to complete the montage. The refracting surface and floating leaves helped to conceal any AI awkwardness.

The Future for Digital Artists

The prospects for photomontage artists may not be quite as bleak as they first appeared. AI is a useful tool that has taken a lot of the grunt work out of Photoshop: with AI doing the heavy lifting, artists have more time to concentrate on content and composition.

There are bound to be online publications for whom a quick-and-dirty AI solution is acceptable and more cost-effective than paying an artist to come up with roughs before progressing to the finished artwork. But for reputable magazines and newspapers, books and album covers, traditional artists can still offer something that eludes AI image generators. Call it soul, humanity, or empathy; it boils down to the fact that real, flesh-and-blood humans still have a better understanding of how to communicate with other humans.

More Resources To Master Design + AI

The Design + AI Summit is the essential how-to event to help designers leverage the power of AI to create quality work more efficiently.

With 12 great sessions from internationally renowned experts, you will take away practical techniques to help you master a wide range of GenAI tools.




This article was last modified on October 20, 2025
