Opinion: A Real Look at Artificial Intelligence
What past technological upheavals reveal about the future of AI

This article appears in Issue 21 of CreativePro Magazine.
Before you break out the champagne to laud our new synthetic overlords (or begin stocking your survival bunker), take a step back for just a moment. Take a slow, deep breath and exhale.
Confirmation bias—our tendency to see only what we expect to see because of our preconceptions—accompanies every Next Big Thing. This is worth paying attention to because no one is immune. It skews our perception of what is real, and that messes with how we will plan for and react to AI, professionally and personally.
So, before confirmation bias kicks you into over-the-top enthusiasm or out-the-bottom dread, let’s consider what history can teach us about the arc of technological revolutions.
The Rise of the Machine
The Industrial Revolution changed the world by amplifying muscle power. Machines increased productivity by several orders of magnitude. Steam power enabled revolutionary changes in agriculture and manufacturing. Food production expanded exponentially, along with the means and speed of its distribution.
This new technology had both rabid opponents and uncritical believers, but there is an interesting parallel to the current commotion surrounding AI: the 19th-century expectation of intelligent mechanisms.
The idea seemed so plausible in the 1800s that a hoax called the Mechanical Turk, a supposed automaton that could play chess, fooled plenty of smart and educated people. Really, they fooled themselves. That was confirmation bias at work.
It is not coincidental that the Industrial Revolution also gave rise to a new model of life: the idea that a human being is nothing but a complicated machine and human thought a mere byproduct. Naturally, then, if a person were just a machine, then a machine that is just like a person would be plausible.
For more than a century, pundits enthused or forewarned about mankind’s coming domination by machines, but familiarity breeds reality: No one today worries that a giant steam-powered machine (or a villain in possession of one) will take over the world, except in comic books. We no longer think of machines as an existential menace, nor as a panacea. They are tools, used by humans for good or ill.
Crunching the Numbers
If the Industrial Revolution amplified muscle power, the Computer Revolution amplified thinking power.
When electronic computers beeped, blinked, and whirred their way into popular consciousness in the 1950s and ’60s, they brought with them a new round of optimistic and apocalyptic predictions.
Mid-century magazines like Popular Science, newspapers, world’s fairs, Disney’s EPCOT Center, and the whole spectrum of what we now call sci-fi dusted off the old 19th-century visions of “humanity under siege” and of “humanity transcendent” and gave them a new coat of paint. It was a pundit’s paradise, in which reams of expert opinion added to the sum of human… opinion.
In speculative fiction, computers were heroes (Mycroft in “The Moon Is a Harsh Mistress”) or villains (HAL 9000 in 2001: A Space Odyssey). Star Trek, uniquely, featured a computer that was a tool rather than a plot character; in that respect, the show was ahead of its time.
To people on both sides of the fence, the complete takeover of society by computers wasn’t in doubt; only its desirability and its arrival date were debated.
The media, as they do, sensationalized any story related to computers, particularly if it was bad news: Computers would cause widespread unemployment and social chaos; they were medically harmful; they caused psychological damage. All these opinions made headlines at some point in the late 20th century. Some readers will remember the Y2K apocalypse-that-wasn’t.
We also are not living in a utopia, in case anyone hasn’t noticed.
Computers, like mechanical devices before them, became part of the furniture of life. They don’t run our lives the way the alarmists feared. We use them: A modern automobile is a computer network on wheels; a refrigerator keeps track of its own contents; LEGO makes smart bricks; the smartphone is our personal phone book, encyclopedia, and souvenir snapshot camera.
Professional designers, photographers, film editors, and sound engineers rely on computers to make a living, but in every walk of life the vector is “people running computers.” It isn’t “computers running people.”
Smaller Panics
Photoshop, some might remember, was going to destroy photography as a profession. Desktop publishing on personal computers would turn shopkeepers into layout artists and make designers obsolete. Micro-stock spelled doom for commercial stock photo houses and would ruin photographers’ livelihoods.
None of those things came to pass.
I’ve not seen dire warnings (outside of popular fiction) that virtual reality will make real life redundant, but I’m sure it’s just a matter of time.
The AI Revolution
Generative AI is the current cause célèbre. As with the Industrial and Computer Revolutions, arguments for and against are rampant. But neither side questions the arrival of the Singularity, only its birthday and what it will look like.
The definition of artificial intelligence has shifted since the field took root at Stanford, MIT, and Carnegie Mellon in the 1960s. Around the turn of the century, an AI was an expert system: software trained with human expertise to diagnose disease, play Jeopardy!, or calculate in-camera the ideal exposure for a scene. More recently, a deep learning system played the game of Go against itself millions of times and became good enough to beat the world champion.
There are many other types of AI. The ones in the news today are generative image models (a category that includes Generative Adversarial Networks, or GANs, and the newer diffusion-based systems) and large language models. The two are often conflated but are not the same thing.
The people who write for the media want punchy headlines to promote clicks, so on the news sites it’s all just “AI” without differentiation. That’s why AI seems to have a murky, mushy definition.
It’s hard to have a sensible discussion when you haven’t even agreed on a definition of what you’re talking about.
The Good, the Bad, and the Unevaluated
A great deal of good has already come from AI research:
- Without AI-driven modeling of protein structures and viral behavior, the design of antivirals and vaccine candidates against COVID-19 would have been far slower.
- Some software engineers report a three- to tenfold increase in productivity when using ChatGPT to write routine code, freeing them to concentrate on what is important, original, and creative.
- Promising research into the complex language of sperm whales is already underway using AI and large audio datasets.
AI is an extension of why we built computers in the first place: to amplify our thinking power. Keep this concept in mind. An AI is to thought what accounting software is to a business: You can operate without it, but more slowly.
Through machine learning, an AI program can find patterns and correlations in an ocean of data too large for one person or a large team to sift through. This is incredibly useful, but, amid the wonder and surprise, people often miss the fact that a human must understand and evaluate these patterns and correlations and decide whether to use them. Neither the AI itself, nor the hardware it runs on, can do that.
Yet here again, confirmation bias kicks in. Despite the obvious limitations and even “hallucinations” of the new AI systems (while I was researching this article, GPT-4 confidently referred to “feudal kings of 19th century Europe subjugating their vassals”), any output that looks like real thought is taken as proof of understanding, or awareness, or even consciousness.
It isn’t. It is emergent behavior: complex macro-phenomena that result from simple relationships among a large number of objects (symbols, in this case). Emergent behavior is a fascinating field of study in its own right but isn’t proof of awareness.
“I Don’t Understand”
Image generators such as Stable Diffusion, Firefly, DALL-E, and Midjourney struggle with some types of images because they have no understanding of what an animal is or a person is. They simply correlate symbols from among huge networks of objects and parts. So, we get impossible variations on the theme of humans, “text” that isn’t text, and bizarre chimeras instead of the three separate animals we asked for in the prompt.
In the creative professions, AI is new and almost entirely unevaluated. We’re still oohing and aahing over DALL-E, ChatGPT, Firefly, and Photoshop’s new (at this writing) Generative Fill. The full utility of these systems as creative tools will emerge over time, although Generative Fill already has more adoring fans than your average rock band.
It will take a while for the excitement to settle down and some of the implications to sink in. Panic over copyright infringement and backdoor plagiarism is still a thing. There are publishers who have had to pause submissions while they work out how to reliably filter out the flood of AI-generated stories. On the other hand, there are writers, including me, who are delighted with the research potential of ChatGPT-powered search engines.
The good news is that Adobe is taking the high road when it comes to generative AI. The entire dataset behind Firefly and Generative Fill is properly licensed, and the company is working out how to ensure that artists whose work contributes to a generated output are properly compensated.
Can an AI Be Creative?
In 1843, Charles Babbage’s collaborator and muse, Ada Lovelace, wrote of his proposed Analytical Engine: “The engine might compose elaborate and scientific pieces of music of any degree of complexity or extent, and embodying any number of parts, with all the niceties and delicacies of expression, the requisite pauses, and varying, and contrasted strains, that adapt themselves to the theme or subject matter.” She was far ahead of her time, but it’s worth noting that the best music AI is nowhere close to achieving that today.
Creativity (usually used as a synonym for originality) is another arena in which confirmation bias comes into play. When an AI comes up with a brand-new correlation, a generated image, a phrase no one ever thought of, or a brilliantly “creative” Go move, confirmation bias says the AI has understanding and insight, perhaps even self-awareness.
This is “intelligent design” thinking applied to computer output. It ignores the fact that the same AI produces nonsensical, fictional, or even entirely hallucinatory results with equal confidence. It doesn’t consider that none of these results require conceptual understanding, only mechanical correlation. The AI engine behind ChatGPT is far closer to a very large, sophisticated version of your phone’s predictive text than it is to a human intellect.
Throw any large set of different items into a hopper, shake them up, and pour some out. New, never-before-seen combinations are inevitable.
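The hopper metaphor can be made concrete with a toy model. The sketch below is a tiny bigram (Markov-chain) text generator: it learns nothing but which word has been seen to follow which, yet it emits novel-looking word sequences. The corpus and constants here are invented for illustration; this is a drastically simplified stand-in for the kind of predictive machinery mentioned above, not a description of any real system.

```python
import random
from collections import defaultdict

# A toy bigram model: record which word follows which, nothing more.
# No grammar, no meaning -- pure observed correlation between adjacent words.
corpus = ("the machine amplifies thought the machine amplifies muscle "
          "power the tool serves the person the person runs the tool").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word = "the"
out = [word]
for _ in range(8):
    # Pick any successor ever observed after the current word.
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))
```

Every word the model emits was in its training data, yet the combinations can be new. Scale the vocabulary up by many orders of magnitude and novel-looking output becomes a statistical certainty, with no understanding required.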
This is hard to visualize because we don’t do well with concepts of very large numbers. Consider that a simple 3×3×3 Rubik’s Cube has more than 43 quintillion possible combinations. If a single person scrambled a Rubik’s Cube into a new, never-repeated pattern every second, our sun would burn out long before all the possibilities had been explored.
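A quick back-of-the-envelope check of the scale involved (the Sun’s remaining lifetime is taken as a rough five billion years):

```python
# ~4.33e19: the number of reachable states of a standard 3x3x3 Rubik's Cube.
CUBE_STATES = 43_252_003_274_489_856_000

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds
SUN_REMAINING_YEARS = 5e9               # rough astronomical estimate

# One solver, one new pattern per second:
years_for_one_person = CUBE_STATES / SECONDS_PER_YEAR
print(f"One pattern per second: ~{years_for_one_person:.2e} years")  # ~1.37e12 years

print(years_for_one_person > SUN_REMAINING_YEARS)  # True: the Sun dies first
```

About 1.4 trillion years for a single solver, several hundred times longer than the Sun has left.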
What is also missed is the profound effect of emergent behavior, which is often unpredictable and unexpectedly sophisticated. We marvel at the synchronized, intelligent-looking movements of a huge flock of birds or shoal of fish, yet these can be perfectly replicated in a computer, using particle systems with very simple rules about how one particle interacts with just its immediate neighbors.
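Flocking is the classic demonstration: Craig Reynolds’s 1987 “boids” showed that three purely local rules (cohesion, alignment, separation) produce convincing flock behavior. The sketch below is a minimal version; the weights and radius are illustrative choices, not canonical values.

```python
import math
import random

def step(positions, velocities, radius=5.0, max_speed=1.0):
    """Advance each boid one tick using only its neighbors within `radius`:
    cohesion (drift toward them), alignment (match their heading),
    and separation (avoid crowding). No boid sees the whole flock."""
    new_vel = []
    for i, (px, py) in enumerate(positions):
        cx = cy = ax = ay = sx = sy = 0.0
        n = 0
        for j, (qx, qy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            if math.hypot(dx, dy) < radius:
                n += 1
                cx += dx; cy += dy                              # cohesion: toward neighbor
                ax += velocities[j][0]; ay += velocities[j][1]  # alignment: neighbor heading
                sx -= dx; sy -= dy                              # separation: away from neighbor
        vx, vy = velocities[i]
        if n:
            vx += 0.01 * cx / n + 0.05 * ax / n + 0.02 * sx / n
            vy += 0.01 * cy / n + 0.05 * ay / n + 0.02 * sy / n
        speed = math.hypot(vx, vy)
        if speed > max_speed:                                   # cap the speed
            vx, vy = vx / speed * max_speed, vy / speed * max_speed
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

random.seed(0)
pos = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(30)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(100):
    pos, vel = step(pos, vel)
```

Plot the positions over time and the group looks coordinated, even “intelligent.” Every rule above is local and mindless; the flock is an emergent effect.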
Just because something looks intelligent doesn’t mean it is. Confirmation bias and our human instinct to see patterns everywhere wield a powerful influence over our perceptions.
The Bottom Line
There is no doubt that AI will be used, abused, and weaponized.
It will make some professions productive beyond their most optimistic imaginings, while others will fade into obsolescence. Politicians will eventually get involved and screw it up. Practical and creative people will find ways to make it work for them. (Sadly, “making it work” probably won’t include replacing politicians, but don’t discount the possibility of a virtual candidate running for office.)
But AI will almost certainly follow the same arc as every other revolutionary technology in history. Today, we are collectively wowed and intimidated, but if we embrace the change and commit to learning its strengths and weaknesses, we will find increasing numbers of ways to use it.
Eventually, AI will be just another tool in our creative toolbox. Young creatives, 20 years from now, will wonder what all the fuss was about.
We Have Met the Enemy, and He Is Us
Internet rabbit holes, the rise of the cat video, doomscrolling, binge-watching. Computers and algorithms get the blame, but these are all behaviors and products of people. Computers, AI algorithms, and the internet are merely facilitators.
When the dust settles, several years hence, it is AI’s usefulness to people and how it is used by people that will define its role. That it will be used is inevitable.
The big questions we must ask are, “Who will define what is useful, and how will they define it?” and “How will we use it?”
We don’t know the emergent behavior of humans using AI—we haven’t yet plumbed the emergent behavior of humans using social media—but we do know that, like any powerful new technology, it will need careful shepherding.
Being alert to our own preconceptions and confirmation bias will go a long way toward helping us steer AI in practical and useful directions.