What if you saw an adorable photo of an amazing sand sculpture showing a dozen different dogs - all perfectly rendered, with not one grain out of place? What if a platter of cut fruit was meticulously arranged in the flawless form of a dragon for a Chinese New Year celebration?
What if the glamorous, fishnet-clad leg lamp from A Christmas Story was wearing a doofy neon Croc instead of an elegant black patent pump?
We'd certainly take a picture. We'd probably even share that picture online, where others would react with amazement.
But they shouldn't. In 2024 - and, we must assume, going forward - such images are increasingly likely to originate from AI. Fake images should be pretty easy to spot: they're a little too smooth; a little too slick. A little too perfect. Yet people continue to fall for them.
And not being able to tell the difference between AI and photographic or video documentation of real people, places, and objects will become a disastrous and dangerous handicap.
There's been a lot of hand-wringing over the worry that AI will replace the technical expertise and interpretive brilliance of our best visual artists, writers, and filmmakers. It might even replace the workaday graphic designers and copywriters who don't exactly stand on riverbanks painting landscapes waiting to go down in art history, but who nonetheless keep the world's businesses going with websites, product packaging, public signage, and much more - or so the thinking goes.
But all it takes is a brief thought exercise to see that this future is still a long way off: would you want to patronize a dermatologist or a jeweler or a construction company that advertised with a poster of six-fingered people? Would you watch a movie or buy a wall hanging featuring people with extra hands or legs? Probably not. Would you be interested in sitting through a full-length feature film in which you can't tell which body parts belong to which person? I don't think so.
AI still has all of these bugs and more. So I think we artists and writers are safe for the meantime. It's comparable to suggesting that Google Translate obviates the need for multilingual interpreters: nuance still matters a great deal. In both cases, there's still no substitute for putting human brains on the job.
No, the real danger is that of assuming anything we see with our own eyes is real. And it sure seems that a whole bunch of us do just that.
In an era when fake news and disinformation regularly flood both right- and left-wing news sources, an already-troubling number of folks seem to have no skepticism at all about what they read. If it fits with their worldview, they swallow it whole - sometimes with disastrous effects.
But reading something critically, examining it with skepticism even when you don't want to be skeptical, and researching other news coverage to validate or disprove each potentially-questionable claim you come across is hard! Or at least, it's time-consuming. Pictures, though? Pictures we can trust ... right?
I'm sorry, but no.
Women my age have long been taught that many of the images we grew up with were doctored as a matter of course. As a teen, I learned that lighting, makeup, and forced perspective made models in fashion magazines appear variously waiflike, absolutely ripped, cellulite-free, or endowed with cheekbones an Easter Island head would envy. Those techniques were soon supplanted by Photoshop, which carved improbable thigh gaps and wasp waists out of thin air. And now, of course, we have filters, professional lighting in private homes, and more photo-editing tools in our purses at age 40 than the professionals had access to when we were children.
So some of us are perfectly prepared for this. Case in point: hordes of terminally jaded millennial women scrutinized a recent family portrait of Catherine, Princess of Wales, and her children - released after she'd been out of the public eye just a little too long while facing health issues - and found an unprecedented 16 editing errors. It was painful at the time, and it got worse: it's since been reported that the Princess has cancer.
Once the editing errors had been spotted, the Associated Press, which had vouched for the image, took the extraordinary step of retracting the photo, saying it had been "manipulated by the source." To those who know their photo editing, it came as validation.
Yet, as invasive and generally icky as this dogged debunking seemed, the whole episode can be instructive for consumers of media. And, I'm sorry to say, those most in need of such instruction tend to be older, less media-savvy folks.
There's probably no harm in believing that an especially beautiful or well-conceived design on a charcuterie board is real when, in fact, it was generated by AI. And what would it hurt to indulge in a little magical thinking - to believe that maybe a cloud formation in the shape of, say, a too-perfect floof of a dog happened on its own? After all, we can't lose our sense of wonder at the world.
But what if an image were created to show your Congressperson doing a line of coke, or lasciviously grabbing someone who wasn't their partner? What if the viewer didn't notice the six fingers?
What if an image were created which misrepresented the devastation of a war?
How about a video, or a phone message, in which the President's voice had been cloned and was telling people not to vote in an election? That's already happened.
If you witnessed such a thing and were determined to believe your eyes - or your ears - above all else, you might very well donate money to the campaign fund of a candidate you admire less, or to a fraudulent "aid group." It could affect your vote.
Actually, it could affect the votes of thousands of people or more. You could end up selecting someone for office who had deliberately sabotaged their opponent.
And that would only affirm for agents of chaos and malice that this is a great tactic to keep using - making the problem worse and worse.
Now, is all of this fair? Is it fair that you should have to scrutinize and fact-check so much of the information you consume, especially if you've lived a significant portion of your life in a less technology-bound era? Of course not. But here we are, and we have to adapt.
So take in your media with a healthy dose of skepticism - and hey, when you do see a cloud that looks just like a dog, or a pineapple, or John Lennon, absolutely take a photo!
We can't afford to forget what real life looks like.