Fake? Sure. But since when has the holiday card, that stressful annual performance of family joy, been entirely about reality?
AI lets you express what you’re feeling even if you can’t capture it IRL. Gather far-flung family into one scene. Take the vacation you couldn’t afford. Go ahead, turn yourself into an elf. AI is digital drag: You’re no longer constrained by the laws of time and space to tell your story.
The AI was hard to control, including once when it decided on its own to draw me without pants. Yet crafting my card left a much bigger impression than I expected. AI took me on a trip into childhood to re-create memories of my grandmother Margaret’s glowing Christmas decorations. Here I’m sharing a version of my card that features me alone, though I also used the AI to make my squirmy 2-year-old pose for a family portrait in front of his great-grandma’s tree. (What would you make? Send me an email.)
This is all possible because AI tools have hit an inflection point. You no longer need to master coding or advanced Photoshopping to generate pictures wholly out of your imagination. And I’m not the only one using it for the holidays: AI selfie-generating app Lensa added a Christmas option that lets you transform yourself into a snow bunny or hunky Santa. And Coca-Cola has a website where you can use a version of OpenAI’s DALL-E image generator to make the soda brand’s classic Santa do whatever you want — as long as it’s not too naughty.
If some of this feels worrisome, I hear you. Lately, I’ve been exploring what role AI can, and should, play in our lives. In my experiments, I’ve discovered many ways the tech industry’s race to throw AI into everything can do harm, from fueling eating disorders to upending democracy. While making my Christmas card, biases baked into AI software reared their ugly heads by making my family look too White and even too attractive. And sometimes AI literally made ugly heads with crossed eyes and deformed features, along with extra fingers and legs, because of a bizarre quirk of how it generates images.
Yet it’s also true that my final product was a total delight. There’s something more authentic about this fakeness than, say, an Instagram filter. The viewer is in on the joke. Making an AI Christmas card helped me see not only the limitations of AI, but also its possibilities.
How I made an AI Christmas card
You might think AI is all about automation and saving time. Let me correct the record: Making my Christmas card with AI was a lot of work. I ended up using not one but three different commercial AI tools, each of which comes with a small price tag and a learning curve.
But ultimately, I discovered I could get as much out of AI as I put into it.
It started with me typing “make me a whimsical family Christmas card” into ChatGPT. That’s the primary way you interact with DALL-E 3, OpenAI’s image-generating AI.
Out popped an illustration about as unremarkable as Christmas cards you’d find in the drugstore bargain bin, featuring a family that looks nothing like mine holding gingerbread men. “Try again,” I typed. This time I got a picture of a mom feeding a snowman a carrot while a boy stuffs a carrot into the snowman’s belly.
Without really understanding what a Christmas card (or even a snowman) means, the tech was trying to find statistically middle-of-the-road responses to my prompt based on piles and piles of training images. I needed my own artistic vision.
So I chatted back and forth with the AI for days, with increasing specificity. “Make it a Victorian Christmas,” I typed. “More gothic.” I tried “Pixar-cartoon Christmas” and “80s movie” themes. It was like a conversation with a slot machine: After each new chat, I waited a few moments to see how it interpreted my words. Sometimes I got flashes of brilliance; other times I got 11 fingers and three legs.
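(If you’d rather iterate in code than in a chat window, DALL-E 3 is also available through OpenAI’s developer API. Here’s a minimal sketch in Python, assuming the openai package, v1.x, and an API key in your environment; the prompt and image size are just examples, not my exact recipe.)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An example prompt; swap in whatever theme you're chasing.
prompt = (
    "Wes Anderson style surrealistic photo Christmas card, "
    "1960s living room, aluminum tree, rotating color wheel, "
    "pink reindeer guarding the presents"
)

response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# DALL-E 3 returns a temporary URL to the generated image.
print(response.data[0].url)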
I wanted my card to make viewers do a double take, but in a good way, so I headed in the direction of photographic style in an unreal setting. I struck gold with the prompt: “Wes Anderson style surrealistic photo Christmas card.” The AI began setting family portraits in candy-colored rooms with elaborately decorated trees, garland — even Rudolph’s head mounted on the wall.
One version with a 1960s vibe reminded me of how my grandmother Margaret used to decorate for Christmas. So I told the AI to generate some of the details I could recall, such as an Evergleam Christmas tree and pink reindeer guarding the presents. “More colorful lights reflecting around the room,” I typed, recalling how I used to crawl underneath the aluminum branches and watch a rotating color wheel turn everything rainbow. I wasn’t just making a picture; I was reassembling a memory.
Now I needed to get my likeness in there. I described myself to the chatbot, but it kept spitting out fitness models. I told it my real age and about my salt-and-pepper hair, but that still didn’t help. AI has a “hotness” problem, rooted in training data with too many perfect selfies and models. “No, fatter,” I kept typing. (“You look like you got a jump on New Year’s resolutions,” said one disapproving friend after receiving my card.)
Finally, I landed on the right feeling with a prompt that totaled 153 words — even though the AI me, sitting on top of a large present, had too many fingers and slightly mismatched shoes. (Someday, I imagine, I might look at my extremities here and say: “How 2023.”)
There was a bigger problem: The AI me still had someone else’s face. DALL-E won’t intentionally let you re-create the likeness of a real person, a guardrail against deepfakes, images that borrow a real face to mislead or harm. There are more technically complicated ways I might have trained my own AI to clone our faces, but I wanted to stick with widely accessible tools.
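(For the curious, the more complicated route usually means fine-tuning an open model such as Stable Diffusion on a batch of your own photos, a technique known as DreamBooth, then generating portraits from the result. Below is a minimal sketch of just the generation step using Hugging Face’s diffusers library; the model ID and prompt are illustrative, and the fine-tuning itself is a much bigger project.)

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint. In practice you'd point this at a model
# fine-tuned on your own photos; this public ID is just a placeholder.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU makes this far faster

prompt = "retro studio portrait, 1960s living room, aluminum Christmas tree, warm colored lights"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("christmas_portrait.png")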
I turned to a dedicated AI avatar program to make my family’s faces, the Lensa app. I fed it 20 reference photos of each family member’s face, and its “magic avatar” spit back out 50 AI-generated profile photos in different styles, such as business, Roman Empire and, now, Christmas.
Mine were laughably dashing, but close enough to reality. It had more trouble getting the faces right for members of my family who are of Asian descent, often making them stray too far from their actual facial features or just look too White. This bias is probably baked into the AI model the app uses, called Stable Diffusion. (Lensa’s maker didn’t respond to an email.)
At last I had my background and my faces, but I needed to combine them into one. For that, I turned to one more AI tool: Photoshop. The classic photo-editing app this year added a function called “generative fill” that lets you select specific areas of an existing picture and have AI generate what goes there.
I cut out the heads from the Lensa images, Photoshopped them into the DALL-E image and let generative fill stitch them together in a way that looks more natural. I also used that tool to fill in and fix little details. I left the weird fingers and a few other Easter eggs for fun.
Three AIs later, I had my Christmas card. Altogether, it cost me $49 and a lot of free time.
I sent it to my dad. “Looks like a classic Margaret Fowler Christmas decoration!” he said.
Fake? Sure. But also more real than any photo I could have taken on my own.