
ChatGPT Can Replicate the Tropes of Asian Diaspora Food Writing



As if there weren’t already enough “stinky lunchbox” stories in the canon of diaspora food writing, another argument in favor of putting that tired trope to rest: ChatGPT, an OpenAI chatbot that can algorithmically generate pieces of writing and conversation, is capable of nailing down the narrative exactly, churning out an essay I wouldn’t be surprised to read in a food publication during a heritage month.

In response to the prompt “write a stinky lunchbox immigrant memoir,” ChatGPT generates a few paragraphs. “As a child, I remember always feeling like the outsider in my school,” it begins. “I was one of the few kids who came from a family of immigrants and I often felt like my lunchbox was a glaring symbol of my difference.”

The story continues: I was made fun of for my sandwich (the food choice a failure of the model, perhaps) and felt alone. As I got older, I appreciated diversity. I understood the lunchbox “not as a source of shame, but as a representation of my unique heritage and the love and care that went into every meal my mom packed for me.” I end up thankful for the stinky lunchbox, and for “the lessons it taught me about tolerance, acceptance, and pride in who I am.”

Add the descriptor “Filipino immigrant” to the prompt, and ChatGPT gets even sharper. Instead of the idiosyncratic sandwich, it is adobo, pancit, and fried plantains that fill the cafeteria with smells. Fermented fish sauce, ripe mangoes, and stinky tofu “all combined to create a pungent aroma to clear a room,” it writes. There are cracks in the success of this version: Stinky tofu isn’t a particularly Filipino dish, and ripe mangoes don’t generally appear in adobo or pancit, despite their boundless goodwill in diasporic imaginations. And yet, it’s easy to understand the AI’s confusion here — the ripe mango, at least, is such a common fixture of diasporic writing that work like that of poet Rupi Kaur is often derisively referred to as “mango diaspora poetry.” (ChatGPT succeeds at this prompt as well.)

I had to ask ChatGPT about another common narrative in diaspora food writing: the one where immigrant parents never say “I love you,” but they do cut fruit. I have never found much relatability in this one, personally, but for a prompt, I wrote, “Write a diaspora memoir about Asian parents cutting fruit.” Once again, ChatGPT succeeds, generating paragraphs about growing up in a “traditional Asian household,” in which my parents meticulously chose perfect fruit and arranged it neatly as a gift to me. “A metaphor for my parents’ love and care for us,” it states; the act of cutting fruit allowed us to “connect with our cultural roots, even as we navigated the challenges of life in a new country.”

To be clear, neither the “stinky lunchbox” nor the “cut fruit as love language” narrative is necessarily bad, and neither would have become such a popular literary device if the experiences weren’t real to so many people. But if the point of continuing to tell these specific stories is to help us unravel anything interesting about the human condition, too many of these iterations disappoint. In response to my tweets, one person described the generated texts as “so bland,” unable to beat “real human writers.” I don’t disagree that ChatGPT’s works are bland, but here, I think, lies the problem: These texts, spat out soullessly by AI, are bland because the real writing — and the real thinking — on which the model has trained is itself bland, forcing itself into a narrative arc so predictable that AI knows exactly which notes to hit.

As my colleague Jaya Saxena wrote about the “stinky lunchbox moment,” there is a tendency among writers to whittle our nuanced real-life experiences into their “most obvious and recognizable parts,” with this trope-ification conveying racialized trauma that is ultimately palatable to white readers. Indeed, we have troped our way to the point that simply mentioning “Asian parents” in my prompt has ChatGPT grasping at stereotypes; they run a “traditional Asian household,” for example. It’s only natural for marginalized writers to latch onto the more obvious and recognizable parts of our experiences in order to find those who can relate — but then, look at the way we end up boxing ourselves in.

The fairly basic nature of ChatGPT’s generated texts has led to recent speculation about the future of the college essay and of academia. Of course, in the background of all this is the broader question: What does this mean for writers, the human ones? (This conversation is also playing out around artists, as the Lensa AI app gains traction.) Digital media and publishing are already precarious enough without the looming threat of AI getting better at doing our jobs. If AI can crank out good-enough diaspora food writing — the kind with culture clash, and in which food stands in for bigger ideas about identity — what happens next?

Maybe it is a sign for us to think ourselves out of these tropes, to look past the most obvious and to consider what might be more interesting. I do think there is a more generous read on this little ChatGPT experiment: that the replicability of these stories can be an opportunity to push us ideologically, as opposed to leading us to despair about our careers. In craft and in concept, shouldn’t we find it spiritually unfulfilling to weave a narrative that is so easy to predict and so neat in its realizations? And what is the point in making work that is so similar to all that came before it? Perhaps what ChatGPT can offer us is a chance to see how our writing should, from here, diverge.


