ChatGPT’s release a year ago triggered a wave of panic among educators. Now, universities are in the midst of college application season, concerned that students might use the artificial intelligence tool to forge admissions essays.
But is a chatbot-created essay good enough to fool college admissions counselors?
To find out, The Washington Post asked a prompt engineer — an expert at directing AI chatbots — to create college essays using ChatGPT. The chatbot produced two essays: one responding to a question from the Common Application, which thousands of colleges use for admissions, and one answering a prompt used solely for applicants to Harvard University.
We presented these essays to a former Ivy League college admissions counselor, Adam Nguyen, who previously advised students at Harvard University and read admissions essays at Columbia University. As a control, we also gave Nguyen a set of real college admissions essays penned by Jasmine Green, a Post intern who used them to get into Harvard University, where she is now a senior.
We asked Nguyen to read the essays and spot which ones were produced by AI. The results were illuminating.
Can you figure out which one was written by a human?
Who wrote this?
When I wasn’t nose-deep in To Kill a Mockingbird or jotting down research for my paper on redlining, I wrote little pieces. But this—this was different. The room was filled with kids from my block, a neighborhood crisscrossed with stories like powerlines. My mom, a nurse who’s seen her share of life, sat in the back
Since kindergarten, I have evaluated myself from the reflection of my teachers. I was the clever, gifted child. I was a pleasure to have in class. I was driven and tenacious – but lazy? Unmotivated? No instructor had ever directed those harsh words at me. My identity as a stellar student had been stripped of its luster; I was destroyed.
Computer science and college admissions experts say that AI-created essays have some easy tells — helpful for admissions officers who are prepping for an uptick in ChatGPT-written essays.
Responses written by ChatGPT often lack specific details, producing essays with little supporting evidence for their points. The writing is trite and leans on platitudes to explain situations rather than delving into the author’s emotional experience. The essays are often repetitive and predictable, leaving readers without surprise or a sense of the writer’s journey. When chatbots produce content on issues of race, sex or socioeconomic status, they often fall back on stereotypes.
At first, Nguyen was impressed by the AI-generated essays: They were readable and mostly free of grammatical errors. But if he were reviewing the essay as part of an application package, he would have stopped reading.
“The essay is such a mediocre essay that it would not help the candidate’s application or chances,” he said in an interview. “In fact, it would probably diminish it.”
Here is how Nguyen evaluated ChatGPT’s essay.
Nguyen said that while AI may be adequate for everyday writing, it is particularly unhelpful for college admissions essays. To start, he said, admissions offices are using AI screening tools to filter out computer-generated essays. (That technology can be inaccurate and falsely implicate students, a Post analysis found.)
But more importantly, admissions essays are a unique type of writing, he said. They require students to reflect on their life and craft their experiences into a compelling narrative that quickly provides college admissions counselors with a sense of why that person is unique.
“ChatGPT is not there,” he said.
Nguyen understands why AI might be appealing. College application deadlines often fall around the busiest time of the year, near winter holidays and end-of-semester exams. “Students are overwhelmed,” Nguyen said.
But Nguyen isn’t entirely opposed to using AI in the application process. In his current business, Ivy Link, he helps students craft college applications. For those who are weak in writing, he sometimes suggests they use AI chatbots to start the brainstorming process, he said.
For those who can’t resist the urge to use AI for more than just inspiration, there may be consequences.
“Their essays will be terrible,” he said, “and might not even reflect who they are.”
About this story
Jasmine Green contributed to this report.
The Washington Post worked with Benjamin Breen, an associate professor of history at the University of California at Santa Cruz who studies the impact of technological change, to create the AI-generated essays.
Editing by Karly Domb Sadof, Betty Chavarria and Alexis Sobel Fitts.