
17 Doctors Didn’t Diagnose Her Son’s Disorder. ChatGPT Did.

During the pandemic lockdown, a mother's 4-year-old son began experiencing pain, launching a three-year search to find the source of his agony.

“We saw so many doctors. We ended up in the ER at one point. I kept pushing,” the mother told TODAY. “I really spent the night on the (computer) … going through all these things.”

After countless unsuccessful attempts at a diagnosis, including visits to dentists, pediatricians, and physical therapists, the mother's questions were finally answered, not by a doctor but by ChatGPT.

The mother told TODAY that the 17 doctors the family saw over the three-year period only ever offered referrals or solutions within their own areas of expertise, never the big picture, and always left her without a diagnosis.

The mother shared intimate details of her son's pain with the chatbot, including information from his various MRIs, and it suggested the diagnosis might be tethered cord syndrome, a neurological disorder in which the spinal cord's movement is restricted, causing pain.

The suggestion, she says, "made a lot of sense."

“I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT,” she told TODAY. “I put the note in there about … how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”

After ChatGPT's suggestion, the mother joined a Facebook group of parents whose children have the condition and found similarities between her son and theirs. She then sought out a neurosurgeon specializing in the disorder, who confirmed the diagnosis of tethered cord syndrome.


Since its launch in November 2022, ChatGPT has sparked controversy over its widespread use and potential risks, raising concerns about plagiarism, cheating, legal implications, and potential harm to humanity. The chatbot, however, was built to do one job: answer the question it is asked, and it is scarily good at doing so with near-immediate speed.

In the mother's case, the chatbot proved useful because each doctor she visited was limited to their own medical specialty, whereas ChatGPT draws on information spanning many fields at once.

Still, even if ChatGPT feels less "results heavy" to some users than scrolling a search engine or bouncing from one specialist to another, it is not always correct, and its creator, OpenAI, has openly acknowledged that the chatbot is prone to errors and bias.

And when it comes to medical issues, experts warn that, whatever its possible benefits, the chatbot is no substitute for a human doctor.

"It's not wrong to use these tools," Dr. Byron Crowe, an internal medicine physician, told The New York Times. "You just have to use them in the right way. It's a great thought partner, but it doesn't replace deep medical expertise."

