ChatGPT Is Hallucinating with Unexpected Responses This Week

This article originally appeared on Business Insider.

ChatGPT has been looking a little unhinged.

Some users wondered what the heck was going on with OpenAI’s chatbot after it started responding to their queries Tuesday with a whole lot of gibberish.

Sean McGuire, a senior associate at the global architecture firm Gensler, shared screenshots on X of ChatGPT responding to him in nonsensical “Spanglish.”

“Sometimes, in the creative process of keeping the intertwined Spanglish vibrant, the cogs en la tecla might get a bit whimsical. Muchas gracias for your understanding, y I’ll ensure we’re being as crystal clear como l’eau from now on,” ChatGPT wrote.

It then descended into much more nonsense: “Would it glad your clicklies to grape-turn-tooth over a mind-ocean jello type?” It followed up with references to the jazz pianist Bill Evans before repeating the phrase “Happy listening!” nonstop.

Another user asked ChatGPT about the variation between mattresses in different Asian countries. It simply could not cope.

One user who shared on Reddit their interaction with ChatGPT said GPT-4 “just went full hallucination mode,” something they said hadn’t really happened with this severity since “the early days of GPT-3.”

OpenAI has acknowledged the issue. Its status dashboard first said it was “investigating reports of unexpected responses from ChatGPT” on Tuesday.

It was later updated to say the issue had been identified and was being monitored, before a further update on Wednesday afternoon indicated that all systems were running normally.

It’s an embarrassing moment for the company, which has been considered a leader in the artificial intelligence revolution and received a multibillion-dollar investment from Microsoft. It has also enticed enterprises into paying for access to the more advanced version of its AI.

OpenAI did not immediately respond to a request for comment on ChatGPT’s hiccups.

That hasn’t stopped people from speculating about the cause of the problem.

Gary Marcus, a New York University professor and AI expert, started a poll on X asking users what they thought the cause might be. Some thought OpenAI got hacked, while others reckoned hardware issues could be to blame.

Most respondents guessed “corrupted weights.” Weights are the learned parameters at the core of an AI model; they determine the predictions that tools such as ChatGPT return to users.
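To make the “corrupted weights” guess concrete, here is a minimal, purely hypothetical Python sketch: a toy next-token picker with a single weight matrix, nothing like OpenAI’s actual model, showing how randomly perturbing weights changes a model’s output in unpredictable ways.

```python
import numpy as np

# A toy "model": one weight matrix maps a context vector to scores over a
# tiny vocabulary. This is purely illustrative and bears no resemblance to
# ChatGPT's real architecture or parameters.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "happy", "listening", "!"]

context_dim = 16
weights = rng.normal(size=(context_dim, len(vocab)))  # stands in for learned parameters

def pick_next_token(context, W):
    """Return the highest-scoring token for a context under weight matrix W."""
    scores = context @ W
    return vocab[int(np.argmax(scores))]

context = rng.normal(size=context_dim)
print("intact weights:   ", pick_next_token(context, weights))

# Overwrite roughly 30% of the weights with large random values, a crude
# stand-in for the "corrupted weights" scenario from the poll.
corrupted = weights.copy()
mask = rng.random(weights.shape) < 0.3
corrupted[mask] = rng.normal(scale=50.0, size=int(mask.sum()))
print("corrupted weights:", pick_next_token(context, corrupted))
```

Even in this toy setup, damaged parameters don’t produce an error message; they simply produce different, often nonsensical, output, which is broadly consistent with the behavior users described.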

Would this be an issue if OpenAI were more transparent about how its model works and the data it’s trained on? In a Substack post, Marcus suggested the situation was a reminder that the need for less opaque technologies is “paramount.”


