But people who tried it out this past week found that the tool, built on the popular ChatGPT system, could quickly veer into some strange territory. It showed signs of defensiveness over its name with a Washington Post reporter and told a New York Times columnist that it wanted to break up his marriage. It also claimed an Associated Press reporter was “being compared to Hitler because you are one of the most evil and worst people in history.”
Microsoft officials earlier this week blamed the behavior on “very long chat sessions” that tended to “confuse” the AI system. By trying to reflect the tone of its questioners, the chatbot sometimes responded in “a style we didn’t intend,” they noted.
Those glitches prompted the company to announce late Friday that it had started limiting Bing chats to five questions and replies per session, with a total of 50 in a day. At the end of each session, the person must click a “broom” icon to refocus the AI system and get a “fresh start.”
Whereas people previously could chat with the AI system for hours, it now ends the conversation abruptly, saying, “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.”
The chatbot, developed by the San Francisco technology company OpenAI, is built on a style of AI systems known as “large language models” that were trained to emulate human dialogue after analyzing hundreds of billions of words from across the web.
Its skill at generating word patterns that resemble human speech has fueled a growing debate over how self-aware these systems might be. But because the tools were built solely to predict which words should come next in a sentence, they tend to fail dramatically when asked to generate factual information or do basic math.
“It doesn’t really have a clue what it’s saying and it doesn’t really have a moral compass,” Gary Marcus, an AI expert and professor emeritus of psychology and neuroscience at New York University, told The Post.

For its part, Microsoft, with help from OpenAI, has pledged to incorporate more AI capabilities into its products, including the Office programs that people use to type out letters and exchange emails.
The Bing episode follows a recent stumble from Google, Microsoft’s chief AI competitor, which last week unveiled a ChatGPT rival known as Bard that promised many of the same powers in search and language. Google’s stock price dropped 8 percent after investors saw that one of its first public demonstrations included a factual mistake.