Google said its Bard chatbot can summarize files from Gmail and Google Docs, but users showed it fabricating emails that were never sent. OpenAI heralded its new Dall-E 3 image generator, but people on social media soon pointed out that the images in the official demos missed some requested details. And Amazon announced a new conversational mode for Alexa, but the device repeatedly messed up in a demo for The Washington Post, including recommending a museum in the wrong part of the country.
Spurred by a hypercompetitive race to dominate the revolutionary “generative” AI technology that can write humanlike text and produce realistic-looking images, the tech giants are fast-tracking their products to consumers. Getting more people to use them generates the data needed to make them better, an incentive to push the tools out to as many people as they can. But many experts — and even tech executives themselves — have cautioned against the dangers of releasing largely new and untested technology.
“There’s a really bad sense of FOMO among big tech companies that want to do AI, and they don’t want to miss out on generating an early audience,” said Steve Teixeira, chief product officer of Mozilla and a former executive at Microsoft, Facebook and Twitter. “All of them know these systems are not perfect.”
The companies say that they’ve been clear that their AI is a work in progress, and that they’ve taken care to build in guardrails to stop the tech from making offensive or biased statements. Some executives, such as OpenAI CEO Sam Altman, have argued it’s better to get people using AI tools now to see what kind of risks they come with before the tech becomes more powerful.
The companies declined requests for further comment.
But the speedy and flawed rollout is at odds with months of warnings — including a one-sentence statement signed by hundreds of experts saying AI poses risks to humanity on par with nuclear weapons and pandemics — that companies should be more cautious. Concerns range from the near term, such as AI infusing more sexist and racist biases into important tech, to longer-term fears of a sci-fi future where AI surpasses human intelligence and begins acting on its own.
Regulators have already taken notice. Congress has held numerous AI meetings and hearings, and multiple bills have been proposed, though little concrete action has been taken against the companies. Last week, executives including Tesla CEO Elon Musk and Meta CEO Mark Zuckerberg gathered to answer questions from lawmakers, who have said they plan to draft legislation to regulate the technology.
European Union lawmakers are moving ahead on regulation that would ban some uses of AI, such as predicting criminal behavior, and create strict rules for the rest of the industry. In the United Kingdom, the government is planning a major summit in November for AI and government leaders to discuss global cooperation.
But regulation should be balanced with leaving space for companies to invent beneficial tech, British Finance Minister Jeremy Hunt said in an interview this week during a visit to San Francisco.
“Competitive tension is behind most technological advances,” he said. “We need to be really smart in the way we construct a regulatory environment that allows innovation to happen whilst making sure there are sufficient guard rails.”
(Amazon founder Jeff Bezos owns The Post. Interim CEO Patty Stonesifer sits on Amazon’s board.)
Most falls, companies from Apple to Microsoft use the season to unveil new devices just in time for the holiday shopping rush. This year, the focus is on AI.
Many people first directly experienced the latest generation of AI when OpenAI launched ChatGPT last November. The tech behind chatbots and image generators is trained on billions of lines of text or images scraped from the open internet. ChatGPT’s ability to answer complex questions, pass professional exams and have humanlike conversations sparked a surge in interest, and the biggest companies scrambled to respond.
But there are still problems. Chatbots routinely make up false information and pass it off as real, an issue AI experts refer to as “hallucinating.” Image generators are improving rapidly, but there is still no consensus about how to stop them from being used to create propaganda and disinformation, especially as the United States rushes toward the 2024 elections.
Reams of copyrighted material were used to train the bots, too, prompting a wave of lawsuits that accuse the tech companies of theft and raise questions about the legal underpinnings of generative AI. Just this week, some of the world’s best-known novelists banded together to sue OpenAI for using their work to train its AI tools.
Microsoft, a leader in the AI race, launched the first version of its Bing chatbot in February, pitching it as a potential replacement for search engines because it could answer conversational questions on almost any topic. Almost immediately, the bot went off the rails, accosting users, telling people it could think and feel and referring to itself by the alter ego “Sydney.” The company quickly dialed back the bot’s creativity, making it act more reserved, and limited the number of questions people could ask in a single session, explaining that long conversations were allowing users to lead the bot in strange directions.
On Thursday, Microsoft outlined plans to make its AI “copilots,” which help people do tasks in Microsoft Word and Excel, front and center in much of its software. Starting next week, computers running the latest Windows software will add a highly visible icon for people to ask Microsoft’s AI for help with tasks such as troubleshooting an audio problem on their computer or summarizing a long online article in Microsoft’s Edge web browser.
Microsoft CEO Satya Nadella at the event compared the past 10 months since ChatGPT exploded into public consciousness to previous technology revolutions including the invention of personal computers, the internet and smartphones.
“It’s kind of like the ’90s are back. It’s exciting to be in a place where we are bringing some software innovation,” he said.
Meanwhile, OpenAI — which provides the underpinnings of that technology — launched the latest version of its image generator, Dall-E 3. Instead of requiring users to become experts at writing complex prompts to get detailed images, the company infused its chatbot tech into Dall-E 3 to make it better at grasping regular conversational language and delivering what people ask for.
But in a live demo for The Post, an executive showed an image of two young people doing business in a steampunk-style city, generated from prompts that specified one character should be a grumpy old man; the image missed that detail. In another example posted to OpenAI’s Twitter account, Dall-E was asked to show potato kings sitting on potato thrones. It produced tiny smiling potatoes wearing crowns, but no thrones.
If consumers begin using AI tools that don’t work, they may be turned off to the field completely, said Jim Hare, a vice president at tech research firm Gartner. “It may backfire.”
Google, which has adopted a new slogan, “bold and responsible,” to describe its approach to AI, integrated its Bard chatbot with a handful of its other major products, including Gmail, Google Docs, Google Flights and YouTube. Now, users can ask the bot to search through their emails and summarize them, pulling out the most important points.
But the tool makes a lot of mistakes, including inventing emails that didn’t exist, and suggesting random marketing emails when asked for a summary of urgent and important messages, according to a Post analysis.
Jack Krawczyk, the head of product for Bard, said the tech was still in its early stages and had seen major improvements in the six months since its launch. Google still places a label reading “experiment” at the top of the Bard homepage and notifies people that the bot might make mistakes.
The tech companies’ launch-first, fix-later approach comes with real risks, said Teixeira, the Mozilla executive. Chatbots usually present their information in an authoritative style, making it harder for people to recognize when what they’re being told is wrong. And the companies aren’t being open enough about how they use the data people enter as they interact with the bots.
“There’s certainly not a sufficient level of transparency to tell me what’s happening with my stuff,” he said.
Amazon’s launch of its generative AI conversational chatbot feature for its Alexa home speakers this week came months after its competitors. Amazon senior vice president of devices and services Dave Limp said the new tech made it possible to have a “near-human” conversation with Alexa.
But the company didn’t let journalists try it out themselves, and in an onstage demo, the chatbot punctuated its conversation with Limp with some long, awkward pauses.
“It’s not the endgame,” Limp said in an interview. The bot will keep getting better as time progresses, he said.
Still, Amazon aims to make a version of the chat mode available to all users of Amazon speakers sometime this year. In the United States, more than 70 million people use Alexa monthly, according to Insider Intelligence.
Shira Ovide, Geoffrey A. Fowler, Nitasha Tiku and Christina Passariello contributed to this report.