OpenAI CEO Sam Altman unleashed ChatGPT. Silicon Valley wasn’t ready.

Sam Altman made the decision that set Silicon Valley on fire all by himself.

Engineers at OpenAI, the artificial intelligence company of which Altman is chief executive, had built a series of powerful AI tools that could generate complex texts and write computer code. But they weren’t sure about releasing the technology for public use as a chatbot, fearing it wouldn’t resonate and wasn’t ready for prime time.

Decisions at the company are usually made by consensus: employees debate, experts are consulted and eventually a joint conclusion is reached. But when the question of the chatbot’s release cycled up to Altman, he said, he made a rare “contentious unilateral decision.”

“Doing ChatGPT was something that I pushed for that other people at the time didn’t really want to do,” he said. Employees asked, “Is the model good enough? Are people going to use it? Does anyone want to chat?”

People did, it turns out. ChatGPT launched in November, and now millions are using it and similar tools from other companies, a development that has reinvigorated Silicon Valley and set off an arms race to control a technology that industry insiders predict will be as transformative as the invention of the internet itself. Generative AI tools could completely change the way people find and synthesize information, replace or disrupt hundreds of millions of jobs and further cement the power Big Tech companies wield over society.

At the center of that race is Altman’s OpenAI, which has become the dominant player in the space by launching its technology to the public first, while bigger rivals dithered on the sidelines. Microsoft has spent billions to get OpenAI’s tech into its own products, helping it beat rival Google to market with a chatbot earlier this year.

For Altman, 37, the rise of OpenAI and the explosion of interest in ChatGPT have catapulted him from his role as a prolific investor and protégé of more powerful men to a central player among the most powerful people in tech. It’s also made him a key voice in the heated and globe-spanning debate over AI, what it’s capable of and who should control it.

“He’s an unbelievable entrepreneur,” said Microsoft CEO Satya Nadella. “He has this ability to bet big and be right on multiple fronts.”

It’s a strange position to be in. Altman is one of the driving forces pushing AI tech forward and into the public’s hands, while also being vocal about the potential dangers of the technology, like the risk of AI displacing human jobs or rogue actors using it to supercharge misinformation campaigns.

While Altman says he isn’t sure he’s naturally suited to be a CEO, he does believe he’s the right person to shepherd the development of a technology that he argues will have world-changing consequences.

“You do whatever it takes, even if it’s not your first choice,” Altman said.

As part of that job, he’s planned a round-the-world goodwill tour to talk with politicians and people using OpenAI’s technology. The month-long campaign — which will take him to Canada, Brazil, Nigeria, Europe, Singapore, Japan, Indonesia and Australia, among other stops — comes as debate over AI’s impact on the world is heating up. Regulators in multiple countries are scrutinizing OpenAI’s technology, asking questions about everything from copyright infringement to the risk of new and more sinister forms of misinformation. The Italian government temporarily banned ChatGPT in March, citing concerns about privacy and data collection.

Altman has long said public use of the technology presents potential dangers. But he argues that OpenAI is the right steward, a company able to strike a balance between releasing the technology for public testing and keeping enough details secret to prevent the AI from being used for hacking or misinformation campaigns.

A growing faction of technologists is begging Altman to slow down, arguing that the technology could rapidly become smarter than people and begin to oppress humanity. Skeptics say such fantastic claims distract from the more concrete problems AI is already creating, such as the propagation of sexist and racist biases.

Altman insists the company’s ultimate goal is to benefit all of humanity. But he has many naysayers, and some say he is endangering the world by launching untested technology. AI ethicists have warned against the decision to put the technology into the public’s hands.

Altman says he wants more government regulation, but for now that regulation doesn’t exist. So he’s forging on, believing that the path he’s set is the best one.

“It all comes down to what they think ‘benefiting all of humanity’ means,” said Alberto Romero, an analyst at AI research firm CambridgeAI who writes a newsletter about the industry. “Not everyone agrees with OpenAI’s definition.”

Many, too, have criticized Altman’s management of OpenAI. Since taking over in 2019, he’s played a major role in moving the company away from its founding mission as a nonprofit meant to provide a counterweight to Big Tech companies. Under Altman, the company became free to take on investors and make money, though its board of directors is still part of a nonprofit that technically controls the company. OpenAI has also reduced the amount of information it openly publishes, such as the types of data that go into training its AI models.

Perhaps most notably, OpenAI has signed a series of deals with Microsoft, which used underlying ChatGPT technology to launch its Bing chatbot. In exchange, OpenAI gets access to Microsoft’s cloud — huge data servers that give it the computing power to train and run AI programs.

Despite the contradictions, Altman said his company’s approach of getting the technology into the public’s hands while it is still early and imperfect will help society prepare for whatever changes AI brings. He argues that being scared of the technology’s potential keeps him cognizant of its risks.

“People talk about AI as a technological revolution. It’s even bigger than that,” Altman said. “It’s going to be this whole thing that touches all aspects of society.”

Altman hasn’t always been one of the most powerful men in tech. By his own admission, his first start-up, Loopt, was a bust.

Born in St. Louis, Altman got into coding as a kid and enrolled at Stanford to study computer science. He dropped out in 2005 to start Loopt, which offered a mobile app that was supposed to help people locate their friends. It resembled Apple’s Find My Friends feature, but came out before fullscreen smartphones were commonplace. After seven years, he sold it to a payment processing company, which took the technology but shut Loopt down.

“I failed pretty hard at my first start-up–it sucked!” Altman said in a February tweet. “Am doing pretty well on my second.”

Altman speaks in measured, steady sentences, sometimes taking long pauses to formulate his answers. He wears a nondescript Silicon Valley uniform — single-color t-shirts, hoodies, sneakers. And he wants you to know he’s open to feedback.

“Feel free, truly, to ask anything. I won’t be offended,” he said mid-interview.

Loopt was in the first cohort of Y Combinator, a tech start-up program founded by a group of four well-known Silicon Valley entrepreneurs to help founders get off the ground. Y Combinator grew, becoming one of the best-known programs of its type and helping launch companies including Airbnb, DoorDash, Reddit and Stripe. Altman returned in 2011 as a partner, and in 2014 co-founder Paul Graham named him the new president.

“He’s one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent,” Graham said in a blog post announcing the change.

Altman quickly moved to expand the company’s horizons. He increased the number of companies Y Combinator funded each year and raised a new $700 million fund to invest in companies that had already graduated from the program. He also started a research lab for Y Combinator focused on bigger, fundamental questions in science, tech and society. That included a project studying universal basic income, an idea that is popular among those who think AI will take over millions of jobs, leaving people without work and dependent on government payments.

But the first project it launched was OpenAI.

Instead of investors, it had donors, including Twitter and Tesla CEO Elon Musk, Palantir co-founder and conservative donor Peter Thiel, and Altman himself. When it was founded in 2015, Google, Amazon, Microsoft and Facebook had been busy hiring the world’s best AI researchers, putting them to work making sure Big Tech would be in the driver’s seat of AI development. The universities that had funded AI research for decades couldn’t compete.

OpenAI planned to do things differently, running as a nonprofit and encouraging its researchers to share their code and patents with the world so that the potentially revolutionary technology wouldn’t be monopolized by the tech giants. In a blog post announcing its formation, OpenAI’s founders said its purpose was to “benefit humanity as a whole, unconstrained by a need to generate financial return.”

When LinkedIn founder Reid Hoffman agreed to join OpenAI’s board, Altman suggested he come meet the company’s employees. He interviewed Hoffman in front of the team, grilling him on what he would do if Altman failed as CEO.

“Well, I’d work with you,” Hoffman said. Altman pressed him: “And what if I continued to not do it well?” “We’d fire you,” Hoffman finally responded.

Altman had made his point: He was not an autocrat.

Just as he had pushed Y Combinator to change and grow, Altman did the same at OpenAI. In 2019, he left the incubator to focus full time on OpenAI. That was the year the company launched GPT-2, and when Altman saw what the technology was capable of, he realized it was time to double down on it.

“This is really going to take over my whole life,” he recalls thinking at the time. The same year, OpenAI stopped being a nonprofit and did its first big deal with Microsoft, gaining access to the company’s warehouses full of computer servers to train bigger and more energy-intensive AI programs.

As the tech progressed, Altman realized he needed help. Training the large language models that are the backbone of OpenAI’s technology on trillions of words scoured from the internet requires an immense amount of computing power. To compete with the big companies that already owned their own server farms, OpenAI needed hard cash, and to get it, the company needed to offer investors the potential for big returns.

Shortly after Altman took over, the company dropped its nonprofit status and switched to what it called a “capped profit” structure: investors can earn up to 100 times their investment, and everything over that cap flows to the company’s nonprofit arm.

AI researchers and some of OpenAI’s own employees were leery of the company’s transition. Months before the switch, it announced a new language model called GPT-2, trained on 10 times as much data as the company’s previous version. The company showed off the software’s ability to generate full news stories about fictitious events based on one-sentence prompts, but didn’t release it publicly, citing the risk of people using it for malicious purposes.

Today, in contrast to its original mission, the company releases few details about what goes into the most powerful versions of its AI models, the most recent of which is GPT-4. The heavyweight tech billionaires attached to the project in its early days — Musk, Thiel and most recently Hoffman — have stepped aside, leaving Altman as the dominant personality running and representing OpenAI.

The company has already defied the natural laws of business. After OpenAI cut a deal with Microsoft, it sold the same underlying technology to Microsoft’s direct competitors, including Salesforce.

Musk has tweeted his frustration about donating money to a nonprofit that has suddenly become a for-profit enterprise. “I’m still confused as to how a nonprofit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?” Musk said. (He is working on starting his own AI company, according to the Information.)

In late March, venture capitalists and AI company founders gathered at the Cerebral Valley AI conference, a select event for AI industry insiders. While some attendees expressed fear that AI will soon take over the world, most were focused on finding ways to quickly build businesses around a technological shift viewed as the biggest thing since the advent of mobile phones.

Several panelists called the OpenAI-Microsoft partnership the dominant force in the industry, and some said it would make them hesitant to partner with OpenAI going forward.

“Their deal with Microsoft just gives Microsoft an advantage,” said Amjad Masad, CEO of Replit, a coding collaboration platform that features an AI to help with programming tasks. His company originally used OpenAI’s technology but switched to building its own after concluding it needed more independence.

“If you want to compete with Microsoft, you can’t use OpenAI,” he said.

Altman said the company is still much more transparent than its Big Tech competitors. It published 18 papers last year alone, detailing its findings on topics ranging from new techniques for training large language models to the potential economic impact of chatbots that can write computer code. Letting anyone have access to its technology would cause bigger problems, making it easy for bad actors to use generative AI for cyberattacks and misinformation, Altman said.

Still, OpenAI could have done a lot to benefit people and stay true to its original mission without trying to compete directly with the Big Tech players, said Romero, the AI analyst, even if that path would have been less ambitious.

“I think OpenAI execs were honest about their original vision and mission: Creating a nonprofit, focused on open source and free from financial ties,” he said. “However, when they found out they needed more money to continue the path they deemed most promising, they didn’t hesitate to change their principles.”


