
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too

Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s A.I. technologies.

The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.

OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023, according to the two people, who discussed sensitive information about the company on the condition of anonymity.

But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the F.B.I. or anyone else in law enforcement.

For some OpenAI employees, the news raised fears that foreign adversaries such as China could steal A.I. technology that — while now mostly a work and research tool — could eventually endanger U.S. national security. It also led to questions about how seriously OpenAI was treating security, and exposed fractures inside the company about the risks of artificial intelligence.

After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Leopold Aschenbrenner, a former OpenAI researcher, alluded to the security breach on a podcast last month and reiterated his worries. (Credit: via YouTube)

Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security wasn’t strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” an OpenAI spokeswoman, Liz Bourgeois, said. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, “While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work.”

Fears that a hack of an American technology company might have links to China are not unreasonable. Last month, Brad Smith, Microsoft’s president, testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a wide-ranging attack on federal government networks.

However, under federal and California law, OpenAI cannot prevent people from working at the company because of their nationality, and policy researchers have said that barring foreign talent from U.S. projects could significantly impede the progress of A.I. in the United States.

“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s head of security, told The New York Times in an interview. “It comes with some risks, and we need to figure those out.”

(The Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems.)

OpenAI is not the only company building increasingly powerful systems using rapidly improving A.I. technology. Some of them — most notably Meta, the owner of Facebook and Instagram — are freely sharing their designs with the rest of the world as open source software. They believe that the dangers posed by today’s A.I. technologies are slim and that sharing code allows engineers and researchers across the industry to identify and fix problems.

Today’s A.I. systems can help spread disinformation online, including text, still images and, increasingly, videos. They are also beginning to take away some jobs.

Companies like OpenAI and its competitors Anthropic and Google add guardrails to their A.I. applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread disinformation or cause other problems.

But there is not much evidence that today’s A.I. technologies are a significant national security risk. Studies by OpenAI, Anthropic and others over the past year showed that A.I. was not significantly more dangerous than search engines. Daniela Amodei, an Anthropic co-founder and the company’s president, said its latest A.I. technology would not be a major risk if its designs were stolen or freely shared with others.

“If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is ‘No, probably not,’” she told The Times last month. “Could it accelerate something for a bad actor down the road? Maybe. It is really speculative.”

Still, researchers and tech executives have long worried that A.I. could one day fuel the creation of new bioweapons or help break into government computer systems. Some even believe it could destroy humanity.

A number of companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to the OpenAI board of directors.

“We started investing in security years before ChatGPT,” Mr. Knight said. “We’re on a journey not only to understand the risks and stay ahead of them, but also to deepen our resilience.”

Federal officials and state lawmakers are also pushing toward government regulations that would bar companies from releasing certain A.I. technologies and fine them millions of dollars if their technologies caused harm. But experts say these dangers are still years or even decades away.

Chinese companies are building systems of their own that are nearly as powerful as the leading U.S. systems. By some metrics, China has eclipsed the United States as the biggest producer of A.I. talent, generating almost half of the world’s top A.I. researchers.

“It is not crazy to think that China will soon be ahead of the U.S.,” said Clément Delangue, chief executive of Hugging Face, a company that hosts many of the world’s open source A.I. projects.

Some researchers and national security leaders argue that the mathematical algorithms at the heart of current A.I. systems, while not dangerous today, could become dangerous and are calling for tighter controls on A.I. labs.

“Even if the worst-case scenarios are relatively low probability, if they are high impact then it is our responsibility to take them seriously,” Susan Rice, former domestic policy adviser to President Biden and former national security adviser for President Barack Obama, said during an event in Silicon Valley last month. “I do not think it is science fiction, as many like to claim.”
