Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons

Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death.

Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be.

Now, over 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm.

The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than harms, including new vaccines and medicines.

“As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.

The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.

This DNA manufacturing equipment is ultimately what allows for the development of bioweapons, said David Baker, the director of the Institute for Protein Design at the University of Washington, who helped shepherd the agreement.

“Protein design is just the first step in making synthetic proteins,” he said in an interview. “You then have to actually synthesize DNA and move the design from the computer into the real world — and that is the appropriate place to regulate.”

The agreement is one of many efforts to weigh the risks of A.I. against the possible benefits. As some experts warn that A.I. technologies can help spread disinformation, replace jobs at an unusual rate and perhaps even destroy humanity, tech companies, academic labs, regulators and lawmakers are struggling to understand these risks and find ways of addressing them.

Dr. Amodei’s company, Anthropic, builds large language models, or L.L.M.s, the new kind of technology that drives online chatbots. When he testified before Congress, he argued that the technology could soon help attackers build new bioweapons.

But he acknowledged that this was not possible today. Anthropic had recently conducted a detailed study showing that if someone were trying to acquire or design biological weapons, L.L.M.s were only marginally more useful than an ordinary internet search engine.

Dr. Amodei and others worry that as companies improve L.L.M.s and combine them with other technologies, a serious threat will arise. He told Congress that this was only two to three years away.

OpenAI, maker of the ChatGPT online chatbot, later ran a similar study that showed L.L.M.s were not significantly more dangerous than search engines. Aleksander Mądry, a professor of computer science at the Massachusetts Institute of Technology and OpenAI’s head of preparedness, said that he expected researchers would continue to improve these systems, but that he had not seen any evidence yet that they would be able to create new bioweapons.

Today’s L.L.M.s are created by analyzing enormous amounts of digital text culled from across the internet. This means that they regurgitate or recombine what is already available online, including existing information on biological attacks. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement during this process.)

But in an effort to speed the development of new medicines, vaccines and other useful biological materials, researchers are beginning to build similar A.I. systems that can generate new protein designs. Biologists say such technology could also help attackers design biological weapons, but they point out that actually building the weapons would require a multimillion-dollar laboratory, including DNA manufacturing equipment.

“There is some risk that does not require millions of dollars in infrastructure, but those risks have been around for a while and are not related to A.I.,” said Andrew White, a co-founder of the nonprofit Future House and one of the biologists who signed the agreement.

The biologists called for the development of security measures that would prevent DNA manufacturing equipment from being used with harmful materials — though it is unclear how those measures would work. They also called for safety and security reviews of new A.I. models before releasing them.

They did not argue that the technologies should be bottled up.

“These technologies should not be held only by a small number of people or organizations,” said Rama Ranganathan, a professor of biochemistry and molecular biology at the University of Chicago, who also signed the agreement. “The community of scientists should be able to freely explore them and contribute to them.”
