
Here’s how the military might use large language models like ChatGPT


Pentagon officials were hanging on to every word as Matthew Knight, OpenAI’s head of security, explained how the latest version of ChatGPT had succeeded in deciphering cryptic conversations within a Russian hacking group, a task that human analysts had found challenging.

“These logs were in Russian shorthand internet slang,” Knight said. “We had a Russian linguist on our team who had trouble getting through it. You know, a strong Russian speaker. But GPT-4 was able to get through it.”

The promises and the perils of advanced artificial intelligence were on display this week at a Pentagon-organized symposium on the future uses of AI by the military. Government and industry officials discussed how tools like large language models, or LLMs, could be used to help maintain the U.S. government’s strategic lead over rivals, especially China.

In addition to OpenAI, Amazon and Microsoft were among the companies demonstrating their technologies.

Not all of the discussion was upbeat. Some speakers urged caution in deploying systems that researchers are still working to fully understand.

“There is a looming concern over potential catastrophic accidents due to AI malfunction, and risk of substantial damage from adversarial attack targeting AI,” South Korean Army Lt. Col. Kangmin Kim said at the symposium. “Therefore, it is of paramount importance that we meticulously evaluate AI weapon systems from the developmental stage.”

He told Pentagon officials that they needed to address the issue of “accountability in the event of accidents.”

Craig Martell, head of the Pentagon’s Chief Digital and Artificial Intelligence Office, or CDAO, told reporters Thursday that he’s aware of such concerns.

“I would say we’re cranking too fast if we ship things that we don’t know how to evaluate,” he said. “I don’t think we should ship things that we don’t know how to evaluate.”

Though LLMs like ChatGPT are known to the public as chatbots, industry experts say chatting is not likely to be how the military would use them. They’re more likely to be used to complete tasks that would take too long or be too complicated if done by human beings. That means they’d probably be wielded by trained practitioners using them to harness powerful computers.

“Chat is a dead end,” said Shyam Sankar, chief technology officer of Palantir Technologies, a Pentagon contractor. “Instead, we reimagine LLMs and the prompts as being for developers, not for the end users. … It changes what you would even use them for.”
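To make Sankar’s point concrete, here is a minimal sketch, in Python, of the developer-facing pattern he describes: the prompt lives in application code and the model runs as an ordinary processing step, so the end user never sees a chat window. The model name, the prompt wording, and the summarize_report() helper are illustrative assumptions, not anything demonstrated at the symposium.

```python
# A minimal sketch of "LLMs for developers, not end users": the prompt is
# baked into the application, and the model is called like any other
# library function rather than through a chat interface.
# Assumptions: the OpenAI Python SDK (v1+), the "gpt-4" model name, and the
# summarize_report() helper are illustrative, not the Pentagon's tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Summarize the following field report in three bullet points, "
    "flagging any place names and dates:\n\n{report}"
)

def summarize_report(report_text: str) -> str:
    """Run one report through the model as a pipeline step."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": PROMPT_TEMPLATE.format(report=report_text)},
        ],
    )
    return response.choices[0].message.content

# Downstream code treats the model as a batch-processing component:
# summaries = [summarize_report(r) for r in incoming_reports]
```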

Looming in the symposium’s background was the United States’ technological race against China, which has growing echoes of the Cold War. The United States remains solidly in the lead on AI, researchers said, with Washington having hobbled Beijing’s progress through a series of sanctions. But U.S. officials worry that China may already have reached sufficient AI proficiency to boost its intelligence-gathering and military capabilities.

Pentagon leaders were reluctant to discuss China’s AI level when asked several times by members of the audience this week, but some of the industry experts invited to speak were willing to take a swing at the question.

Alexandr Wang, CEO of San Francisco-based Scale AI, which is working with the Pentagon on AI, said Thursday that China was far behind the United States in LLMs just a few years ago but has closed much of that gap through billions of dollars in investment. He said the United States looks poised to stay in the lead unless it makes unforced errors, such as failing to invest enough in AI applications or deploying LLMs in the wrong scenarios.

“This is an area where we, the United States, should win,” Wang said. “If we try to utilize the technology in scenarios where it’s not fit to be used, then we’re going to fall down. We’re going to shoot ourselves in the foot.”

Some researchers warned against the temptation to push emerging AI applications into the world before they were ready simply out of fear that China might catch up.

“What we see are worries about being or falling behind. This is the same dynamic that animated the development of nuclear weapons and later the hydrogen bomb,” said Jon Wolfsthal, director of global risk at the Federation of American Scientists, who did not attend the symposium. “Maybe these dynamics are unavoidable, but we are not — either in government or within the AI development community — sensitized enough to these risks nor factoring them into decisions about how far to integrate these new capabilities into some of our most sensitive systems.”

Rachael Martin, director of the Pentagon’s Maven program, which analyzes drone surveillance video, high-resolution satellite images and other visual information, said that experts in her program were looking to LLMs for help sifting through “millions to billions” of pieces of video and photo, “a scale that I think is probably unprecedented in the public sector.” The Maven program is run by the National Geospatial-Intelligence Agency and CDAO.

Martin said it remained unclear whether commercial LLMs, which were trained on public internet data, would be the best fit for Maven’s work.

“There is a vast difference between pictures of cats on the internet and satellite imagery,” she said. “We are unsure how much models that have been trained on those kinds of internet images will be useful for us.”

Interest was particularly high in Knight’s presentation about ChatGPT. OpenAI removed restrictions against military applications from its usage policy last month, and the company has begun working with the U.S. Defense Department’s Defense Advanced Research Projects Agency, or DARPA.

Knight said LLMs were well-suited for conducting sophisticated research across languages, identifying vulnerabilities in source code, and performing needle-in-a-haystack searches that were too laborious for humans. “Language models don’t get fatigued,” he said. “They could do this all day.”
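As a rough illustration of the needle-in-a-haystack searches Knight mentions, the sketch below fans a pile of log excerpts out to a model one chunk at a time and keeps only the flagged ones. The model name, the screening question, and the matches() helper are assumptions made for this example; they do not reflect OpenAI’s or the Pentagon’s actual tooling.

```python
# A hedged sketch of a needle-in-a-haystack search: ask the model a yes/no
# screening question about each chunk of text and keep the hits. Unlike a
# human analyst, the loop does not fatigue; it simply costs API calls.
# Assumptions: OpenAI Python SDK (v1+), "gpt-4", and the prompt wording.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "Does this log excerpt discuss credential theft? "
    "Answer YES or NO on the first line, then explain briefly."
)

def matches(chunk: str) -> bool:
    """Ask the model the screening question about one chunk of text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{QUESTION}\n\n---\n{chunk}"}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

def search(chunks: list[str]) -> list[str]:
    """Return only the chunks the model flags as relevant."""
    return [c for c in chunks if matches(c)]
```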

Knight also said LLMs could be useful for “disinformation action” by generating sock puppets, or fake social media accounts, filled with “sort of a baseball card bio of a person.” He noted this is a time-consuming task when done by humans.

“Once you have sock puppets, you can simulate them getting into arguments,” Knight said, showing a mock-up of phantom right-wing and left-wing individuals having a debate.

U.S. Navy Capt. M. Xavier Lugo, head of the CDAO’s generative AI task force, said onstage that the Pentagon would not use a company’s LLM against its wishes.

“If someone doesn’t want their foundational model to be utilized by DoD, then it won’t,” said Lugo.

The office hosting this week’s symposium, CDAO, was formed in June 2022 when the Pentagon merged four data analytics and AI-related units. Margaret Palmieri, deputy chief at CDAO, said the centralization of AI resources into a single office reflected the Pentagon’s interest in not only experimenting with these technologies but deploying them broadly.

“We are looking at the mission through a different lens, and that lens is scale,” she said.
