
Would Large Language Models Be Better If They Weren’t So Large?

For instance, nativism, an influential theory tracing back to Noam Chomsky’s early work, claims that humans learn language quickly and efficiently because they have an innate understanding of how language works. But language models learn language quickly, too, and seemingly without an innate understanding of how language works, so maybe nativism doesn’t hold water.

The challenge is that language models learn very differently from humans. Humans have bodies, social lives and rich sensations. We can smell mulch, feel the vanes of feathers, bump into doors and taste peppermints. Early on, we are exposed to simple spoken words and syntaxes that are often not represented in writing. So, Dr. Wilcox concluded, a computer that produces language after being trained on gazillions of written words can tell us only so much about our own linguistic process.

But if a language model were exposed only to words that a young human encounters, it might interact with language in ways that could address certain questions we have about our own abilities.

So, together with a half-dozen colleagues, Dr. Wilcox, Mr. Mueller and Dr. Warstadt conceived of the BabyLM Challenge to try to nudge language models slightly closer to human understanding. In January, they sent out a call for teams to train language models on the same number of words that a 13-year-old human encounters: roughly 100 million. Candidate models would be tested on how well they generated and picked up the nuances of language, and a winner would be declared.

Eva Portelance, a linguist at McGill University, came across the challenge the day it was announced. Her research straddles the often blurry line between computer science and linguistics. The first forays into A.I., in the 1950s, were driven by the desire to model human cognitive capacities in computers; the basic unit of information processing in A.I. is the “neuron,” and early language models in the 1980s and ’90s were directly inspired by the human brain.

But as processors grew more powerful, and companies started working toward marketable products, computer scientists realized that it was often easier to train language models on enormous amounts of data than to force them into psychologically informed structures. As a result, Dr. Portelance said, “they give us text that’s humanlike, but there’s no connection between us and how they function.”

For scientists interested in understanding how the human mind works, these large models offer limited insight. And because they require tremendous processing power, few researchers can access them. “Only a small number of industry labs with huge resources can afford to train models with billions of parameters on trillions of words,” Dr. Wilcox said.
