
AI doomers are a ‘cult’ — here’s the real threat, says Marc Andreessen

Andreessen Horowitz partner Marc Andreessen

Justin Sullivan | Getty Images

Venture capitalist Marc Andreessen is known for saying that “software is eating the world.” When it comes to artificial intelligence, he claims people should stop worrying and build, build, build.

On Tuesday, Andreessen published a nearly 7,000-word missive laying out his views on AI, the risks it poses and the regulation he believes it requires. In trying to counteract all the recent talk of “AI doomerism,” he presents what could be seen as an overly idealistic perspective on its implications.

‘Doesn’t want to kill you’

Andreessen starts off with an accurate take on AI, or machine learning, calling it “the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.”

AI isn’t sentient, he says, despite the fact that its ability to mimic human language can understandably fool some into believing otherwise. It’s trained on human language and finds high-level patterns in that data. 

“AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he wrote. “And AI is a machine – is not going to come alive any more than your toaster will.”

Andreessen writes that there’s a “wall of fear-mongering and doomerism” in the AI world right now. Without naming names, he’s likely referring to claims from high-profile tech leaders that the technology poses an existential threat to humanity. Last week, Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis and others signed a statement from the Center for AI Safety about “the risk of extinction from AI.”

Tech CEOs are motivated to promote such doomsday views because they “stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition,” Andreessen wrote.  

Many AI researchers and ethicists have also criticized the doomsday narrative. One argument is that too much focus on AI’s growing power and its future threats distracts from real-life harms that some algorithms cause to marginalized communities right now, rather than in an unspecified future.

But that’s where most of the similarities between Andreessen and the researchers end. People in roles like AI safety expert, AI ethicist and AI risk researcher “are paid to be doomers, and their statements should be processed appropriately,” Andreessen wrote. In actuality, many leaders in the AI research, ethics, and trust and safety communities have voiced clear opposition to the doomer narrative and instead focus on mitigating the technology’s documented present-day risks.

Instead of acknowledging any documented real-life risks of AI – its biases can infect facial recognition systems, bail decisions, criminal justice proceedings, mortgage approval algorithms and more – Andreessen claims AI could be “a way to make everything we care about better.” 

He argues that AI has huge potential for productivity, scientific breakthroughs, creative arts and reducing wartime death rates.

“Anything that people do with their natural intelligence today can be done much better with AI,” he wrote. “And we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.” 

From doomerism to idealism

Though AI has made significant strides in many areas, such as vaccine development and chatbot services, the technology’s documented harms have led many experts to conclude that, for certain applications, it should never be used.

Andreessen describes these fears as irrational “moral panic.” He also promotes reverting to the tech industry’s “move fast and break things” approach of yesteryear, writing that both big AI companies and startups “should be allowed to build AI as fast and aggressively as they can” and that the tech “will accelerate very quickly from here – if we let it.” 

Andreessen, who gained prominence in the 1990s for developing the first popular internet browser, started his venture firm with Ben Horowitz in 2009. Two years later, he wrote an oft-cited blog post titled “Why software is eating the world,” which said that health care and education were due for “fundamental software-based transformation” just as so many industries before them.

Eating the world is exactly what many people fear when it comes to AI. Beyond just trying to tamp down those concerns, Andreessen says there’s work to be done. He encourages the controversial use of AI itself to protect people against AI bias and harms.

“Governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities,” he said.  

In Andreessen’s own idealist future, “every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” He expresses similar visions for AI’s role as a partner and collaborator for every person, scientist, teacher, CEO, government leader and even military commander. 

Is China the real threat?

Near the end of his post, Andreessen points out what he calls “the actual risk of not pursuing AI with maximum force and speed.”

That risk, he says, is China, which is developing AI quickly and with highly concerning authoritarian applications. According to years of documented cases, the Chinese government leans on surveillance AI, such as using facial recognition and phone GPS data to track and identify protesters.

To head off the spread of China’s AI influence, Andreessen writes, “We should drive AI into our economy and society as fast and hard as we possibly can.”

He then offers a plan for aggressive AI development by big tech companies and startups alike, one that draws on the “full power of our private sector, our scientific establishment, and our governments.”

Andreessen writes with a level of certainty about where the world is headed, but he’s not always great at predicting what’s coming.

His firm launched a $2.2 billion crypto fund in mid-2021, shortly before the industry began to crater. And one of its big bets during the pandemic was on social audio startup Clubhouse, which soared to a $4 billion valuation while people were stuck at home looking for alternative forms of entertainment. In April, Clubhouse said it’s laying off half its staff in order to “reset” the company.

Throughout Andreessen’s essay, he calls out the ulterior motives that others have when it comes to publicly expressing their views on AI. But he has his own. He wants to make money on the AI revolution, and is investing in startups with that goal in mind.

“I do not believe they are reckless or villains,” he concluded in his post. “They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.”
