
Google reveals its newest A.I. supercomputer, says it beats Nvidia

Google headquarters is seen in Mountain View, California, United States on September 26, 2022.

Tayfun Coskun | Anadolu Agency | Getty Images

Google published details about one of its artificial intelligence supercomputers on Wednesday, saying it is faster and more efficient than competing Nvidia systems, as power-hungry machine learning models continue to be the hottest part of the tech industry.

While Nvidia dominates the market for AI model training and deployment with over 90% share, Google has been designing and deploying its own AI chips, called Tensor Processing Units, or TPUs, since 2016.

Google is a major AI pioneer, and its employees have developed some of the most important advancements in the field over the last decade. But some believe it has fallen behind in commercializing its inventions, and internally the company has been racing to release products and prove it hasn’t squandered its lead, a “code red” situation, CNBC previously reported.

AI models and products such as Google’s Bard or OpenAI’s ChatGPT, which are powered by Nvidia’s A100 chips, require many computers and hundreds or thousands of chips working together to train the models, with the machines running around the clock for weeks or months.

On Tuesday, Google said that it had built a system with over 4,000 TPUs joined with custom components designed to run and train AI models. It’s been running since 2020, and was used to train Google’s PaLM model, which competes with OpenAI’s GPT model, over 50 days.

Google’s TPU-based supercomputer, called TPU v4, is “1.2x–1.7x faster and uses 1.3x–1.9x less power than the Nvidia A100,” the Google researchers wrote.

“The performance, scalability, and availability make TPU v4 supercomputers the workhorses of large language models,” the researchers continued.

However, Google’s TPU results were not compared with the latest Nvidia AI chip, the H100, because the H100 is more recent and was made with more advanced manufacturing technology, the Google researchers said.

Results and rankings from an industrywide AI chip test called MLPerf were released Wednesday, and Nvidia CEO Jensen Huang said they showed the most recent Nvidia chip, the H100, was significantly faster than the previous generation.

“Today’s MLPerf 3.0 highlights Hopper delivering 4x more performance than A100,” Huang wrote in a blog post. “The next level of Generative AI requires new AI infrastructure to train Large Language Models with great energy-efficiency.”

The substantial amount of computing power needed for AI is expensive, and many in the industry are focused on developing new chips, components such as optical connections, or software techniques that reduce the amount of computing power required.

The power requirements of AI are also a boon to cloud providers such as Google, Microsoft and Amazon, which can rent out computer processing by the hour and provide credits or computing time to startups to build relationships. (Google’s cloud also sells time on Nvidia chips.) For example, Google said that Midjourney, an AI image generator, was trained on its TPU chips.
