News

As Per Google: Its AI Supercomputer Is Faster And Greener Than The Nvidia A100 Chip

Written by Senoria Khursheed · 1 min read

On Tuesday, Alphabet Inc’s Google shared new details about the supercomputers it uses to train artificial intelligence models, saying the systems are faster and more power-efficient than comparable systems built around Nvidia Corp’s A100 chip.

Google has long sought to take the lead in this part of the tech sector, and to that end it designed its own custom chip, the Tensor Processing Unit (TPU).


According to Google, the chips are used for more than 90% of the company’s work on artificial intelligence training.

This training involves feeding data through models to make them useful at tasks such as responding to queries with human-like text or generating images.

Google’s TPU is now in its fourth generation. On Tuesday, the company published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer.

It used its own custom-developed optical switches to connect the individual machines.
The large language models that power products such as Google Bard or OpenAI’s ChatGPT have grown exponentially in size and are now far too large to fit on a single chip.
Improving the connections between chips has therefore emerged as a key point of competition among companies that build AI supercomputers.

Instead, these models are split across thousands of chips, which then work together for weeks or more to train the model.

PaLM, Google’s largest publicly disclosed language model to date, was trained by splitting it across two of the 4,000-chip supercomputers over 50 days.

According to Google, its supercomputers make it easy to reconfigure connections between chips on the fly.
Norm Jouppi, a Google Fellow, and David Patterson, a Google Distinguished Engineer, wrote in a blog post about the system that “circuit switching makes it easy to route around failed components”.

They added that “this flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of a machine learning (ML) model”.

While Google is only now releasing details about its supercomputer, the system has been online inside the company since 2020 in a data centre in Mayes County, Oklahoma.

The company says that the startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text.

Moreover, the company says its system is up to 1.7 times faster and 1.9 times more power-efficient than a comparable system based on Nvidia’s A100 chip.

The A100 was on the market at the same time as the fourth-generation TPU. An Nvidia spokesperson declined to comment.

Google says it is not comparing its fourth-generation chip with Nvidia’s current flagship H100, because the H100 came to market later and is built with newer technology; the comparison is limited to the A100.

Google hinted that it is working on a new TPU that would compete with the Nvidia H100. According to Jouppi, “Google has a healthy pipeline of future chips”.

Read more:

Google Workers in London Stage Walkout Over Massive Layoffs

Google Launches ‘Reader Mode’ for People With Dyslexia and ADHD