Google Unveils Details Regarding One Of Its AI Supercomputers While Raving About Its Speed And Efficiency

Google has recently published more details regarding its AI supercomputers.

The tech giant raved about how the system is faster and more efficient than comparable systems from its arch-rival Nvidia. The news comes as the tech industry races to build and deploy the latest machine learning models.

As far as the AI market is concerned, Nvidia sits firmly at the top and has dominated for a while now when it comes to training AI models and deploying them for use. Remember, its chips account for more than 90% of that market.

However, let's not forget that Google has been designing and deploying its own AI chips, dubbed Tensor Processing Units (TPUs), since 2016.

Truly, Google is a market leader and a pioneer in the AI world, and its workforce has produced some of the field's most important advancements over the past decade. At the same time, some feel the company has fallen behind when it comes to making those inventions available for commercial use.

Similarly, we've seen how, internally, the firm is racing to ship products and prove it has no intention of losing its lead in the industry, as its "code red" alert revealed.

In case you did not know, many of today's AI products and models, including Google's Bard, run on A100 chips provided by Nvidia. The same goes for OpenAI's popular AI tool ChatGPT.

Such inventions really do need a lot of computers, and even more chips, working together as a single system to train these models, with runs that last for weeks or even months.
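To give a rough idea of what "many chips working together as one" looks like in practice, here is a minimal, hedged sketch in JAX of generic data-parallel training, where the same training step runs on every available accelerator and gradients are averaged across them. This is purely illustrative; it is not Google's TPU v4 training code, and the tiny linear model and numbers are made up for the example.

```python
import functools
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model; real workloads use large neural networks.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across all chips so every replica stays in sync.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

n = jax.local_device_count()
# Replicate the parameters onto every device and shard the batch across them.
params = {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}
params = jax.device_put_replicated(params, jax.local_devices())
x = jnp.ones((n, 32, 8))   # one shard of 32 examples per device
y = jnp.ones((n, 32, 1))
params = train_step(params, x, y)
```

At the scale Google describes, thousands of such chips have to be wired together and kept in sync for weeks, which is where the custom hardware discussed below comes in.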

Recently, Google unveiled some interesting findings, including how its system strings together more than 4,000 TPUs, combined with customized parts, to run and train AI models. The system has been up and running for the past three years, it added.

Similarly, the system was used to train the company's PaLM model, which competes with GPT technology, over a period of around 50 days. Moreover, the company's researchers published figures showing its speed and efficiency advantages over comparable Nvidia systems.

From performance to availability, Google says its supercomputers are in a league of their own, and it wouldn't be wrong to call them workhorses for training huge language models. That said, Google did not compare its system with Nvidia's latest invention, the AI chip called the H100.

That product is very new and has been built using the most advanced technology out there today.

These systems don't come cheap, and with so many leading market players venturing into this field, it's quite evident that Google needs to be at the top of its game. Judging by these results, it's doing just that.

