
Google introduces separate training and inference TPUs in its latest shot at Nvidia

After years of making chips that can both train artificial intelligence models and handle inference, Google is splitting those tasks into separate processors, its latest effort to take on Nvidia in AI hardware.

Google said Wednesday it is splitting the eighth generation of its tensor processing unit, or TPU, into two chips. Both will be available later this year.

“With the growth of AI agents, we decided that customers would benefit from specialized chips for training and inference needs,” said Amin Vahdat, Google’s senior vice president for AI and infrastructure, in a blog post.

In March, Nvidia discussed upcoming silicon designed to make models respond quickly to user queries, drawing on technology it obtained through its $20 billion deal with chip startup Groq. Google is one of Nvidia’s biggest customers, but it offers TPUs as an alternative for companies using its cloud services.

Many of the world’s leading technology companies are pursuing custom semiconductor development for artificial intelligence, both to increase efficiency and to support specialized use cases. Apple has incorporated a neural engine into the chips inside its iPhones for years. Microsoft announced a second-generation AI chip in January. Last week, Meta said it is also working with Broadcom to develop multiple versions of AI processors.

Google was early on this trend. In 2015, the company began using processors it designed to run AI models, and began leasing them to cloud customers in 2018. Amazon Web Services announced the Inferentia chip for handling AI applications in 2018, and unveiled the Trainium processor for training AI models in 2020.

DA Davidson analysts estimated in September that the TPU business, combined with the Google DeepMind AI group, could be worth about $900 billion.

None of the tech giants are displacing Nvidia, and Google is not comparing the performance of its new chips with those from the AI chip leader. Google said the training chip delivers 2.8 times the performance of the seventh-generation Ironwood TPU, which was announced in November, at the same price, while the inference chip performs 80% better.

Nvidia said its upcoming Groq 3 LPU hardware will draw on large amounts of static random-access memory, or SRAM, an approach also used by Cerebras, the AI chipmaker that filed to go public earlier this month. Google’s new inference chip, called TPU 8i, also relies on SRAM. Each chip contains 384 megabytes of SRAM, three times the amount in Ironwood.

The architecture is designed to “deliver the high capacity and low latency needed to simultaneously operate millions of agents efficiently,” Sundar Pichai, CEO of Google parent Alphabet, wrote in a blog post.

Adoption of Google’s AI chips is growing. Citadel Securities is building quantitative research software that draws on Google’s TPUs, and all 17 of the U.S. Department of Energy’s national laboratories use AI software for scientists that runs on the chips, Google said. Anthropic has committed to using gigawatts’ worth of Google TPUs.

WATCH: Broadcom agrees to extended chip deal with Google, Anthropic

