A year ago, we covered Google's first chip for machine learning. Since then, Google has moved forward: TPU 2.0 is newly announced hardware built for Google Cloud, delivering 180 teraflops of AI acceleration, which approaches the capability of a supercomputer. Google relies increasingly on machine learning to power everything from voice recognition, text translation, and search rankings to data-center management. Running these workloads on conventional hardware was likely very expensive, and the latest chip has been designed to address that cost.
Google TPU 2.0: A Single Cloud TPU Device Is 12,000 Times Faster Than IBM's Deep Blue Supercomputer
As several participants highlighted on day one of Google I/O (17-19 May 2017), Google is now focused more on hardware, AI, and machine learning than on mobile software. The arrival of the second generation of custom chips designed for AI and machine learning in the cloud, the TPU (Tensor Processing Unit), marks a turning point in the strategy to transform Google into an "AI-first company."
Moreover, machine learning and AI can help the company close the gap with AWS, kicking off a new season in which services and technologies related to artificial intelligence take center stage and, Google hopes, drive more profitable business.
The results obtained from the first version of the TPU were significant, and the second version considerably increases the computing power available to users, reaching 180 teraflops. According to Google, the chips' origins date back to 2011, the year of the Android boom and the rise of voice search; the TPU was born to handle that extra workload.
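A quick back-of-the-envelope check shows where the "12,000 times faster than Deep Blue" headline comes from. The sketch below assumes Deep Blue's peak at roughly 15 gigaflops; published estimates of Deep Blue's performance vary, and this particular figure is the one consistent with the headline rather than a number stated in the article.

```python
# Back-of-the-envelope check of the "12,000x Deep Blue" comparison.
tpu_v2_flops = 180e12    # 180 teraflops, Google's stated figure for one Cloud TPU device
deep_blue_flops = 15e9   # ASSUMPTION: ~15 gigaflops for Deep Blue; estimates vary

speedup = tpu_v2_flops / deep_blue_flops
print(f"A Cloud TPU v2 device is roughly {speedup:,.0f}x Deep Blue's throughput")
```

Note that this compares raw peak floating-point throughput only; the two machines were built for entirely different workloads.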
The second-generation chip will significantly improve image analysis and search as well as translation, and unlike the first generation it also accelerates the training phase, which was previously left out, rather than inference alone. For now, Google intends to offer the chips exclusively through its cloud.