You are probably aware that Google is deeply interested in machine learning: the company has now developed hardware for its open-source machine learning framework, TensorFlow. So it is perhaps not astonishing to hear about Google's custom chip for TensorFlow machine learning.
Google's Custom Chip for TensorFlow Machine Learning, Built for Its Own Needs
Development work around artificial intelligence is advancing rapidly, and it has now passed one of the toughest challenges in the field: competing with the human mind. In October 2015, the AI AlphaGo was able to win a match of Go (an ancient Chinese board game) against a professional human player. Machine learning is also used inside data centers, for tasks ranging from workload management to DDoS mitigation. In recent news, Google's CEO revealed some details about the hardware used at Mountain View to run these AI workloads. According to a statement from the well-known face of the company, behind AlphaGo there is a chip developed fully in house by Google, one that is not comparable, on the performance side, with any CPU or GPU on the market today. The ratio of performance to watts consumed strongly favors the TPU (Tensor Processing Unit), a custom chip for AI that takes its name from the machine learning framework TensorFlow (which underlies popular services such as Gmail, Photos and voice recognition). TensorFlow is open source and available on GitHub.
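To give a concrete sense of the kind of workload TensorFlow describes, and that a TPU is built to accelerate, here is a minimal sketch of a single dense neural-network layer. It is written in plain NumPy rather than TensorFlow itself, purely as an illustration; the function name and the tiny example values are our own, not from Google:

```python
import numpy as np

def dense_layer(x, weights, bias):
    """One fully connected layer: a matrix multiply plus a bias,
    followed by a ReLU activation. Large matrix multiplies like
    this dominate neural-network inference, which is the workload
    a TPU targets."""
    return np.maximum(x @ weights + bias, 0.0)

# Tiny illustrative example: a batch of 2 inputs with 3 features,
# mapped to 4 output units.
x = np.array([[1.0, 2.0, 3.0],
              [0.5, -1.0, 2.0]])
w = np.full((3, 4), 0.1)   # all weights 0.1, just for the demo
b = np.zeros(4)
print(dense_layer(x, w, b))
```

Running the real thing through TensorFlow would express the same computation as a graph of tensor operations, which Google can then schedule onto CPUs, GPUs, or TPUs.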
What Is Google's Custom Chip for TensorFlow Machine Learning Intended to Perform?
Google's hardware engineers added more information in a blog post about the ASIC (Application-Specific Integrated Circuit, a chip designed to perform one specific task). They claimed to have been running these TPUs inside their data centers for more than a year, and found them to deliver an order of magnitude better performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law).
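The "about seven years" figure is back-of-envelope Moore's Law arithmetic: if performance roughly doubles every couple of years (the doubling period is our assumption here), then a 10x gain corresponds to a bit more than three doublings. A quick sketch of the calculation:

```python
import math

speedup = 10.0           # "an order of magnitude" better perf per watt
doubling_period = 2.0    # assumed years per Moore's Law doubling

# How many doublings does a 10x gain represent, and how long
# would the industry normally take to deliver them?
doublings = math.log2(speedup)        # ~3.32 doublings
years = doublings * doubling_period   # ~6.6 years

print(f"{doublings:.2f} doublings ~ {years:.1f} years")
```

With these assumptions the answer lands near 6.6 years, consistent with the "about seven years, three generations" framing in the post.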
Google's official goal is to lead the industry in machine learning and to make that innovation available to its customers. Building TPUs into its infrastructure stack allows the company to offer software like TensorFlow and Cloud Machine Learning on a pay-as-you-go model.