Forget the CPU, GPU, and FPGA: Google says its Tensor Processing Unit, or TPU, advances machine-learning capability by roughly three chip generations.
TPUs have been a closely guarded secret at Google, but CEO Sundar Pichai said the chips powered the AlphaGo system that beat Lee Sedol, the world champion of the famously complex board game Go. Pichai didn't go into detail about the Tensor Processing Unit, but the company disclosed a little more in a blog post published the same day as his revelation.
“We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future,” the blog said.
“TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly.”
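"Reduced computational precision" generally means representing numbers with fewer bits. Google has not published the TPU's actual arithmetic, but the idea can be sketched with a simple quantization routine that maps 32-bit floating-point weights onto 8-bit integers; narrower values need simpler multiplier circuits, which is why fewer transistors per operation follow. (All names and the scaling scheme here are illustrative, not Google's.)

```python
# Illustrative sketch of reduced-precision arithmetic (not Google's actual
# scheme): squeeze float weights into signed 8-bit integers plus one scale.

def quantize(values, num_bits=8):
    """Map floats to signed num_bits integers; return the ints and the scale."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = (max(abs(v) for v in values) / qmax) or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(q_values, scale):
    """Recover approximate floats from the quantized integers."""
    return [q * scale for q in q_values]

weights = [0.91, -0.43, 0.07, -1.20]
q, scale = quantize(weights)
approx = dequantize(q, scale)
print(q)       # small integers in [-127, 127]
print(approx)  # values close to the original floats
```

The int8 copy is a much cheaper approximation of the float weights; the worst-case error per value is half the scale step, which neural networks tolerate well.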
SGI had a commercial product called a Tensor Processing Unit in its workstations in the early 2000s that appears to have been a Digital Signal Processor, or DSP. A DSP is a dedicated chip that does a repetitive, simple task extremely quickly and efficiently.
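The "repetitive, simple task" at the heart of most DSP workloads is the multiply-accumulate loop, as in a finite impulse response (FIR) filter. A minimal sketch (the function and signal are invented for illustration; real DSPs run this in dedicated hardware pipelines):

```python
# Toy FIR filter: the multiply-accumulate (MAC) loop is the repetitive
# operation a DSP is built to execute quickly and efficiently.

def fir_filter(signal, taps):
    """Slide the filter taps over the signal, accumulating products."""
    out = []
    for i in range(len(signal) - len(taps) + 1):
        acc = 0.0
        for j, tap in enumerate(taps):     # the hot loop a DSP accelerates
            acc += tap * signal[i + j]     # one multiply-accumulate step
        out.append(acc)
    return out

# A 4-tap moving average smooths a step signal into a gradual ramp.
smoothed = fir_filter([0, 0, 0, 8, 8, 8, 8], [0.25, 0.25, 0.25, 0.25])
print(smoothed)  # ramps from near 0 up to 8 as the window crosses the step
```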
As to Google's claim that the TPU's performance is akin to accelerating Moore's Law by seven years, the comparison is plausible: the TPU is an application-specific integrated circuit, or ASIC, built for one job, and a general-purpose CPU without dedicated circuits would consume far more power performing that same job.