Science-advisor

 Article overview



In-Datacenter Performance Analysis of a Tensor Processing Unit
Authors: Norman P. Jouppi; Cliff Young; Nishant Patil; David Patterson; Gaurav Agrawal; Raminder Bajwa; Sarah Bates; Suresh Bhatia; Nan Boden; Al Borchers; Rick Boyle; Pierre-luc Cantin; Clifford Chao; Chris Clark; Jeremy Coriell; Mike Daley; Matt Dau; Jeffrey Dean; Ben Gelb; Tara Vazir Ghaemmaghami; Rajendra Gottipati; William Gulland; Robert Hagmann; C. Richard Ho; Doug Hogberg; John Hu; Robert Hundt; Dan Hurt; Julian Ibarz; Aaron Jaffey; Alek Jaworski; Alexander Kaplan; Harshit Khaitan; Andy Koch; Naveen Kumar; Steve Lacy; James Laudon; James Law; Diemthu Le; Chris Leary; Zhuyuan Liu; Kyle Lucke; Alan Lundin; Gordon MacKean; Adriana Maggiore; Maire Mahony; Kieran Miller; Rahul Nagarajan; Ravi Narayanaswami; Ray Ni; Kathy Nix; Thomas Norrie; Mark Omernick; Narayana Penukonda; Andy Phelps; Jonathan Ross; Matt Ross; Amir Salek; Emad Samadiani; Chris Severn; Gregory Sizikov; Matthew Snelham; Jed Souter; Dan Steinberg; Andy Swing; Mercedes Tan; Gregory Thorson; Bo Tian; Horia Toma; Erick Tuttle; Vijay Vasudevan; Richard Walter; Walter Wang; Eric Wilcox; Doe Hyun Yoon
Date: 16 Apr 2017
Abstract: Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X - 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X - 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
Source: arXiv, 1704.04760
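
The abstract's headline numbers are easy to sanity-check: the 65,536 MACs correspond to a 256x256 array, and the 92 TOPS peak follows once each MAC is counted as two operations (a multiply plus an add) per cycle. Below is a minimal back-of-the-envelope sketch in Python; the 700 MHz clock rate is an assumption taken from the paper body, not from this abstract.

  # Back-of-the-envelope check of the TPU's quoted 92 TOPS peak throughput.
  # Assumption: 700 MHz clock rate (reported in the paper, not in the abstract).

  macs = 256 * 256         # 65,536 8-bit MACs in the matrix multiply unit
  ops_per_mac = 2          # each MAC counts as two ops: one multiply + one add
  clock_hz = 700e6         # assumed TPU clock rate, 700 MHz

  peak_tops = macs * ops_per_mac * clock_hz / 1e12
  print(f"peak throughput = {peak_tops:.2f} TOPS")  # 91.75, quoted as ~92 TOPS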