NVIDIA - Tesla P100

16 GB High-Performance Computing Accelerator Card

Model: Tesla P100

Product Info
Artificial intelligence for self-driving cars. Predicting our climate's future. A new drug to treat cancer. Even in its early stages, deep learning is having a tremendous impact and is sweeping across every industry. Some of the world's most important challenges need to be solved today, but they require tremendous amounts of computing to become reality. Today's large-scale data center relies on many interconnected commodity compute nodes, which limits the performance needed to drive these important workloads. Now, more than ever, the data center must prepare for the high-performance computing and hyperscale workloads being thrust upon it.
The NVIDIA® Tesla® P100 is purpose-built as the most advanced data center accelerator ever. It taps into an innovative new GPU architecture to deliver the world's fastest compute node, with higher performance than hundreds of slower commodity compute nodes. Lightning-fast nodes powered by the Tesla P100 accelerate time-to-solution for the world's most demanding HPC and deep learning challenges with near-limitless compute needs.
  • 5.3 TeraFLOPS double-precision performance with NVIDIA GPU Boost™
  • 10.6 TeraFLOPS single-precision performance with NVIDIA GPU Boost
  • 21.2 TeraFLOPS half-precision performance with NVIDIA GPU Boost
  • 160 GB/s bidirectional interconnect bandwidth with NVIDIA NVLink
  • 720 GB/s memory bandwidth with CoWoS HBM2 Stacked Memory
  • 16 GB of CoWoS HBM2 Stacked Memory
  • Enhanced Programmability with Page Migration Engine and Unified Memory (see the sketch after this list)
  • ECC protection for increased reliability
  • Server-optimized for best throughput in the data center
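For readers weighing the card for HPC or deep learning work, here is a minimal CUDA sketch of the Unified Memory programming model the feature list refers to, in which the Page Migration Engine moves memory pages between host and device on demand. The kernel, array names, and sizes below are illustrative assumptions, not NVIDIA sample code.

// Minimal Unified Memory sketch for a Pascal-class GPU such as the Tesla P100.
// Illustrative only: the kernel, names, and sizes are assumptions, not vendor code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;          // 1M floats (~4 MB), well within 16 GB of HBM2
    float *data = nullptr;

    // Single allocation visible to both CPU and GPU; no explicit cudaMemcpy needed.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // pages first touched on the host

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // pages migrate to the GPU
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // pages migrate back on host read
    cudaFree(data);
    return 0;
}

Built with nvcc for the Pascal architecture used by the P100 (for example, nvcc -arch=sm_60), the same pointer is valid on both the host and the device, which is the convenience the "Enhanced Programmability" bullet above is describing.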
 