NVIDIA - NVIDIA Tesla P40

Accelerator for Inference Throughput Server

Model: NVIDIA Tesla P40

  • World’s fastest processor for inference workloads
  • 47 TOPS of INT8 for maximum inference throughput and responsiveness
  • Hardware-decode engine capable of transcoding and inferencing 35 HD video streams in real time

The Tesla P40 combines high inference performance, INT8 precision, and 24 GB of onboard memory for a responsive user experience. It is purpose-built to deliver maximum throughput for deep learning deployment: with 47 TOPS (tera-operations per second) of INT8 inference performance per GPU, a single server with eight Tesla P40s delivers the throughput of over 140 CPU servers.

 

 
