NVIDIA A100
The NVIDIA A100 Tensor Core GPU (Ampere architecture) is the flagship accelerator for AI, HPC, and data analytics, delivering up to a 20x performance gain over its predecessor.
The A100 is the engine of the NVIDIA data center platform, delivering unprecedented acceleration for AI, HPC, and data analytics workloads. Built on the Ampere architecture, the A100 features third-generation Tensor Cores and up to 80GB of HBM2e memory, achieving over 2 TB/s of memory bandwidth for massive datasets. Its Multi-Instance GPU (MIG) technology allows a single A100 to be partitioned into up to seven fully isolated GPU instances, optimizing resource utilization for workloads of any size. The architecture provides up to 312 TFLOPS of deep learning performance (TF32) and up to 20x higher throughput than the prior Volta generation.
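The MIG partitioning described above is driven through `nvidia-smi`. The sketch below shows the typical workflow; the profile ID `19` for the smallest 1g slice is an assumption that can vary by driver version and GPU memory configuration, so confirm it against the profile listing on your own system:

```shell
# Enable MIG mode on GPU 0 (requires admin privileges; a GPU reset may be needed)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this A100 supports
sudo nvidia-smi mig -lgip

# Create seven of the smallest GPU instances (profile 19 assumed here --
# check the -lgip output above); -C also creates the default compute
# instance inside each GPU instance
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify the seven isolated instances
sudo nvidia-smi mig -lgi
```

Each instance then appears to CUDA applications as a separate device addressed by its MIG UUID (for example via `CUDA_VISIBLE_DEVICES`), so workloads running on different instances get isolated memory, cache, and compute resources.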