NVIDIA’s A100 80GB GPU for AI Applications

Robotic Gizmos

NVIDIA has announced a powerful new GPU for complex, memory-intensive AI applications. The A100 80GB doubles the memory of the original A100 and delivers 2 terabytes per second of memory bandwidth, so it can train larger models at higher speed.
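As a rough, hypothetical illustration (not part of the original post), the C sketch below uses the CUDA runtime to query an installed GPU's memory size and estimate its theoretical peak memory bandwidth from the reported memory clock and bus width. The device index and the deviceQuery-style formula are assumptions; on an A100 80GB the output should land near 80 GB and roughly 2 TB/s.

    // Minimal sketch: query a GPU's memory size and estimate its
    // theoretical peak memory bandwidth from the reported memory clock
    // and bus width. Device index 0 is an assumption.
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            fprintf(stderr, "No CUDA device found\n");
            return 1;
        }

        // Total device memory in GB.
        double mem_gb = prop.totalGlobalMem / 1e9;

        // Theoretical peak bandwidth: memory clock (kHz) * bus width (bits / 8)
        // * 2 (double data rate), converted to GB/s.
        double bw_gbs = 2.0 * prop.memoryClockRate * 1e3
                        * (prop.memoryBusWidth / 8.0) / 1e9;

        printf("Device: %s\n", prop.name);
        printf("Memory: %.1f GB\n", mem_gb);
        printf("Theoretical peak bandwidth: %.0f GB/s\n", bw_gbs);
        return 0;
    }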

The A100 80GB can be partitioned into up to seven GPU instances, each with 10GB of memory, using the Multi-Instance GPU (MIG) feature. Its third-generation Tensor Cores provide up to 20x the AI throughput of the previous generation, and NVLink and NVSwitch deliver twice the GPU-to-GPU bandwidth of the prior generation. A sketch of checking MIG mode follows below.
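The seven-way split has to be enabled per GPU before instances are created. As a minimal sketch (an assumption on my part, not something shown in the post), the following C program checks the MIG mode of GPU 0 through the NVML library:

    // Minimal sketch: report whether MIG mode is enabled on GPU 0 via NVML.
    // Instance creation itself is typically done with nvidia-smi afterwards.
    #include <nvml.h>
    #include <stdio.h>

    int main(void) {
        if (nvmlInit_v2() != NVML_SUCCESS) {
            fprintf(stderr, "Failed to initialize NVML\n");
            return 1;
        }

        nvmlDevice_t device;
        unsigned int current = 0, pending = 0;
        if (nvmlDeviceGetHandleByIndex_v2(0, &device) == NVML_SUCCESS &&
            nvmlDeviceGetMigMode(device, &current, &pending) == NVML_SUCCESS) {
            printf("MIG mode: current=%s pending=%s\n",
                   current == NVML_DEVICE_MIG_ENABLE ? "enabled" : "disabled",
                   pending == NVML_DEVICE_MIG_ENABLE ? "enabled" : "disabled");
        } else {
            printf("MIG mode query not supported on this device\n");
        }

        nvmlShutdown();
        return 0;
    }

Building requires linking against the NVML library (for example, -lnvidia-ml on Linux).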
