Designed to blow the competition out of the water in Artificial Intelligence (AI) and Machine Learning, Nvidia's A100 GPU in its 40GB form factor for the DGX Station was already a beast, providing the computational resources that large-scale organizations like GSK need for their work.
It seems that this just isn't enough for Nvidia, as the company has unveiled a bigger version of the same GPU with a staggering 80GB of memory. The DGX Station A100, based on four A100 (80GB) GPUs, delivers up to 2.5 petaFLOPS of AI compute that can be used for training, inference, and data analytics.
According to Nvidia, the new DGX Station A100 is a system that is, "Effortlessly providing multiple, simultaneous users with a centralized AI resource, DGX Station A100 is the workgroup appliance for the age of AI. It's capable of running training, inference, and analytics workloads in parallel, and with MIG, it can provide up to 28 separate GPU devices to individual users and jobs so that activity is contained and doesn't impact performance across the system."
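The 28-device figure follows from Multi-Instance GPU (MIG), which lets each A100 be partitioned into as many as seven isolated GPU instances. A quick sketch of the arithmetic behind Nvidia's claims (the variable names are illustrative, not part of any API):

```python
# Arithmetic behind the DGX Station A100 claims (illustrative sketch).
NUM_GPUS = 4               # four A100 80GB GPUs in the DGX Station A100
MIG_INSTANCES_PER_GPU = 7  # each A100 supports up to seven MIG instances

# Up to 28 separate GPU devices for individual users and jobs.
total_mig_devices = NUM_GPUS * MIG_INSTANCES_PER_GPU
print(total_mig_devices)        # 28

# Aggregate GPU memory across the four 80GB cards.
total_gpu_memory_gb = NUM_GPUS * 80
print(total_gpu_memory_gb)      # 320
```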
The system comes with a single AMD EPYC 7742 CPU (64 cores, 2.25 GHz base, 3.4 GHz max boost) along with 512GB of DDR4 memory, a 1.92 TB NVMe drive for the OS, and a 7.68 TB U.2 NVMe drive for internal storage.
Image Source: Nvidia