A petaflop is a measure of computing speed: one quadrillion (10^15) floating point operations per second (FLOPS). Equivalently, one petaflop is one thousand teraflops.
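The unit relationships above can be sketched in a few lines of Python; the conversion factors are standard powers of ten, and the function name is illustrative only.

```python
# FLOPS unit prefixes, as powers of 10 (standard SI definitions).
FLOPS_UNITS = {
    "teraflop": 10**12,   # one trillion FLOPS
    "petaflop": 10**15,   # one quadrillion FLOPS
}

def petaflops_to_teraflops(petaflops: float) -> float:
    """Convert petaflops to teraflops (1 petaflop = 1,000 teraflops)."""
    return petaflops * FLOPS_UNITS["petaflop"] / FLOPS_UNITS["teraflop"]

print(petaflops_to_teraflops(1))  # 1000.0
```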
Reaching petaflop speeds requires a massive number of processors working in parallel on the same problem. Applications might include real-time nuclear magnetic resonance imaging during surgery or astrophysical simulation.
Nvidia and the National Energy Research Scientific Computing Center (NERSC) have flipped the “on” switch for Perlmutter, billed as the world’s fastest supercomputer for AI workloads.
Named for astrophysicist Saul Perlmutter, the new supercomputer boasts 6,144 NVIDIA A100 Tensor Core GPUs and will be tasked with stitching together the largest-ever 3D map of the visible universe, among other projects, VentureBeat reported.
Perlmutter is “the fastest system on the planet” at processing workloads with the 16-bit and 32-bit mixed-precision math used in artificial intelligence (AI) applications, said Nvidia global HPC/AI product marketing lead Dion Harris during a press briefing earlier this week.