
Kathleen Martin

Guest
Scientific supercomputing is not immune to the wave of machine learning that's swept the tech world. Those using supercomputers to uncover the structure of the universe, discover new molecules, and predict the global climate are increasingly using neural networks to do so. And as is long-standing tradition in the field of high-performance computing, it's all going to be measured down to the last floating-point operation.
Twice a year, Top500.org publishes a ranking of raw computing power using a value called Rmax, derived from benchmark software called Linpack. By that measure, it's been a bit of a dull year. The rankings of the top nine systems are unchanged from June, with Japan's Supercomputer Fugaku on top at 442,010 trillion floating-point operations per second. That leaves the Fujitsu-built system still short of the long-sought goal of exascale computing: one million trillion (10^18) 64-bit floating-point operations per second, or 1 exaflops.
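To put that Rmax figure in perspective, here is a quick back-of-the-envelope conversion in Python, using only the numbers quoted above:

# Convert Fugaku's reported Rmax (442,010 trillion FLOP/s) into exaflops.
# 1 trillion FLOP/s = 1e12 FLOP/s; 1 exaflop/s = 1e18 FLOP/s.
rmax_trillion_flops = 442_010
rmax_flops = rmax_trillion_flops * 1e12
rmax_exaflops = rmax_flops / 1e18
print(f"Fugaku Rmax is about {rmax_exaflops:.3f} exaflops")  # roughly 0.442 exaflops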
But by another measure, one more relevant to AI, Fugaku and its competitor, the Summit supercomputer at Oak Ridge National Laboratory, have already passed the exascale mark. That benchmark, called HPL-AI, measures a system's performance using the lower-precision numbers (16 bits or fewer) common in neural-network computing. By that yardstick, Fugaku hits 2 exaflops (no change from June 2021) and Summit reaches 1.4 exaflops (a 23 percent increase).
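For readers wondering how a lower-precision benchmark can still report a meaningful result, the general idea behind HPL-AI is to do the heavy arithmetic in reduced precision and then recover full 64-bit accuracy with iterative refinement. The sketch below only illustrates that idea in NumPy, using float32 as a stand-in for the 16-bit formats real systems run on; it is not the benchmark code itself.

import numpy as np

def mixed_precision_solve(A, b, iters=5):
    # Factor/solve in low precision (float32 here; real HPL-AI runs lean on
    # 16-bit tensor hardware), then refine the answer back toward float64 accuracy.
    A_lo = A.astype(np.float32)
    x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                      # residual computed in full precision
        dx = np.linalg.solve(A_lo, r.astype(np.float32)).astype(np.float64)
        x += dx                            # correction step of iterative refinement
    return x

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # residual near float64 roundoff

The point of the approach is that the expensive factorization happens in cheap, fast precision, while a handful of inexpensive refinement steps restore 64-bit accuracy, which is why low-precision hardware can be credited with "exascale" numbers.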
By one benchmark, related to AI, Japan's Fugaku and the U.S.'s Summit supercomputers are already doing exascale computing.
But HPL-AI isn't really how AI is done in supercomputers today. Enter MLCommons, the industry organization that's been setting realistic tests for AI systems of all sizes. It released results from version 1.0 of its high-performance computing benchmarks, called MLPerf HPC, this week.
Continue reading: https://spectrum.ieee.org/ai-supercomputer
 
