Every six months or so, a tempest brews around the details of the newest smartphone. These devices, small, elegant, powerful, and easy to use, are a marvel of modern technology. Yet a less visible but more profound innovation is under way at the other end of the spectrum: “Big Iron” and high performance computing (HPC). “Big Iron” refers to large, parallel computers with thousands or millions of processor cores, gigabytes of memory, and dazzling speeds. HPC is at the heart of a new “computational revolution,” one that is fundamentally changing industry, business, and science.
The performance of HPC systems is impressive in terms of the massive amounts of data they can use, the number of equations they can solve, and their sheer speed. Typically, HPC is thought of as calculating at teraFLOPS speeds or faster. A FLOP is a FLoating-point OPeration, and the metric “FLOPS” counts the number of FLOPs in one second. One teraFLOPS is one trillion (10^12) calculations per second. That’s fast.
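To make those rates concrete, here is a small back-of-the-envelope sketch (the workload size is a made-up illustration, not a figure from the article) showing how long a fixed number of floating-point operations takes at different FLOPS rates:

```python
# Back-of-the-envelope timing at different FLOPS rates.
# 1 teraFLOPS = 10**12 floating-point operations per second.

def seconds_to_finish(total_flops: float, flops_per_second: float) -> float:
    """Wall-clock seconds to execute `total_flops` operations at a given rate."""
    return total_flops / flops_per_second

TERA = 10**12

# A hypothetical workload of 10**15 operations (one quadrillion FLOPs):
workload = 10**15

# At 1 teraFLOPS, the job takes 1,000 seconds (about 17 minutes)...
print(seconds_to_finish(workload, 1 * TERA))        # 1000.0
# ...while a 20 petaFLOPS system (20,000 teraFLOPS) finishes in 0.05 seconds.
print(seconds_to_finish(workload, 20_000 * TERA))   # 0.05
```

The same arithmetic underlies the "weeks to seconds" speedups described below: multiplying the rate by a factor of a million divides the run-time by the same factor.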
Current HPC facilities operate at hundreds to thousands of teraFLOPS, giving them 100 to 1,000,000 times the processing power of a standard laptop computer. Right now, Lawrence Livermore National Laboratory is installing the IBM system “Sequoia,” a 20 petaFLOPS machine (1 petaFLOPS = 1,000 teraFLOPS), but they are not alone; just check out the Top500, a ranking of the fastest, most powerful computers on the planet.
But what does this massive computing power mean for business and industry? Virtually every major industry, from medicine to manufacturing, communications to design, will benefit from HPC. HPC enables simulation and modeling of systems in ways never thought possible. Some businesses will “just” use HPC to perform calculations faster than before, reducing calculation and simulation run-times from weeks to mere seconds. Other companies will leverage HPC as a way to solve larger problems.
“Larger problems” can be thought of in several ways. Companies can model a problem at finer resolution (e.g., from meter- to centimeter-scale grids, or from daily to second-long time steps), or they can attempt to capture the variability in the systems they are modeling. More significant, though, is the power of HPC to solve many equations simultaneously across parallel processors. In HPC, every processor works on a part of a complex problem, arriving at a solution by working in concert with every other processor needed for the undertaking.
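A minimal sketch of that divide-and-conquer idea, using Python's standard multiprocessing module on a single machine (real HPC codes coordinate thousands of nodes, typically with MPI; the function and chunking scheme here are illustrative, not from the article):

```python
# Split one big computation into chunks, let each worker process solve
# its piece, then combine the partial results into the final answer.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles only its own slice of the problem.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(values, workers=4):
    # Divide the data into roughly equal chunks, one per worker...
    size = max(1, len(values) // workers)
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    # ...solve the pieces in parallel, then combine them.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    # Same answer as the serial sum(x * x for x in data), computed in pieces.
    print(parallel_sum_of_squares(data))
```

The pattern scales because the chunks are independent: adding processors shrinks each slice, which is exactly how HPC systems turn thousands of cores into one large solver.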