Using HPC, manufacturing businesses will be able to develop products and test them in a computer lab, saving millions in prototyping and R&D. Energy utilities will use HPC to explore how to operate a low-carbon grid, integrating thousands of rooftop solar panels and electric cars into their 5-minute forecasts. Medical companies will be able to use vast amounts of genomic data, along with the 3D characteristics of compounds and proteins, to develop new drugs targeted at specific genetic markers.
Many companies have shown interest in using HPC to improve their businesses. Last fall, Lawrence Livermore National Laboratory (LLNL) initiated an open competition for private industry to incorporate HPC modeling and simulation into their development of energy technology. The six winners will apply their computing time allotments in a myriad of ways, from GE Global Research's modeling of spray in combustion engines to improve efficiency, to Potter Drilling's simulation of rock properties to improve drilling methods for geothermal power, to United Technologies Research Center's (UTRC) development of techniques to simulate deep energy retrofits in existing buildings. While most of the six winners have specific problems and computational paths that are readily adapted to a parallel computing environment, all will benefit from the reduced computation (or simulation) time and the increased spatial and/or temporal resolution of the simulations they aim to run.
Many have asked about the difference between HPC and "the cloud." The easy answer is that both HPC and cloud-type environments (and there are many different implementations of the "cloud") are very good at "embarrassingly parallel" computation – running the same model or equation thousands or millions of times, with small differences between each run and no interaction between the runs. But HPC excels at tightly coupled problems, where a large computation is broken into pieces and communication between the pieces is needed to solve the overall problem. (A great explanation of the differences between HPC and cloud-based solutions can be found here.)
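To make that distinction concrete, here is a minimal Python sketch (the function names, the toy model, and all numbers are invented for illustration). The first function is embarrassingly parallel: every run is independent, so cloud and HPC environments both handle it easily. The second is a single step of 1D heat diffusion, a tightly coupled problem: updating each grid cell requires its neighbors' values, so when the grid is split across many processors, those processors must exchange boundary data every step – the kind of fast inter-piece communication that dedicated HPC systems are built for.

```python
from concurrent.futures import ThreadPoolExecutor

def run_model(param):
    # One independent run: same model, different input, no communication.
    return param * param  # stand-in for a full simulation

def parameter_sweep(params):
    # Embarrassingly parallel: runs never talk to each other,
    # so they can be farmed out to any pool of machines.
    with ThreadPoolExecutor(max_workers=4) as ex:
        return list(ex.map(run_model, params))

def heat_step(u, alpha=0.25):
    # Tightly coupled: each interior cell's update needs both neighbors,
    # so workers holding different slices of u must exchange boundary
    # values every time step. (alpha <= 0.5 keeps this scheme stable.)
    interior = [u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
                for i in range(1, len(u) - 1)]
    return [u[0]] + interior + [u[-1]]  # fixed-value boundaries

if __name__ == "__main__":
    print(parameter_sweep([1, 2, 3, 4]))            # [1, 4, 9, 16]
    print(heat_step([0.0, 0.0, 100.0, 0.0, 0.0]))   # heat spreads to neighbors
```

The sweep scales by simply adding workers; the diffusion step scales only as fast as neighboring pieces can trade boundary values, which is why interconnect speed, not just core count, distinguishes HPC machines.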
Some businesses will benefit from HPC, whether via the cloud or with in-house "big iron." The challenge for business is to understand the nature of their problems, how to explore solutions to those problems, and how trillions of calculations can help solve them in a faster, cheaper, and more precise way. As data volumes, the speed of business, and the complexity of our problems all rise together, HPC will be an invaluable tool for industries striving for a more sustainable future.
Noah Goldstein, Ph.D., LEED AP, is the Scientific Lead for Site Sustainability at Lawrence Livermore National Laboratory. His research focuses on energy systems informatics, validating energy efficiency, the Energy-Water Nexus, spatial energy demand modeling, and quantifying sustainability measures in the built environment. Dr. Goldstein has worked in the sustainable building field, focusing on building energy simulation and green building rating systems. Dr. Goldstein has published several peer-reviewed articles on energy informatics, zero-net energy buildings, and simulations of human and natural systems. Dr. Goldstein earned a BA in Biology from UC Santa Cruz and an MA and PhD in Geography from UC Santa Barbara, and is a LEED AP: O+M.