What lies ahead in the real world as supercomputers become mainstream in a digital world of pervasive AI and quantum computing?
The power of supercomputing is no longer reserved for national research labs or large global corporations. The influence of supercomputers – often referred to as high-performance computing (HPC) systems – is fast expanding to shape the future of diverse industries.
Of course, AI, quantum computing, GPUs and neural processing units (NPUs) are all part of the equation that’s making supercomputing a critical part of our digital future.
Beyond the usual suspects of scientific research and finance, the technology community is watching eagerly to see which lesser-known sectors and industries are poised to be next in line for the HPC revolution.
DigiconAsia finds out more about the evolving HPC landscape and its adoption in various industries, and the subsequent impact on the design and management of data centers, in an interview with Nicholas Malaya, Fellow, High-Performance Computing, AMD.
Beyond research labs and Wall Street, which industries stand to benefit most from HPC adoption in the next 5-10 years?
Nicholas Malaya (NM): High-performance computing (HPC) systems provide broad societal benefit: they are used to solve the world’s most complex problems. These use cases span industries and applications across the planet, including energy, engineering, healthcare, automotive, aerospace, and more.
Notably, there are exciting developments in the healthcare industry, where HPC can be used to identify new treatments for diseases such as Alzheimer’s and cardiovascular disease:
- The University of California is currently using an AMD-powered HPC system to better understand the root cause of Alzheimer’s disease. By running simulations that yield first-ever mechanistic insights, the team is uncovering the pathways that govern thought, language, and memory.
- The University of Utah uses the same compute system, Expanse at the San Diego Supercomputer Center (SDSC), to model a treatment for atherosclerosis, a major cause of human cardiovascular disease. Breakthroughs in this area of science could help millions of people around the world mitigate the impact of diseases that are often fatal.
How is AI defining the next era of computing? How is the convergence of AI and HPC driving new opportunities for enterprises?
NM: The HPC and AI landscapes continue to undergo significant transformation as semiconductor technology evolves. Less than three years ago, we saw the deployment of the world’s first exascale computer, Frontier at Oak Ridge National Laboratory, powered by AMD EPYC CPUs and Instinct GPUs.
Now, in 2024, we are about to witness the release of a new exascale supercomputer, El Capitan, which is expected to perform at 2 exaflops.
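To put that figure in perspective, the short back-of-the-envelope calculation below shows how exaflop-class throughput translates into wall-clock time. The workload size and sustained-efficiency fraction are purely illustrative assumptions, not benchmarks cited in the interview:

```python
# Back-of-the-envelope: how long would a hypothetical workload of
# 10^21 floating-point operations take on a 2-exaflop machine?
# (Workload size and efficiency are illustrative assumptions, not benchmarks.)

PEAK_FLOPS = 2e18      # 2 exaflops = 2 x 10^18 floating-point operations per second
WORKLOAD_OPS = 1e21    # hypothetical simulation requiring 10^21 operations
EFFICIENCY = 0.7       # assumed fraction of peak performance actually sustained

seconds = WORKLOAD_OPS / (PEAK_FLOPS * EFFICIENCY)
print(f"~{seconds / 60:.1f} minutes at 2 exaflops")        # roughly 12 minutes

# The same workload on a 10-petaflop system (1e16 flop/s) would take well over a day.
slower_system = WORKLOAD_OPS / (1e16 * EFFICIENCY)
print(f"~{slower_system / 3600:.1f} hours at 10 petaflops")
```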
As an example, the AMD journey in the HPC and AI sectors has been marked by continuous innovation and strategic partnerships. The AMD Instinct accelerators are a testament to this progress, offering cutting-edge architecture and technology designed specifically for deep learning, scientific research and complex computational tasks.
These powerful computing solutions are setting new standards in processing capability, efficiency, and scalability, making them the go-to choice for organizations and research institutions globally. Notably, they deliver leadership performance for the modern data center at any scale – from single-server solutions to the world’s largest exascale-class supercomputers – and are uniquely engineered to power the most demanding AI and HPC workloads.
All data centers of the future will require supercomputing technologies to support new modeling, simulation, analytics, and AI workflows. As such, enterprises that make this transformation a priority now will have the ability to disrupt and lead in the age of insight.
What are some use cases and areas that make an excellent opportunity to combine HPC and AI?
NM: The capabilities of HPC systems have expanded into a broader range of applications over the past few years. We now have more processing power and memory than ever before to solve more complex problems.
- Machine learning (ML). HPC systems provide the computing power that highly advanced ML needs to analyze and validate vast amounts of data. One specific use case is HPC-driven cancer research that detects melanoma in photos (a minimal illustrative sketch follows this list).
- Big data analytics. Rapid comparison and correlation of massive data sets are used to supplement research and solve problems in academic, scientific, finance, business, health, cybersecurity, and government applications. This work requires massive throughput and compute capabilities.
- Advanced modeling and simulation. By removing the need for physical prototypes in the early stages, advanced modeling and simulation allows companies to save time, materials, and staffing costs and bring innovative products to market faster. HPC modeling and simulation are used for drug discovery and testing, energy-efficient automotive and aerospace designs, prediction of extreme climate and weather systems, and advanced materials and energy applications.
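The sketch below illustrates, in the simplest possible terms, the kind of image-classification workload mentioned in the melanoma example. It is not the actual research code: the model, dataset, and hyperparameters are assumptions, and it uses PyTorch only as a familiar stand-in for whatever framework such a project would run at scale.

```python
# Minimal sketch of an image-classification workload (e.g. flagging suspicious
# skin lesions in photos). NOT the actual research code -- model, data, and
# hyperparameters here are illustrative assumptions only.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Tiny convolutional network for binary skin-lesion classification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)   # 2 classes: benign vs. suspicious

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One illustrative training step on random stand-in data; a real pipeline
# would stream labelled dermatology images across many GPUs.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = LesionClassifier().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 128, 128, device=device)    # fake batch of photos
labels = torch.randint(0, 2, (8,), device=device)      # fake ground truth

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"illustrative training loss: {loss.item():.3f}")
```

The value HPC adds is not in this toy loop itself but in scaling it: distributing far larger models and datasets across thousands of accelerators so that training and validation finish in hours rather than months.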
What are the biggest obstacles hindering wider HPC adoption? How can they be overcome?
NM: Sustainability: As businesses harness more data to take on greater challenges, the resulting compute and analytics needs are constrained by resources such as power, space, cooling, and budgets, making the need for efficient IT more critical than ever.
A sizeable and growing fraction of the global energy budget is consumed by data centers, and the performance demands of these workloads are driving up energy consumption and the demand for energy-efficient solutions. Performance per watt and energy efficiency matter more than ever.
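As a rough illustration of why this metric matters, performance per watt is simply sustained throughput divided by power draw. The system figures below are hypothetical, chosen only to show the arithmetic, and are not vendor benchmark claims:

```python
# Illustrative comparison of two hypothetical systems on performance per watt.
# None of these numbers are vendor benchmarks; they only demonstrate the arithmetic.

def gflops_per_watt(sustained_pflops: float, power_megawatts: float) -> float:
    """Convert sustained petaflops and megawatts into gigaflops per watt."""
    gflops = sustained_pflops * 1e6   # 1 petaflop = 1e6 gigaflops
    watts = power_megawatts * 1e6     # 1 MW = 1e6 W
    return gflops / watts

older_system = gflops_per_watt(sustained_pflops=200, power_megawatts=13)   # ~15 GF/W
newer_system = gflops_per_watt(sustained_pflops=1200, power_megawatts=22)  # ~55 GF/W

print(f"older: {older_system:.1f} GF/W, newer: {newer_system:.1f} GF/W")
print(f"efficiency gain: ~{newer_system / older_system:.1f}x")
```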
Performance: Massive data growth has put tremendous pressure on outdated systems and created an urgent need to extract more value from data using advanced tools such as modeling and simulation, analytics, AI, and machine learning (ML). These workloads require HPC to deliver accurate, real-time results while increasing infrastructure utilization and ROI.
To tackle the most data-intensive workloads, businesses must build modern compute environments that are highly performant, cost-effective, and flexible. Adopting right-sized HPC and supercomputing technologies enables businesses to gain faster insights and succeed, now and far into the future.
Supercomputing in data centers hinges on two things: power management and temperature measurement. So, in predicting what the future of HPC will look like, it is also worth noting that data centers in Asia will have to evolve on both fronts to move HPC beyond proof of concept.
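The sketch below illustrates that idea in the simplest terms: a polling loop that checks rack power draw and inlet temperature against facility limits. The telemetry source, thresholds, and throttling action are all hypothetical placeholders, not a real facility's API or any specific vendor's tooling:

```python
# Illustrative sketch of rack-level power and temperature monitoring.
# read_rack_telemetry() and throttle_jobs() are hypothetical placeholders --
# a real facility would use its own BMC/DCIM telemetry and job scheduler.
import random
import time

POWER_BUDGET_KW = 40.0       # assumed per-rack power envelope
INLET_TEMP_LIMIT_C = 32.0    # assumed cooling limit for inlet air

def read_rack_telemetry(rack_id: str) -> dict:
    """Stand-in for a real sensor query; returns random but plausible values."""
    return {
        "power_kw": random.uniform(25.0, 45.0),
        "inlet_temp_c": random.uniform(24.0, 35.0),
    }

def throttle_jobs(rack_id: str) -> None:
    """Placeholder for asking the scheduler to cap power on this rack."""
    print(f"[{rack_id}] over budget -- requesting power cap / job throttling")

def monitor(rack_ids, interval_s: float = 5.0, cycles: int = 3) -> None:
    """Poll each rack a few times and react when limits are exceeded."""
    for _ in range(cycles):
        for rack in rack_ids:
            reading = read_rack_telemetry(rack)
            if (reading["power_kw"] > POWER_BUDGET_KW
                    or reading["inlet_temp_c"] > INLET_TEMP_LIMIT_C):
                throttle_jobs(rack)
            else:
                print(f"[{rack}] ok: {reading['power_kw']:.1f} kW, "
                      f"{reading['inlet_temp_c']:.1f} C")
        time.sleep(interval_s)

monitor(["rack-01", "rack-02"], interval_s=0.1)
```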