As AI and HPC continue to pick up momentum in the region, data centers in Asia Pacific are experiencing a boom despite global economic headwinds and geopolitical uncertainties.
The clarion call for sustainability – amid the fast-growing need to store, process and transmit huge amounts of data in today’s AI boom – has put data center operators at a crossroads of design and technology decisions.
The momentum behind high-performance computing (HPC) and quantum computing only sharpens the dilemma. What are data centers in Asia Pacific to do in this bittersweet situation?
We drew out some insights from Peter Huang, Global President of Thermal Management & Data Center, Castrol ON at the Data Center World Asia (DCWA) event in Singapore last month:
How and why is the data center boom in Asia Pacific, especially in hubs like Singapore, Johor, Batam, and Australia, accelerating interest in liquid cooling?
Huang: The data center boom across the Asia Pacific, particularly in burgeoning hubs like Singapore, Johor, Batam, and Australia, is rapidly accelerating interest in liquid cooling due to several factors. This region is experiencing exponential growth in digital services, AI, and high-performance computing (HPC), driving a high demand for processing power.
Traditional air-cooling systems are increasingly challenged in managing the escalating computational density and the intense heat generated by next-generation chips, especially those powering AI and HPC workloads. Rack densities are climbing far beyond the roughly 50 kilowatts per rack that air cooling can handle, with some deployments pushing towards hundreds of kilowatts per rack.
Furthermore, in tropical climates prevalent across much of Asia, high temperatures and humidity make efficient thermal management even more challenging. Coupled with increasingly stringent energy efficiency regulations and limitations on land and water use, the need for scalable, high-density cooling has become a strategic imperative.
Liquid cooling offers superior thermal management, significantly enhances energy efficiency, and reduces energy and water consumption as compared to traditional air-cooling systems. This makes it not just an option, but increasingly viewed as a critical technology for the region’s digital future.
As the first large-scale liquid cooling projects come online, what can data center operators in the region learn from these early adopters?
Huang: The first large-scale liquid cooling projects in the Asia Pacific region confirm that this is no longer an experiment. It is an operational priority for high-density computing. Data center operators must internalize three core strategic lessons from these early adopters:
- Resilience relies on fluid integrity: The critical challenge is reliability. Liquid cooling introduces a new risk profile in which the fluid itself is integral to the system. Operators must prioritize proactive, continuous monitoring of fluid health (chemistry, filtration, and flow) to prevent contamination or corrosion, which can cause efficiency loss or catastrophic damage to high-value hardware. Redundancy must be re-evaluated down to the rack level, and specialized maintenance training is imperative.
- PUE is insufficient as a standalone metric: Liquid cooling fundamentally changes the efficiency equation. Operators must rethink metrics beyond traditional PUE (Power Usage Effectiveness), as liquid-cooled servers can paradoxically drive PUE higher while consuming less total power. A more nuanced approach, or the adoption of alternative metrics, is needed to accurately communicate the significant gains achieved in Total Usage Effectiveness (TUE) and energy consumption.
- Success requires strategic co-engineering: The cooling system is now the fastest-evolving and most critical part of the data center. Success hinges on deep collaboration and co-engineering from the design stage. Operators must actively partner with fluid specialists and system manufacturers to accelerate innovation, ensure mechanical integrity, and align the chosen approach, be it direct-to-chip, immersion, or a hybrid model, with the long-term scalability and sustainability demands of future AI-driven workloads. This proactive design is the only way to de-risk large-scale capital expenditure.
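The PUE paradox described above can be made concrete with a small worked example. The sketch below uses hypothetical power figures (the 1,400 kW / 1,250 kW facilities and the 150 kW fan load are illustrative, not figures from the interview) and the common definitions PUE = total facility power / IT power and TUE = ITUE × PUE, where ITUE treats server-internal overhead such as fans as non-compute load:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

def tue(total_facility_kw: float, it_kw: float, compute_kw: float) -> float:
    """Total Usage Effectiveness: ITUE * PUE, where ITUE = IT power / compute
    power, so server fans and PSU losses count as overhead, not useful work."""
    itue = it_kw / compute_kw
    return itue * pue(total_facility_kw, it_kw)

# Hypothetical air-cooled facility: 850 kW compute + 150 kW server fans
# = 1,000 kW IT load, plus 400 kW facility cooling overhead.
air_total, air_it, air_compute = 1400, 1000, 850

# Same facility converted to liquid cooling: server fans removed, compute
# unchanged, facility overhead assumed unchanged for simplicity.
liq_total, liq_it, liq_compute = 1250, 850, 850

print(f"Air:    PUE={pue(air_total, air_it):.2f}  "
      f"TUE={tue(air_total, air_it, air_compute):.2f}  total={air_total} kW")
print(f"Liquid: PUE={pue(liq_total, liq_it):.2f}  "
      f"TUE={tue(liq_total, liq_it, liq_compute):.2f}  total={liq_total} kW")

# PUE rises from 1.40 to 1.47 even though total draw falls by 150 kW,
# because the fan load moved out of the PUE denominator; TUE improves
# from 1.65 to 1.47 and captures the real efficiency gain.
```

This is why a liquid-cooled site can look worse on PUE alone while consuming materially less energy.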
How should data centers support the next generation of AI and HPC workloads?
Huang:

- Mandate high-density liquid cooling: The immediate requirement is to integrate liquid cooling (D2C or immersion) as the core thermal strategy. Air cooling cannot sustain the dense computational demands of modern AI/HPC, and liquid cooling is increasingly required to maintain optimal performance and prevent hardware damage.
- Design for scalable resilience: Future-ready data centers must be designed with modularity and scalability. This allows for the integration of rapidly evolving cooling technology and increasing power densities without costly overhauls. Furthermore, this resilience must include end-to-end management services focused on the fluid’s continuous health and compliance.
- Leverage fluid science and strategic alliances: Success in this transition depends on expertise. Data centers should actively pursue strategic partnerships with specialists who bring deep knowledge in fluid science and thermal engineering. Collaborating with manufacturers and system integrators is the most efficient way to ensure compatibility, accelerate adoption, and maximize energy and water efficiency to meet critical sustainability targets.
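The continuous fluid-health monitoring mentioned above (chemistry, filtration, and flow) can be sketched as a simple rack-level alarm check. Everything here is a hypothetical illustration: the sensor fields, the `check_fluid_health` helper, and the alarm thresholds are assumptions for the sketch, not vendor figures, which in practice would come from the fluid specialist's specification:

```python
from dataclasses import dataclass

@dataclass
class CoolantSample:
    conductivity_us_cm: float   # electrical conductivity, a contamination proxy
    particle_count_per_ml: int  # particulate load, a filtration-health proxy
    flow_rate_lpm: float        # loop flow in litres per minute

# Illustrative alarm thresholds only (not real vendor limits).
LIMITS = {
    "conductivity_max": 5.0,   # rising conductivity suggests ionic contamination
    "particles_max": 100,      # high counts suggest filter degradation
    "flow_min_lpm": 30.0,      # low flow risks hot spots at the cold plate
}

def check_fluid_health(sample: CoolantSample) -> list[str]:
    """Return alarm messages for a sample; an empty list means the loop is healthy."""
    alarms = []
    if sample.conductivity_us_cm > LIMITS["conductivity_max"]:
        alarms.append("conductivity high: possible contamination or corrosion")
    if sample.particle_count_per_ml > LIMITS["particles_max"]:
        alarms.append("particle count high: check filtration")
    if sample.flow_rate_lpm < LIMITS["flow_min_lpm"]:
        alarms.append("flow low: risk of hot spots at the rack")
    return alarms

# A sample with elevated conductivity but healthy filtration and flow:
print(check_fluid_health(CoolantSample(6.2, 40, 42.0)))
```

The design point is that fluid health becomes a first-class operational signal, polled continuously like power and temperature rather than inspected on a maintenance calendar.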
How are operators reducing complexity and the total cost of ownership to futureproof the data center ecosystem and ensure the sustainability and growth of Asia Pacific’s digital economy?
Huang: Data center operators in the Asia Pacific region are strategically mitigating complexity and reducing the total cost of ownership by executing a fundamental pivot in infrastructure design. This is driven by the necessity of end-to-end thermal governance and the strategic imperative of sustainability.
Operators are increasingly adopting comprehensive liquid cooling solutions, recognizing that maximizing efficiency (PUE/WUE) is paramount for viability and growth. They are now prioritizing holistic, lifecycle management services, similar to those provided by Castrol ON, which manage everything from deployment and monitoring to responsible waste management.
This integrated approach minimizes operational risk, supports compliance, and provides the path to long-term cost predictability in high-density environments.
To successfully design an ecosystem for AI and HPC with long-term adaptability, operators are focusing on strategic co-engineering and modularity. This involves designing data centers for flexible scalability and actively forging strong partnerships with specialized fluid providers, system integrators, and hardware manufacturers.
These alliances simplify the adoption of advanced cooling, reduce the execution learning curve, and aim to build infrastructure that is adaptable to exponential workload growth, thereby supporting the sustainable future of the region’s digital economy.