AI isn’t just transforming technology. It’s testing the limits of national grids. What can be done before our power grids become overstrained?
Emerging AI server racks are approaching power densities once unthinkable: hundreds of kilowatts per rack, with megawatt-class designs on the horizon, enough to power thousands of homes.
Against the backdrop of accelerating digital growth in Asia, this leap could trigger a synchronized spike in demand that risks destabilizing even the most reliable power systems. The challenge is clear: bold new approaches must be taken, or the very infrastructure driving technological innovation could just as easily become its breaking point.
DigiconAsia.net gathered insights into the issues and possible solutions from Joshua (JP) Buzzell, VP and Data Center Chief Architect, Eaton, who spoke at Data Centre World 2025 during Tech Week Singapore in October 2025.
In what ways is AI testing the limits of national power grids in Asia Pacific economies?
JP: AI is fundamentally reshaping power demand across APAC. It is creating a significant surge in electricity demand that existing infrastructure, initially designed for traditional computing loads, is ill-equipped to handle. Worldwide data center electricity demand is poised to double to 945 terawatt-hours by 2030, a dramatic increase that exemplifies the scale of the challenge.
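To put that projection in context, a quick back-of-envelope calculation shows the implied annual growth rate. The baseline here is an assumption for illustration: roughly half the projected figure, i.e. demand doubling between 2024 and 2030.

```python
# Implied annual growth if worldwide data center electricity demand
# doubles to 945 TWh by 2030. The ~2024 baseline is an assumption:
# roughly half the projected figure.
baseline_twh = 945 / 2                      # ~472.5 TWh assumed for 2024
years = 6                                   # 2024 -> 2030
cagr = (945 / baseline_twh) ** (1 / years) - 1
print(f"{cagr:.1%}")                        # prints 12.2%
```

A sustained compounding rate above 12% a year is far beyond what most utility planning cycles were built around.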
This challenge is especially prominent in Southeast Asia, a region with ageing power infrastructure that is also positioning itself as a major data center hub. Countries like Malaysia, Indonesia and the Philippines are attracting huge investments in data centers due to factors like lower costs and favorable policies, yet the strain of rapidly increasing power demands is already evident.
In Johor, Malaysia, nearly 3% of data center applications were rejected because of unsustainable practices, specifically concerns about excessive water and power usage.
Governments will need to critically assess existing energy grid infrastructure and make the necessary investments to reinforce grid reliability, integrate renewable energy and support the power demands of AI and data center growth sustainably. Otherwise, an imbalance may emerge between digital economic ambitions and energy stability and sustainability.
With AI’s energy spikes posing a real threat to grid stability, how should data centers in the region be redesigned to absorb and manage these spikes?
JP: AI’s energy spikes are remarkable in both speed and scale. We are dealing with load swings, i.e. sudden and significant changes in power demand, that happen in microseconds. This is starkly different from traditional computing, where such swings might have happened over hours or days. When these rapid fluctuations occur, the power grid may not have enough time to react or stabilize.
This forces equipment to work at an unpredictable rate, creating ripple effects that may lead to long-lasting infrastructural damage, such as transformer overheating, ferroresonance damage and equipment failure.
Data center operators will need to pay attention to power quality metrics, and take preventive action to ensure the network is in optimal condition. We are seeing a growing interest in power quality meters that can detect AI power bursts, as well as intelligent power management software that uses AI and predictive analytics to anticipate future stressors and detect anomalies in real time.
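As an illustration of the kind of real-time anomaly detection such software performs, here is a minimal sketch. It is not Eaton's product logic; the window size and z-score threshold are arbitrary assumptions. It flags readings that deviate sharply from a rolling baseline, the signature of an AI power burst.

```python
from statistics import mean, stdev

def detect_bursts(readings_kw, window=8, z_threshold=3.0):
    """Flag indices where a power reading deviates sharply from the
    recent rolling baseline (a crude stand-in for AI burst detection)."""
    anomalies = []
    for i in range(window, len(readings_kw)):
        baseline = readings_kw[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: no meaningful z-score
        if abs(readings_kw[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Steady ~100 kW load with one sudden AI-style burst at index 10
samples = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 260, 100]
print(detect_bursts(samples))  # prints [10]
```

Production systems look at richer power quality metrics (harmonics, voltage sag, power factor) and use predictive models rather than a simple z-score, but the shape of the problem is the same: separate a genuine stressor from normal variation fast enough to act on it.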
There is also potential to reimagine the data center as an energy asset. Equipping the data center with energy generation and balancing capabilities helps reduce the strain on the utility grid during peak consumption periods.
For instance, grid-interactive battery energy storage systems (BESS) and uninterruptible power supply (UPS) systems can help store and dispatch renewable energy when required, allowing data centers to improve overall grid reliability while tapping renewable energy sources.
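The core of such grid-interactive behavior is peak shaving: discharge stored energy when facility load exceeds an agreed grid draw, recharge when there is headroom. The sketch below illustrates that dispatch logic only; the threshold, rate limit, and interval values are illustrative assumptions, not parameters of any real BESS product.

```python
def dispatch_bess(load_kw, threshold_kw, capacity_kwh, soc_kwh,
                  interval_h=0.25, max_rate_kw=500):
    """Simple peak-shaving dispatch: discharge the battery whenever
    facility load exceeds the grid-draw threshold; recharge below it.
    Returns grid draw per interval and the final state of charge."""
    grid_draw = []
    for load in load_kw:
        if load > threshold_kw and soc_kwh > 0:
            # Shave the peak, limited by rate and remaining stored energy
            discharge = min(load - threshold_kw, max_rate_kw,
                            soc_kwh / interval_h)
            soc_kwh -= discharge * interval_h
            grid_draw.append(load - discharge)
        else:
            # Recharge toward full using the headroom below the threshold
            headroom = max(threshold_kw - load, 0)
            charge = min(headroom, max_rate_kw,
                         (capacity_kwh - soc_kwh) / interval_h)
            soc_kwh += charge * interval_h
            grid_draw.append(load + charge)
    return grid_draw, soc_kwh

# A 1,400 kW spike is shaved so the grid never sees more than 1,000 kW
grid, soc = dispatch_bess([800, 1400, 900], threshold_kw=1000,
                          capacity_kwh=200, soc_kwh=200)
```

Real grid-interactive systems layer on frequency response, renewable forecasting, and market signals, but this threshold logic is the part that directly relieves the utility during peak consumption.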
What role does liquid cooling play in enabling resilient, high-density clusters?
JP: We are now well past the point where air cooling might suffice. In many modern data centers, especially those supporting AI and high-performance workloads, it simply can’t keep pace. And the numbers tell the story: according to Uptime Institute, the average rack power density has jumped by about 38% between 2022 and 2024. With AI advancing every single day, this number will only continue to increase exponentially.
Liquid cooling plays several crucial roles in this shift. At the most basic level, it allows us to manage thermal load precisely where it matters: directly at the chip. Liquid systems can extract heat with far greater efficiency, preventing hotspots and creating a uniform thermal environment. That predictability is what allows dense clusters to operate consistently, without the risk of performance throttling or premature failures.
Resilience is the other half of the story. When thermal stability improves, the system is less vulnerable to unexpected slowdowns or outages. Operators gain confidence that their clusters can keep performing under sustained load, even as applications grow more demanding. In practical terms, that means lower maintenance, fewer interventions, and a steadier experience for the customers who rely on these systems.
Beyond that, liquid cooling systems open up new design possibilities. By removing the limitations of air cooling, we can build denser racks, more compact data halls, and facilities that deliver more compute per square meter of real estate. In markets like Singapore or other dense Asian cities, where land is scarce, this becomes a decisive advantage.
What about the emerging role of medium-voltage solid-state transformers (MVSSTs) as next-gen safeguards for power grids?
JP: Conventional transformers were designed for an earlier era, one where electricity demand rose and fell in relatively steady patterns. With today’s grids facing rapid, irregular stresses — be it from AI clusters or variable output from renewables — that’s where solid-state technology provides the speed and control conventional equipment cannot.
Instead of relying only on passive coils of copper and iron to step electricity up or down, medium-voltage solid-state transformers (MVSSTs) use advanced power electronics (semiconductor-based power switches and their control systems) to actively filter, stabilize, and condition power flows in real time. That gives them two important advantages:
- They can react far faster to changes in demand, almost instantly smoothing out spikes or dips that would normally put stress on the grid.
- They can improve the quality of the power being delivered, filtering out disturbances before they spread further down the line.
This responsiveness is what positions MVSSTs as “next-generation safeguards” and makes them particularly suited to environments where resilience depends on speed, whether it’s a data center checkpointing thousands of GPUs or a solar farm suddenly shifting output as clouds pass overhead.
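Conceptually, the conditioning an MVSST's control loop performs can be likened to a very fast low-pass filter sitting between a noisy source and sensitive downstream equipment. The sketch below is a deliberately simplified stand-in, not actual transformer firmware, and the smoothing factor is an arbitrary assumption.

```python
def smooth_power(flow_kw, alpha=0.3):
    """First-order low-pass filter as a crude analogy for the active
    conditioning an MVSST control loop performs: spikes in the raw
    flow are attenuated before reaching downstream equipment."""
    smoothed = []
    state = float(flow_kw[0])
    for sample in flow_kw:
        # Move the conditioned output a fraction of the way toward
        # each raw sample, so sudden jumps are damped rather than passed on
        state += alpha * (sample - state)
        smoothed.append(round(state, 1))
    return smoothed

# A 300 kW spike on a 100 kW flow is attenuated to 160 kW downstream
print(smooth_power([100, 100, 300, 100]))
```

An actual MVSST does this with semiconductor switching at sub-millisecond timescales, and filters waveform-level disturbances rather than a scalar load figure, but the principle is the same: react faster than the disturbance can propagate.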
It is important to acknowledge that while solid-state transformer technology is moving rapidly toward commercialization, many implementations globally, and especially in Asia, remain at pilot or early deployment stages rather than full-scale grid use. But momentum is building.
Industry forecasts suggest the global solid-state transformer market will reach US$468 million by 2028, with Asia-Pacific growing at the fastest CAGR of 18.6%, supported by significant investments in China, India, Japan, and Australia to modernize grids and expand EV infrastructure.
As these investments take shape, I’m optimistic about the potential that MVSSTs have in reinforcing the stability of energy grids.