Discover how industry leaders and governments are tackling these challenges to shape the future of GPU-driven DC infrastructure.
The Data Center (DC) market in Southeast Asia (SEA) is at an inflection point. In a short period, it has emerged as one of the fastest-growing DC regions globally. To lead this surge, stakeholders must prepare for AI-driven demands now.
This trend will be supercharged by rapid growth in AI compute requirements. Moving forward, AI workloads will demand far more compute power than conventional processes, as power-hungry graphics processing units (GPUs) scale to meet demand. In 2024, the need for energy-efficient strategies became clear. This year, DCs should act on them:
- Scale GPU capacity by Q3 2025 to support AI workloads
- Track compute needs quarterly to avoid bottlenecks
Already, targeted upgrades are on the table, driving changes in DC builds that will advance traditional DCs and AI factories.
Reviewing the AI-driven energy options
As the race to support AI compute power gains steam, the delivery model is evolving: AI as a service is paving a smooth road for enterprise adoption of generative AI (GenAI), which can fill roles from customer service to financial planning. DCs should capitalize on this:
- Offer AI-as-a-Service platforms by mid-2025 to attract clients
DCs are increasingly using GenAI to address the lack of skilled IT staff, with AI monitoring, managing, and supporting lean teams. This intuitive approach boosts efficiency and eases labor stresses. To implement:
- Deploy AI monitoring tools by Q4 2025 to streamline operations
However, GenAI systems can use up to 33x more energy than non-AI processes, with computational power doubling roughly every 100 days. As these systems grow, DCs will multiply, spiking energy use. With power demands rising, reliable grids and efficient generation are critical. DCs should:
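The cited doubling period compounds quickly, and a short calculation makes the trajectory concrete. This is a minimal sketch: the 100-day doubling period is the article's figure, while the baseline load is an illustrative assumption.

```python
# Sketch: projecting AI compute demand from the reported ~100-day
# doubling period. The baseline value (1.0) is an assumed unit load.

def projected_compute(baseline: float, days: int, doubling_days: int = 100) -> float:
    """Compute demand after `days`, doubling every `doubling_days`."""
    return baseline * 2 ** (days / doubling_days)

# Doubling every 100 days compounds to roughly 12.5x in one year:
growth_per_year = projected_compute(1.0, 365)
print(f"Annual growth factor: {growth_per_year:.1f}x")  # ~12.5x
```

At that rate, demand grows by more than an order of magnitude per year, which is why grid reliability and generation efficiency dominate the planning conversation.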
- Secure renewable energy contracts by 2026 for stability
- Support studies for small nuclear reactors by Q3 2025
Simultaneously, DCs in the region should reduce energy consumption — as a matter of economics and environmental responsibility — by deploying liquid cooling systems instead of less-efficient air cooling. As GPU-powered AI compute scales, cooling efficiency is critical, especially given SEA's tropical climate, where heatwaves have already caused outages. Take action:
- Install liquid cooling for GPUs by mid-2025 to cut energy costs
- Use heat sensors daily to prevent downtime
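To see why cooling efficiency moves the economics, compare total facility energy under different Power Usage Effectiveness (PUE) ratios. This is a minimal sketch; the PUE values and IT load below are illustrative assumptions, not figures from this article.

```python
# Sketch: annual facility energy under air vs. liquid cooling,
# using illustrative PUE (Power Usage Effectiveness) assumptions.

def annual_facility_mwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy per year for a given IT load and PUE."""
    hours_per_year = 8760
    return it_load_mw * pue * hours_per_year

it_load = 10.0  # MW of GPU/IT load (assumed)
air = annual_facility_mwh(it_load, pue=1.5)      # assumed air-cooled PUE
liquid = annual_facility_mwh(it_load, pue=1.15)  # assumed liquid-cooled PUE
savings = air - liquid
print(f"Air: {air:,.0f} MWh/yr, Liquid: {liquid:,.0f} MWh/yr, "
      f"Saved: {savings:,.0f} MWh/yr")
```

Even under these rough assumptions, the gap runs to tens of thousands of MWh per year at a 10 MW site, and it widens as GPU density rises.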
Optimizing the profile of DC infrastructure
Related to power and cooling, DCs’ fiber infrastructure must become denser in AI compute facilities. GPUs in AI arrays need full-mesh networking: every GPU must connect to every other, which multiplies link counts and compounds cooling challenges. DCs should use compact fiber systems to manage connections, packing more fibers into the existing footprint to power AI networks:
- Upgrade to compact fiber by Q4 2025 for GPU networks
- Plan for native 800G bandwidth by 2026 to handle data surges
As DCs migrate from 2x400G (aggregate 800G) to native 800G and beyond, this fiber infrastructure will provide capacity for future upgrades.
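The fiber-density pressure follows directly from the connectivity math: a full mesh of n GPUs needs n(n-1)/2 point-to-point links, so link counts grow quadratically with cluster size. A minimal sketch, with illustrative cluster sizes:

```python
# Sketch: full-mesh link count grows quadratically with GPU count,
# which is why fiber density must rise in AI compute halls.

def full_mesh_links(n_gpus: int) -> int:
    """Point-to-point links needed so every GPU reaches every other."""
    return n_gpus * (n_gpus - 1) // 2

for n in (8, 64, 512):  # illustrative cluster sizes
    print(f"{n:4d} GPUs -> {full_mesh_links(n):7d} links")
# 8 GPUs need 28 links; 512 GPUs need 130,816
```

Real fabrics use switched topologies rather than literal point-to-point cabling, but the quadratic growth in required connectivity is what drives the move to compact fiber and higher per-link bandwidth.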
A holistic regional approach needed
The changes coming to DCs in this AI dawn will unlock opportunities in SEA. While shortages of skilled IT expertise persist, AI is showing ways to fill gaps with autonomous monitoring and management. DCs should:
- Pilot AI management tools by Q3 2025 to reduce staffing needs
The road ahead for SEA’s DC industry is clear: though industry associations remain fragmented, governments must take steps to foster collaboration. The Asia-Pacific Data Centre Association (APADCA) aims to enhance cooperation, set standards, and strengthen sustainability. To drive progress:
- Join the APADCA by mid-2025 to shape policies
- Target a 10% emissions reduction by 2026
Such collaborations are key to driving positive industry impact and represent a significant step toward sustaining SEA’s progress and potential as a global DC powerhouse.