Amid the AI adoption momentum, a critical challenge is coming into sharper focus: how storage infrastructure can keep pace with AI’s soaring data demands across complex, hybrid environments.
Asia is fast becoming a global hub for AI innovation, with enterprises accelerating adoption across sectors.
Yet, amid this momentum, organizations are grappling with fragmented data landscapes spanning on-premises systems, edge locations, and multiple clouds. A critical challenge is coming into sharper focus: how to evolve storage infrastructure to keep pace with AI’s soaring data demands across complex, hybrid environments.
Gartner forecasts that 90% of organizations will adopt hybrid cloud through 2027, driven by concerns around cost, scalability, and data governance. As a result, unified AI-optimized storage is becoming a strategic imperative for businesses aiming to scale AI initiatives while managing cost, complexity, and operational risk.
DigiconAsia.net gathered more insights from Justin Chiah, Vice President and General Manager, APAC Storage and Data Services, HPE.
What does it take to unify fragmented data architectures across edge, core, and cloud, and why is this foundational for AI at scale?
Chiah: Most enterprises deal with silos created over decades of legacy systems, disparate storage environments, and multi-cloud sprawl. The volume of data that modern enterprises handle, from IoT devices at the edge to the core and the cloud, can be overwhelming, yet that data is often trapped in isolated silos.
The foundation for solving these issues is an open, unified data platform that abstracts away the complexity of where data physically resides, whether at the edge, in on-premises data centers, or in the public cloud, so enterprises can extract real-time value from it. It also requires unifying data across multi-cloud and multi-vendor environments.
For AI at scale, this unification is critical. AI models thrive on massive volumes of high-quality, well-governed data, and if that data is trapped in silos or bogged down by inconsistent architectures, the entire AI initiative stalls. For instance, HPE’s Data Fabric helps eliminate data silos by creating a unified data layer that connects and manages data across cloud, edge, and on-prem environments.
A unified data fabric enables seamless access and real-time analytics on diverse data types without needing to move or duplicate the data. It means organizations can seamlessly move, process, and analyze data where it creates the most value, fueling faster innovation, better insights, and a clear path from experimentation to enterprise-wide AI adoption.
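To make the idea of a unified data layer concrete, here is a minimal, purely illustrative Python sketch (not HPE Data Fabric’s actual API): it routes reads by URI scheme so application code asks for data by name rather than by location. The backend readers and URIs are hypothetical stand-ins.

```python
# Hypothetical sketch of a unified data layer (illustrative only, not a
# vendor API): reads are routed by URI scheme, so callers never deal
# with where the data physically lives.
from pathlib import Path
from urllib.parse import urlparse


class UnifiedDataLayer:
    """Single access point over edge, on-prem, and cloud backends."""

    def __init__(self):
        self._backends = {}  # scheme -> callable returning bytes

    def register(self, scheme, reader):
        """Register a backend reader, e.g. 'file', 's3', 'edge'."""
        self._backends[scheme] = reader

    def read(self, uri):
        """Fetch data by URI, regardless of which backend holds it."""
        scheme = urlparse(uri).scheme or "file"
        if scheme not in self._backends:
            raise ValueError(f"No backend registered for scheme '{scheme}'")
        return self._backends[scheme](uri)


# Example wiring with stand-in readers; a real deployment would plug in
# object-store, NFS, or edge-gateway clients here.
fabric = UnifiedDataLayer()
fabric.register("file", lambda uri: Path(urlparse(uri).path).read_bytes())
fabric.register("s3", lambda uri: b"...bytes fetched from object storage...")

payload = fabric.read("s3://sensor-archive/2024/turbine-07.parquet")
```

In a real data fabric, the registered backends would be production storage clients, and the layer would also handle caching, placement policy, and governance rather than just routing reads.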
How can enterprises scale AI workloads with agility while keeping costs and energy consumption in check?
Chiah: Enterprises can scale AI workloads with agility by adopting elastic, workload-aware architectures that expand when demand spikes and contract when it subsides, avoiding costly overprovisioning.
Leveraging containerization and orchestration ensures AI models run where they’re most efficient across edge, core, and cloud. At the same time, aligning with energy-efficient designs and intelligent data tiering allows organizations to keep costs predictable and energy use in check, proving that AI at scale can be both high-performance and sustainable.
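As a rough sketch of what intelligent data tiering means in practice, the example below (hypothetical thresholds and tier names, not a vendor feature) classifies datasets by access recency and frequency and recommends the cheapest tier that still fits the workload.

```python
# Hypothetical data-tiering sketch: place data on the cheapest tier
# that still meets its access pattern. Thresholds are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class DatasetStats:
    name: str
    last_accessed: datetime
    reads_per_day: float


def recommend_tier(stats: DatasetStats, now: datetime) -> str:
    """Map access recency and frequency to a storage tier."""
    idle = now - stats.last_accessed
    if idle < timedelta(days=7) and stats.reads_per_day > 100:
        return "hot-nvme"          # active AI training / feature data
    if idle < timedelta(days=90):
        return "warm-capacity"     # occasionally queried
    return "cold-object-archive"   # retained for compliance or retraining


now = datetime.now()
datasets = [
    DatasetStats("feature-store", now - timedelta(hours=2), 5000),
    DatasetStats("2022-clickstream", now - timedelta(days=400), 0.1),
]
for ds in datasets:
    print(ds.name, "->", recommend_tier(ds, now))
```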
Therefore, enterprises need simplified storage and data management across hybrid cloud: one storage platform and one AI-driven cloud experience for every workload.
Enterprises deploying block, file, and object workloads on a single disaggregated, cloud-managed architecture can reduce silos and complexity and decrease overprovisioning while scaling capacity and performance independently, leading to 40% lower storage costs.
Moreover, optimizing sustainability with a modern power-efficient storage architecture enhances performance while reducing energy costs, carbon emissions, and e-waste compared to traditional storage, leading to 45% lower power consumption.
Lastly, high efficiency for enterprise workloads through advanced data reduction cuts physical storage costs, saves energy, and shrinks the data center footprint, leading to a 30% smaller storage footprint.
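To illustrate what data reduction does in principle, here is a deliberately simplified deduplication sketch that stores each unique block only once, keyed by its hash. The percentage figures quoted above come from HPE, not from this toy example, and real arrays add compression and variable-length chunking on top.

```python
# Simplified deduplication sketch: store each unique block once,
# keyed by its SHA-256 digest. Real systems add compression,
# variable-length chunking, and metadata handling.
import hashlib

BLOCK_SIZE = 4096


def dedupe(data: bytes):
    """Split data into fixed blocks and keep only the unique ones."""
    store = {}      # digest -> block bytes (written once)
    recipe = []     # ordered digests needed to rebuild the data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe


# Highly redundant input (e.g. VM images, logs) shrinks dramatically.
data = b"A" * BLOCK_SIZE * 900 + b"B" * BLOCK_SIZE * 100
store, recipe = dedupe(data)
physical = sum(len(b) for b in store.values())
print(f"logical {len(data)} bytes -> physical {physical} bytes "
      f"({physical / len(data):.1%} of original)")
```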
Scaling AI workloads responsibly requires more than raw power; it also demands agile architectures, intelligent data practices, and sustainability by design to avoid spiralling costs and environmental impact.
How is AI not just driving demand for modern storage, but also revolutionizing how storage is managed?
Chiah: Traditional storage architectures are no longer equipped to handle the speed, scale, and complexity of AI-era data. To keep pace, organizations are shifting to intelligent, software-defined platforms that offer dynamic scalability and adaptability.
Modern systems now use AI to predict performance bottlenecks, automatically rebalance workloads, and even take corrective action before disruptions occur. Tasks that once demanded hours, or even days, of manual intervention, such as performance tuning, data tiering, and issue resolution, are now accelerated and streamlined through AI-driven insights and automation.
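As a toy illustration of ‘AI for storage’, the sketch below fits a linear trend to recent capacity telemetry and raises an alert before a pool fills up. Production systems use far richer models and fleet-wide learning; the sample data, names, and thresholds here are hypothetical.

```python
# Toy predictive-alerting sketch: extrapolate recent capacity usage
# and warn before a storage pool hits its limit. Illustrative only;
# assumes at least two daily samples.
from statistics import mean


def days_until_full(daily_used_tb, capacity_tb):
    """Fit a least-squares linear trend to usage samples and project forward."""
    n = len(daily_used_tb)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(daily_used_tb)
    slope = (
        sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, daily_used_tb))
        / sum((x - x_bar) ** 2 for x in xs)
    )
    if slope <= 0:
        return None  # usage flat or shrinking; no exhaustion predicted
    return (capacity_tb - daily_used_tb[-1]) / slope


usage = [410, 418, 431, 440, 452, 467, 480]  # TB used, last 7 days
eta = days_until_full(usage, capacity_tb=600)
if eta is not None and eta < 30:
    print(f"Alert: pool projected to fill in ~{eta:.0f} days - expand or rebalance")
```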
At HPE, we believe in ‘Storage for AI’ and ‘AI for Storage’. Efficient storage and data management are not just nice to have, but they are game changers. They boost agility, simplify operations, accelerate time to value, reduce risks, and pave the way for AI to drive innovation and growth.
Organizations will need enterprise-grade performance at scale with a disaggregated, shared-everything architecture that eliminates traditional bottlenecks. Purpose-built for data lakes, AI/ML/DL, and advanced analytics, your storage solution should accelerate insights, fuel innovation, and unlock faster time-to-value across your hybrid estate. Moreover, you need simplified, unified cloud management of unstructured data services that supports enterprise performance at AI scale, spanning every stage of the AI lifecycle and accelerating the most data-intensive AI applications. This enables enterprises to optimize their hybrid estate to take full advantage of AI.
In the AI era, embedding intelligence at the point of data capture and storage is critical to truly understanding and unlocking the value of the data estate. However, no single storage solution can meet the demands of every AI workload.
What lessons can be drawn from enterprises that have successfully reimagined their storage architecture to meet AI demands?
Chiah: Enterprises of today are adopting a data-first mindset by investing in storage that is flexible, scalable, and intelligent across edge, core, and cloud. The most forward-looking organizations are also aligning with energy-efficient designs and models, proving that it’s possible to scale AI responsibly without runaway costs or environmental impact.
For example, London-based Shawbrook Bank transformed its data storage infrastructure to deliver more personalized, data-driven services with greater speed, reliability, and cost efficiency.
Shawbrook Bank was seeking to enhance the customer experience through technology and had invested significantly in on-premises technology, including servers and storage, which it wanted to preserve.
As part of this transformation, Shawbrook deployed HPE Alletra Storage MP B10000 and HPE ProLiant DL360 Gen11 Servers. This significantly improved system efficiency by reducing latency and increasing uptime, especially during busy periods.