Future-proofing storage for AI

February 16, 2026

Alex Segeda, Business Development Manager, EMEAI at Western Digital (WD), discusses why storage must be designed as an active, scalable foundation for AI-driven infrastructure rather than a passive layer.

Could you introduce yourself and your role at WD?

I’m Alex Segeda, and I have more than 20 years in the IT industry across engineering, sales and business development.

Today I’m responsible for scaling WD’s HDD and platforms business across the EMEAI region (Europe, Middle East, Africa and India).

My focus is on driving sustainable growth by aligning advanced storage technology with real business outcomes, particularly in data-intensive environments such as regional cloud service providers (CSPs), neo cloud, HPC, academia and system integrators.

I started my career as a software developer, which gave me a strong technical foundation and a deep appreciation for how systems are built from the ground up.

Over time, I moved into sales leadership and business development roles for global technology and storage companies.

That journey has allowed me to bridge two worlds: Deep technical understanding and real-world customer execution.

After working across multiple countries, what has most shaped your approach to scaling IT infrastructure?

Working internationally and cross-functionally has reinforced one principle above all others: Customer-centricity through collaboration.

Infrastructure doesn’t scale successfully in isolation. It scales when technology teams, partners and customers are aligned around workloads, clear use cases and long-term goals.

Different regions have very different IT maturity levels, regulatory requirements and economic realities.

For example, smart city projects in Europe often prioritise data sovereignty and energy efficiency, while deployments in the Middle East or India focus more on rapid scalability and real-time insights.

A one-size-fits-all infrastructure strategy simply doesn’t work.

This is why close collaboration across engineering, sales and operations is a high priority for me.

At WD, it’s one of our core values to come together as one team and carefully listen to and understand customers’ needs.

The most successful infrastructure projects are those where vendors act as long-term partners, not just one-time technology providers.

With the company’s 50+ year history of storage innovation, we bring a wealth of experience in building reliable architectures that can scale with the massive volumes of the zettabyte age.

According to IDC, the annual volume of data generated is expected to more than double to 527.5 ZB by the end of 2029.

That reality demands infrastructure strategies built around scalability, flexibility and reliability with the right economics.

How has your definition of ‘future-proof infrastructure’ evolved with the rise of AI?

Earlier in my career, future-proofing infrastructure largely meant investing in more compute power or faster storage.

Performance dominated the discussions. But the rise of AI and other data-hungry workloads has fundamentally changed that equation.

The rapid adoption of AI has made one thing very clear: The digital world runs on data and data often lives on high-capacity HDD storage.

Today, no digital service, no smart video system and no AI model can function without mass-capacity, scalable storage, especially HDDs, at its core.

While high-performance compute remains important, the ability to store, retain and access vast volumes of data economically at scale is critical to unlocking business success in the AI era.

Future-proof infrastructure is no longer just about raw speed.

It’s about finding the right balance to support next-generation workloads: Combining performance where it matters with efficient, high-capacity storage that can scale for the long term.

What are organisations most often underestimating when preparing their IT infrastructure for AI?

Many infrastructure discussions today focus heavily on GPUs, AI frameworks and the availability of skilled professionals.

While all of these are important, organisations often underestimate, sometimes even overlook, the fundamental role of storage, especially HDDs, in AI-ready architectures.

HDDs are the backbone of scalable IT architectures. They remain one of the most economical ways to store massive datasets while balancing cost and performance at scale.

Precisely because of this, IDC estimates that HDDs will continue to make up almost 80% of installed storage used in hyperscale and cloud data centres by 2028.

In AI-driven video environments, storage is so much more than a passive repository.

It feeds training data, supports model refinement, and enables retrospective analysis. Underestimating storage requirements leads to bottlenecks, rising costs and limited AI outcomes.

Simply put, without a reliable storage foundation, even the most advanced AI initiatives will struggle to deliver value.

What infrastructure lessons from enterprise storage still apply and which no longer hold in the age of AI?

Several core principles from enterprise storage remain highly relevant.

Reliability, data protection and lifecycle management are more important than ever.

In surveillance and smart video systems, downtime or data loss is not just inconvenient.

It can have legal, financial and safety implications. Proven practices around redundancy, backup and data integrity still apply.

What no longer holds is the idea that storage can be treated ‘on-the-go’ or as an afterthought. In the age of AI, data storage and especially HDDs are an active part of the innovation pipeline.

Data must be accessible and scalable across edge, core and cloud environments.

Architectures must be designed for AI and video workloads, where data growth is exponential, and access patterns are constantly changing.

Another outdated assumption is that higher performance always equals better outcomes.

AI-driven environments often require tiered architectures, where high-performance storage is used selectively for specific workloads, whilst high-capacity HDDs handle the bulk of data efficiently.

Modern enterprise storage architectures should be workload specific.
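The tiered approach described above can be sketched as a simple placement policy: route frequently accessed ("hot") data to high-performance flash and the bulk of the data to high-capacity HDDs. The cost figures and access threshold below are purely illustrative assumptions for the sketch, not WD product data:

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    size_tb: float
    accesses_per_day: float  # how "hot" the data is


# Hypothetical per-terabyte monthly costs, for illustration only.
TIERS = {
    "nvme": {"cost_per_tb": 80.0},  # high-performance flash tier
    "hdd": {"cost_per_tb": 12.0},   # high-capacity disk tier
}


def place(ds: DataSet, hot_threshold: float = 100.0) -> str:
    """Place frequently accessed data on flash, bulk data on HDD."""
    return "nvme" if ds.accesses_per_day >= hot_threshold else "hdd"


def monthly_cost(datasets: list[DataSet]) -> float:
    """Sum a simple capacity-based cost across the chosen tiers."""
    return sum(TIERS[place(ds)]["cost_per_tb"] * ds.size_tb for ds in datasets)
```

In this toy model, a 10 TB hot training feature set lands on flash while a 500 TB video archive lands on HDD, and most of the capacity cost is carried by the cheaper tier, which is the economic argument the tiered design rests on.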

How do you see AI-driven infrastructure requirements evolving as we move through 2026?

Throughout 2026, we expect AI-driven storage infrastructure will evolve to be more workload-specific and more strategic than before.

In the zettabyte age, data storage can no longer be a secondary layer.

Next-generation workloads place unprecedented pressure on storage systems to perform and scale sustainably.

Energy efficiency, density and total cost of ownership will become increasingly important, especially for organisations facing rising energy costs and environmental targets.

Across all sectors, infrastructures must ingest massive data streams, store them efficiently for years and support increasingly sophisticated AI analytics.

HDDs and storage platforms will continue to play a central role, complemented by compute acceleration where needed.

The winners of the AI race will be organisations that design their storage architectures not just for today’s AI models, but for the data growth and intelligence demands of the next decade.

This article was originally published in the February edition of Security Journal UK.
