Michael Vallas, Global Technical Principal at Goldilock Secure, highlights why modern cyber-resilience relies not just on detecting attacks, but on decisive containment to limit impact.
Organisations have long depended on software-based cybersecurity as their first and strongest line of protection.
Security practices like firewalls, network segmentation, endpoint controls and detection platforms form a familiar foundation for defending systems, data and critical business operations.
Those measures are still vital, but on their own they are no longer keeping pace with threats shaped by a new level of automation and sophistication.
AI-enabled attacks are increasing both the volume and adaptability of malware; by the end of last year, AI-assisted techniques were expected to account for 50% of threats.
This shift exposes a growing imbalance between threats and cybersecurity.
Many defensive tools still operate within the same digital terrain attackers are exploiting (namely, software), meaning they can be probed or bypassed by adversaries, who have time, intent and automation on their side.
When incidents occur, the question is no longer only whether threats can be detected, but whether they can be contained before disruption and subsequent consequences spread across the business.
Over time, networks gain value as they become ever more integrated with internal and external services: linked to cloud platforms, third-party providers, remote-work tools, customer-facing applications and supply-chain partners.
This ever-increasing connectivity drives efficiency and innovation, but it also inevitably expands the attack surface.
Every new tool, user identity, API connection and remote access route increases network complexity and introduces potential weak points.
Even strong cyber-hygiene cannot eliminate risk entirely because environments are constantly changing and being extended.
In this context, resilience depends on more than keeping attackers out; it also depends on limiting what happens if, where and when they get in.
Meanwhile, AI-driven threats magnify the core challenge of modern security: speed. Attackers can automate reconnaissance, test defences at scale and shift tactics quickly.
Many security tools are designed to detect known patterns and anomalies, but AI-enabled attacks can vary their behaviour and exploit legitimate tools to advance an intrusion while remaining inconspicuous.
Even where threat detection is strong, there is often a gap between seeing an incident and containing it.
Alerts may signal that something is wrong, but they don’t automatically stop the spread.
By the time defenders confirm what is happening, attackers may already have moved laterally or subverted recovery systems.
This is why containment and compartmentalisation on demand are becoming central to cyber-resilience.
Active software controls remain essential, but they are not always decisive once attackers have penetrated the environment. Worse, adversaries with sufficient access typically aim to disable monitoring agents, interfere with logging and manipulate the very administrative tools defenders rely on.
Containment-first thinking shifts the security conversation from simply stopping every threat to also preventing any intrusion from becoming a widespread business crisis.
Whether containing an active threat or isolating high-value assets, this approach prioritises architectural control of attack surfaces regardless of software vulnerabilities.
One effective way to strengthen containment is the ability to isolate systems at critical moments and ensure controls are enforced in full. This is not so much 'pulling the plug' as a more sophisticated approach: connectivity is proactively defined only where and when it is required, with the ability to enforce a decisive break under elevated risk.
Physical connectivity isolation works because it limits the assumptions attackers rely on.
If defenders can separate a high-value system or network segment instantly, lateral movement becomes harder and the blast radius shrinks.
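The 'connectivity only where and when required, plus a decisive break' idea described above amounts to a default-deny allowlist with an isolation override. A minimal sketch of that logic, purely illustrative (the class and method names here are hypothetical, not any vendor's product or API):

```python
from dataclasses import dataclass, field

@dataclass
class ConnectivityPolicy:
    """Default-deny: a link exists only while it is explicitly allowed."""
    allowed_links: set = field(default_factory=set)  # (src, dst) pairs
    isolated: bool = False  # the 'decisive break' flag

    def allow(self, src: str, dst: str) -> None:
        """Grant a connection only where and when it is required."""
        self.allowed_links.add((src, dst))

    def revoke(self, src: str, dst: str) -> None:
        """Withdraw a connection once it is no longer needed."""
        self.allowed_links.discard((src, dst))

    def isolate(self) -> None:
        """Decisive break under elevated risk: sever every link at once,
        shrinking the blast radius available to an intruder."""
        self.isolated = True
        self.allowed_links.clear()

    def permits(self, src: str, dst: str) -> bool:
        return not self.isolated and (src, dst) in self.allowed_links


policy = ConnectivityPolicy()
policy.allow("app-server", "backup-vault")
assert policy.permits("app-server", "backup-vault")   # link granted on demand

policy.isolate()  # active incident: cut everything immediately
assert not policy.permits("app-server", "backup-vault")
```

The design choice mirrors the article's argument: because the default state is disconnected, an attacker cannot rely on persistent paths existing, and a single `isolate()` call removes lateral-movement options without depending on per-host software agents surviving the intrusion.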
Ultimately, this reduces consequential cyber-loss by containing contagion and safeguarding restoration, ensuring backups and disaster-recovery environments remain valid.
Ransomware operators usually target backups early because damaging recovery options increases leverage.
Keeping recovery environments out of reach until needed preserves restoration capability and substantially reduces downtime and revenue impact.
Recent incidents have illustrated a clear lesson: organisations that act decisively to reduce connectivity during an attack can limit spread, preserve recovery options and shorten disruption.
The stakes are increasingly financial as much as technical.
Insurers and business leaders now scrutinise how quickly organisations can contain incidents and protect backup and recovery environments from compromise.
When the blast radius isn’t controlled, a single intrusion can escalate into a multi-million-pound event.
The recent UK Jaguar Land Rover incident serves as a stark reminder, reportedly the most expensive cyber-incident in European history. Production was forced to pause for weeks, and the most punishing impact came from operational downtime and lost output: exactly the type of scenario underwriters now fear.
The recent attack on the UK retailer Co-op shows the other side of the equation.
The attackers later claimed the retailer’s IT team effectively ‘pulled the plug’, taking key systems offline mid-incident and preventing ransomware from deploying at scale.
Many security commentators have pointed to this as a pragmatic trade-off – accepting short-term disruption to avoid far deeper ongoing damage.
It also helps explain why Co-op appeared to recover faster than M&S, which faced weeks of disrupted online ordering and substantial losses.
Cyber-resilience is moving beyond tool stacks and perimeter thinking.
For years, resilience has been pursued by building higher digital walls and stacking more tools.
But as environments become more interconnected and threats more adaptive, software-based tools alone cannot remove the structural exposure created by ubiquitous connectivity.
A more durable approach is architectural. Rather than treating connectivity as a permanent default, organisations are redesigning systems so exposure becomes intentional, limited and governed by risk awareness.
Monitoring and response remain critical, but they are strengthened by mechanisms that preserve operational control even when incidents occur.
Threats will continue to evolve, grow in automation and become more capable of exploiting human behaviour and system complexity as much as nuanced technical flaws.
In such an environment, expecting software defences alone to stay ahead of threats is increasingly unrealistic.
The future of cyber-resilience will be defined by a blend of strong detection and decisive containment.
Organisations that can control exposure and reduce connectivity at the right time and place will be best positioned to withstand modern attacks.
Containment-first design does not replace software defences – it bolsters them, becoming a foundational layer of security that both dramatically reduces an attack surface and ensures that even when threats do break through, they do not automatically become business crises.
This article was originally published in the February edition of Security Journal UK.