As part of an online miniseries, Ben McCarthy, Lead Cyber Security Engineer, and Sam Maesschalck, Lead OT Cyber Security Engineer, both at Immersive, discuss their industry predictions for 2026.
McCarthy: I have a decade of experience in low-level reverse engineering and a passion for assembly language. I began my career analysing malware before moving into exploit development.
I’m currently the Lead Cyber Security Engineer at Immersive, specialising in reverse engineering and exploit development.
I’ve been with Immersive since 2018, during which time I’ve uncovered multiple CVEs and trained teams in the UK and US in malware analysis and rapid response.
My interests have now progressed into AI research, benchmarking new agents and finding ways to ensure our AI systems are protected.
Maesschalck: I am the Lead OT Cyber Security Engineer at Immersive, leading the development of technical, hands-on and strategic training and exercises in the field of OT cybersecurity.
I have been with Immersive since January 2025, following roles in the space sector and in academia.
My role involves developing, leading and strategising activities that help teams identify, understand and mitigate cyber threats and challenges across the OT landscape.
I am also involved in research in the areas of cybersecurity, industrial control systems and cyberspace, with over a dozen peer-reviewed papers.
McCarthy: In 2026, the way cybercriminals conduct extortion is expected to change.
Instead of simply threatening to release data, they may threaten to sell it to AI companies desperate for new training material.
The baseline effectiveness of script kiddies will rise slightly as AI improves.
New AI security-researcher agents can uncover vulnerabilities in open-source software, potentially giving novice script kiddies usable exploits they do not fully understand.
However, threat-actor sophistication also relies on stealth, which AI cannot replicate. Operational security and evading detection after an attack are often the most challenging aspects for attackers, and AI will not help with them.
We are also likely to see a rise in LLM-assisted malware capable of calling AI APIs for new code and adapting in real time to its environment.
While mass “spray and pray” attacks will persist, targeted attacks will remain the more profitable route, for the reasons above.
Maesschalck: In 2026, we’ll see networks designed with IT/OT convergence and security in mind, yet many organisations will still need to secure and support older operational networks with legacy systems.
This coexistence will introduce fresh challenges as vendors accelerate their pursuit of “smarter” and AI-enabled industrial capabilities, even though most operators will not be in a position to adopt such features at scale.
The result is a widening gap between innovation cycles and operational reality, creating new considerations around safety, system integrity and long-term maintainability.
The companies that successfully manage these transitions next year will rely on rigorous change management, extensive testing and intelligent use of planned maintenance windows to control risk while modernising their environments.
2026 will also bring a greater focus on OT- and CNI-specific regulations that recognise the interdependence of IT and OT and the distinct requirements of industrial operations.
This shift is being driven by persistent targeting of OT environments, along with the significant impact of both cyber and non-cyber disruptions affecting CNI worldwide.
Regulations such as NIS2, the UK’s Cyber Security and Resilience Bill and evolving frameworks like ISA/IEC 62443 and NIST 800-82 will push the industry toward more consistent and tailored resilience standards.
Alongside these trends, a broader international security dimension will take shape. OT threats increasingly involve a blend of state actors, criminal groups, hacktivists and proxy collectives operating across borders.
Incidents in one region can influence threat behaviour elsewhere, and the introduction of AI-enabled vendor capabilities will add supply-chain dependencies and data-handling questions that span multiple jurisdictions.
This will push organisations to adopt a more globally aware posture, participate in multinational threat-sharing communities and consider how international regulatory developments and geopolitical tensions shape their exposure.
Those who approach OT security with this outward-looking perspective will be far better placed to anticipate and withstand disruption in an increasingly interconnected landscape.
McCarthy: Security leaders shouldn’t fear AI but should equip their teams to use it responsibly. Adoption must be careful, accountable and people-centric to ensure efficiency doesn’t undermine security.
Begin with a strong AI usage policy developed by legal, technical, security and compliance experts. This should include clear data-privacy and security rules and compliance with regulations such as GDPR.
Reinforce this with a defence-in-depth strategy. Following guidance from bodies like the NCSC helps ensure GenAI systems are built on a secure foundation.
A multi-layered approach reduces single points of failure, using controls such as DLP checks, strict input validation and context-aware filtering to block manipulation attempts.
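To make the layering concrete, here is a minimal sketch in Python of how such pre-prompt checks might be chained for an LLM-facing endpoint. The function names, patterns and thresholds are illustrative assumptions, not a production ruleset; real DLP and injection detection would use far richer signals.

```python
import re

# Illustrative secret patterns for a DLP layer (placeholder ruleset).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
]

# Illustrative prompt-injection phrases for a context-aware filter.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
]

def dlp_check(text: str) -> bool:
    """Return True if the text appears to contain a secret."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def injection_check(text: str) -> bool:
    """Return True if the text contains known injection phrasing."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def validate_input(text: str, max_len: int = 4000) -> tuple[bool, str]:
    """Run each layer in turn; any single failure blocks the request."""
    if len(text) > max_len:
        return False, "input too long"
    if dlp_check(text):
        return False, "possible secret detected"
    if injection_check(text):
        return False, "possible prompt injection"
    return True, "ok"
```

The point of the structure is that each layer is independent: a request slipping past the injection filter can still be stopped by the DLP check, so no single control is a point of failure.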
Finally, prepare for failures. Fail-safe mechanisms, automated shutdown procedures, regular backups and configuration checks help limit damage and enable rapid recovery if AI systems malfunction.
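One common way to implement an automated shutdown of a misbehaving component is a circuit breaker. The sketch below is a simplified, assumed design (class name and thresholds are hypothetical): after a run of consecutive failures, calls to the AI component are refused for a cooldown period rather than allowed to keep failing.

```python
import time

class CircuitBreaker:
    """Fail-safe wrapper: after `max_failures` consecutive errors,
    calls are refused for `cooldown` seconds so a malfunctioning
    component cannot keep acting. Thresholds are illustrative."""

    def __init__(self, max_failures: int = 3, cooldown: float = 60.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: component disabled")
            # Cooldown elapsed: permit a single trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

In practice the same pattern pairs naturally with the backups and configuration checks mentioned above: the breaker limits the blast radius, and the backups enable recovery once the fault is understood.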
Maesschalck: As organisations adopt IT technologies and strategies such as AI, zero trust and secure remote access tools to enhance ICS security next year, they will need to ensure that the basics, such as asset management and segmentation, are fully addressed. Otherwise they risk introducing new issues alongside new technologies.
They will need to prioritise hands-on training, scenario-based exercises and cross-discipline capability building between IT and OT teams.
Those that mature fastest will be the ones investing in continuous education, realistic OT lab environments and workforce development programmes, rather than relying solely on tools and external consultancies.
Alongside specialist training, greater focus should be placed on preparedness through OT-centric cyber ranges and cyber drills.
These exercises must involve operations, engineering, IT and executive leadership, helping organisations practise coordinated response, safe shutdown procedures and recovery.
As cyber risks continue to converge with process safety, drills will become as routine as traditional safety exercises, underscoring that OT security is a whole-organisation responsibility.
Organisations also need to account for the broader international security environment, where OT systems are now targeted by a mix of state actors, criminal groups, hacktivists and proxy collectives operating across borders.
Incidents in one region can influence risk elsewhere, making global situational awareness just as important as internal controls.
Participating in multinational information-sharing communities, tracking international regulatory trends and understanding how global tensions affect threat behaviour will help organisations prepare for disruptions that stem from an increasingly diverse and internationally connected adversary ecosystem.