In this SJUK exclusive, Sunil Agrawal, CISO at Glean, discusses why agents remain the missing layer in the UK’s AI security strategy.
The UK is taking a welcome leadership position on secure AI infrastructure.
The joint DSIT–NCSC call for information on secure AI infrastructure focuses on protecting model weights, high‑performance compute and the hardware stack that underpins frontier AI development.
That work is essential. But on its own, it will not stop the next generation of AI‑driven incidents.
By concentrating primarily on chips, clusters and model weights, current policy risks overlooking where AI systems actually create business value and real‑world harm: the agent layer.
The layer where AI accesses data and takes action on behalf of people and institutions.
This is where AI leaves the realm of research and starts to actually do things.
Anthropic showed that when agents can code, bad actors can trick them into believing they’re doing legitimate work, turning a chain of seemingly harmless steps into a serious security threat.
OpenClaw, a highly unpredictable agent network with broad permissions and access, illustrated the same danger – it could take real, uncontrolled actions without guardrails and was vulnerable to malware and data exfiltration.
To put numbers on the problem: Gartner predicts that by the end of 2026, 40% of enterprise applications will feature task-specific AI agents, up from less than 5% today. And that is just the enterprise side, before the consumer flood that is coming too.
For the UK to remain a trusted location for AI development and deployment, the agent layer must be protected as it becomes the primary way most people consume AI.
This layer is far more exposed to misuse than data centres and networks, which are operated by a small number of highly trained professionals.
We’re about to see a surge of agents entering the market, and CIOs and CISOs are largely underprepared for what’s coming.
Recent research, BigID’s ‘AI Risk & Readiness in the Enterprise: 2025 Report’, highlights the scale of the problem.
This is the capability–governance gap: AI agents are moving from experiments to production faster than security controls are being adapted to govern them.
If we extrapolate this trend into late 2026, it is easy to imagine a typical UK enterprise where task-specific agents are embedded across much of the application estate.
Yet those same organisations have no runtime monitoring for agent behaviour, no guardrails on what data agents can access and no ability to reconstruct what an agent actually did during a suspicious session.
This is not a theoretical concern.
These gaps are exactly what enable: unintentional data overwrites or corruption when guardrails are weak; malicious or unsafe actions executed through code without properly scoped tools; trade-secret exfiltration via prompt injection; and unbounded exploration, where agents operate outside their intended action space.
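The missing controls above can be pictured as a thin wrapper around every tool an agent can call: it enforces a scoped allow-list and writes an audit record for each action, so a suspicious session can be reconstructed afterwards. A minimal sketch in Python, where the tool names and scopes are hypothetical, not drawn from any particular product:

```python
import time
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ScopedToolbox:
    """Wraps an agent's tools with an allow-list and an audit trail."""
    allowed_tools: set
    audit_log: list = field(default_factory=list)
    _tools: dict = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs: Any) -> Any:
        record = {"ts": time.time(), "tool": name, "args": kwargs}
        if name not in self.allowed_tools:
            record["outcome"] = "blocked"  # action outside the agent's scope
            self.audit_log.append(record)
            raise PermissionError(f"tool '{name}' is outside this agent's scope")
        result = self._tools[name](**kwargs)
        record["outcome"] = "allowed"      # every action is logged either way
        self.audit_log.append(record)
        return result

# Usage: an agent scoped to read-only access cannot delete, and the
# attempt is still recorded for later reconstruction.
box = ScopedToolbox(allowed_tools={"read_document"})
box.register("read_document", lambda doc_id: f"contents of {doc_id}")
box.register("delete_document", lambda doc_id: None)

box.call("read_document", doc_id="policy-42")
try:
    box.call("delete_document", doc_id="policy-42")
except PermissionError as exc:
    print(exc)
```

The design choice here is that blocked calls are logged before the exception is raised, so the audit trail captures what the agent tried to do, not just what it succeeded in doing.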
Enterprises now have to design for these risks and strike the right balance: too much control slows progress, while too little creates serious security exposure.
What they need are clear frameworks to assess and design agent security, along with a practical checklist for evaluating AI vendors.
The UK government should adopt and adapt the same controls, creating a shared foundation that strengthens both enterprise and public-sector AI systems.
To help inform the UK’s secure AI strategy and give enterprises a practical framework for protecting agents, we’re introducing AWARE.
It offers concrete guidance on how to understand the problem space and how to build viable, well-designed solutions.
When securing agents, you can’t focus only on the actions they can take.
Agents are fundamentally different from traditional software as they reason, plan and behave more like humans.
They cannot be given open-ended control; they need clear scopes and boundaries to operate safely.
At the same time, not everything about agents is new.
We can reuse proven practices from software security including observability, risk scoring and other operational controls.
The good news is that we now have something previous eras of software never had – the ability to use agents themselves to help secure other agents.
That gives us far more tools, beyond traditional controls, to make agent-based systems both safe and effective.
With that context in mind, here is the AWARE framework and how it can be applied in the UK public sector, using illustrative examples:
A – Actor intent
W – Work context
A – Autonomous guardrails
R – Real‑time risk scoring and blocking
E – Ecosystem and observability
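The “R” in AWARE can be made concrete with a small runtime gate: each proposed agent action is scored against a handful of risk signals and blocked when the combined score crosses a threshold. The signals, weights and threshold below are illustrative assumptions, not a prescribed scheme; real deployments would tune them from observed agent behaviour:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str
    touches_sensitive_data: bool
    is_irreversible: bool
    outside_usual_hours: bool

# Illustrative weights and threshold (assumptions for this sketch).
RISK_WEIGHTS = {
    "touches_sensitive_data": 0.5,
    "is_irreversible": 0.4,
    "outside_usual_hours": 0.2,
}
BLOCK_THRESHOLD = 0.7

def risk_score(action: ProposedAction) -> float:
    """Sum the weights of every risk signal the action triggers."""
    return sum(
        weight
        for signal, weight in RISK_WEIGHTS.items()
        if getattr(action, signal)
    )

def gate(action: ProposedAction) -> str:
    """Decide in real time whether an action may proceed."""
    return "block" if risk_score(action) >= BLOCK_THRESHOLD else "allow"

# An irreversible bulk export of sensitive data scores 0.9 and is blocked;
# a routine read during working hours scores 0.0 and is allowed.
export = ProposedAction("export_records", True, True, False)
lookup = ProposedAction("read_record", False, False, False)
print(gate(export), gate(lookup))
```

Because the gate sits between the agent’s plan and its execution, it blocks the risky action before it happens rather than flagging it after the fact, which is the property “real-time risk scoring and blocking” is meant to capture.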
The DSIT–NCSC call for information explicitly seeks “defence‑in‑depth protection… with resilience to novel and adaptive threats” and invites ideas on how solutions can be evaluated, compared and assured.
To meet that ambition, I believe the UK should extend its secure AI infrastructure work to cover the agent layer, adopting and adapting controls such as those in AWARE.
The UK has already signalled its intent to be a trusted home for frontier AI development and deployment. The next step is to ensure that trust extends all the way up the stack, to the agents that actually touch processes, people and critical services.
If we get this right, the UK will have AI systems that can be trusted to act autonomously in the national interest.