A new report from Netwrix has found that one in three organisations globally has adapted its security architecture to address AI-driven threats.
Netwrix, a cybersecurity provider focused on data and identity threats, has released its annual 2025 Cybersecurity Trends Report, based on a survey of 2,150 IT and security professionals across 121 countries.
According to the report, the top security challenges introduced by AI are new threats, a new attack surface and new compliance requirements.
Additionally, 60% of organisations are already using AI in their IT infrastructure and a further 30% are considering implementing it.
The research shows that AI has already affected organisations’ security posture: 37% of respondents said that new AI-driven threats forced them to adjust their security approach, 30% reported the emergence of a new attack surface due to the use of AI by their business users, and 29% struggle with compliance because auditors require proof of data security and privacy in AI-based systems.
In this year’s survey, Netwrix investigated incidents that demanded a dedicated response from security teams, rather than those that were automatically detected and remediated.
Based on this definition, 51% of respondents confirmed experiencing a security incident in the past 12 months.
The number of organisations reporting no impact from security incidents declined from 45% in 2023 to 36% in 2025.
75% of respondents reported financial damage due to attacks — a considerable increase from 60% in 2024.
The number of organisations estimating their damage at $200,000 or more nearly doubled, from 7% to 13%.
Jeff Warren, Chief Product Officer at Netwrix, commented: “Today’s AI-driven business processes are vulnerable to a host of new threats that security teams must be prepared for.”
Warren continued: “The data shows a rise in security incidents that are identity-driven and infrastructure-focused. Indeed, identity-driven attacks are likely to dominate even more, with crafty new ways to bypass MFA, abuse of machine-to-machine identities such as service accounts and tokens, AI-powered deepfake voice and video phishing, and even synthetic identity creation at scale.”
Dirk Schrader, VP of Security Research at Netwrix, said: “AI workloads trained on proprietary enterprise data represent intellectual property and are attractive targets for cybercriminals.”
Schrader continued: “It is important to secure data across the entire AI lifecycle, from ingestion to model training to monitoring API endpoints for any signs of prompt injection, abuse or model leakage.
“Finally, security teams should apply Zero Trust principles in the world of AI: assume every interaction with the AI system, internal or external, could be malicious and enforce strict authentication, least privilege access and continuous monitoring.”
Warren added: “Direct breach costs are well understood, but the subtler costs include intellectual property loss, product development delays and reputational damage, all of which are hard to quantify yet can be devastating, especially if innovation is essential to the business model.
“Breaches damage brand trust and customer churn often peaks when the time comes to renew the contract – well after the immediate crisis seems resolved.”