
What CISOs Really Think about AI, Ransomware 3.0, and the New Rules of Cyber Risk
Ransomware hasn’t faded into the background; it has evolved into Ransomware 3.0. AI-powered tools now help attackers automate phishing, speed up credential theft, and pivot across endpoints at a scale that was unthinkable a few years ago. For security leaders, this isn’t just another spike in incidents – it’s a structural change in how security risks materialize across email, identity, and cloud workloads.
Threat actors are using machine learning to scan for weaknesses, generate highly convincing lures in any language, and chain intrusions together with very little human effort. The playbook is becoming faster, more adaptive, and more personalized to each target. To many CISOs, the result feels uncomfortably familiar – the same ransomware story but supercharged by AI and harder to spot in the noise of daily operations.
In this article, we unpack how ransomware is changing, how AI is reshaping both offense and defense, and what our latest CSR 2026 data reveals about real-world security concerns. We’ll look at where AI is already paying off for defenders, where it quietly increases exposure, and how insights from CISOs can help you build resilience instead of just reacting to the next headline breach.
The Ransomware Resurgence of 2025
After three consecutive years of decline, ransomware has returned to the forefront of cybersecurity concerns. Hornetsecurity data shows that in 2025, 24% of organizations reported being victims of a ransomware attack, up sharply from 18.6% in 2024. This reversal is a flashing red light in the post-pandemic threat landscape and a warning that attackers are evolving with increasing speed.
Despite years of awareness campaigns and training programs, ransomware remains a critical business risk precisely because it adapts to our defenses. Threat actors are now combining AI-enhanced automation with tried-and-true social engineering to achieve greater reach, precision, and persistence.
Automation, AI, and the New Ransomware Playbook
Attackers are increasingly leveraging generative AI and automation to identify vulnerabilities, craft more convincing phishing lures, and orchestrate multi-stage intrusions with minimal human oversight. This makes ransomware operations both more scalable and more personal.
Some key data points:
- 61% of CISOs believe that AI has directly increased the risk of ransomware attacks.
- 77% identify AI-generated phishing as an emerging and serious threat.
- 68% are now investing in AI-powered detection and protection capabilities.
The result is an arms race in which both sides use machine learning: one side to deceive, the other to defend.
Entry Points: Phishing Loses Ground, Endpoints Rise
While phishing remains the leading infection vector, cited by 46% of those surveyed, its dominance is slipping. Attackers are diversifying:
| Vector | 2024 | 2025 | Δ |
|---|---|---|---|
| Phishing / Email-based | 52.3% | 46% | –6.3 pp |
| Compromised Credentials | ~20% | ~25% | +5 pp |
| Exploited Vulnerabilities | – | 12% | n/a |
| Endpoint Compromise | – | 26% | n/a |
The data shows a clear pivot toward credential theft and endpoint compromise, particularly in hybrid and remote work environments where BYOD and patch gaps remain widespread. Ransomware is no longer just an email problem; it’s an ecosystem problem.
Training Fatigue and the “False Compliance” Trap
Organizations are still investing heavily in awareness training: 74% offer it, but 42% of those feel it’s inadequate.
Many programs remain checkbox exercises: annual, unengaging, and quickly forgotten. The result is what Hornetsecurity terms “false compliance”: the illusion of preparedness without meaningful behavioral change.
Small and mid-sized businesses (SMBs) are hit hardest. Many operate with minimal IT staffing and outdated infrastructure, relying on outsourced providers or unpatched cloud tenants. While more SMBs report having a DR plan, readiness on paper doesn’t always translate into resilience in practice.
Recovery and Resilience: The Silver Lining
That said, even as attacks increase, recovery capabilities are quietly improving:
- 62% of organizations now use immutable backup technologies. These are systems where data cannot be altered, encrypted, or deleted once written, not even by administrators or by a compromised admin account during an attack.
- 82% have implemented a Disaster Recovery Plan, which is quickly becoming the new baseline for operational resilience.
- In further good news, only 13% of victims paid the ransom in 2025, down from 16.3% in 2024.
The message is clear: organizations are learning to recover without negotiating.
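The immutability guarantee behind those backup numbers can be illustrated with a toy write-once store. The sketch below is purely conceptual and shows the WORM (write-once, read-many) principle; the class and method names are illustrative and do not reflect any vendor’s implementation.

```python
import hashlib
import time

class WormBackupStore:
    """Toy write-once-read-many store: objects, once written, cannot be
    overwritten, and cannot be deleted until their retention period expires."""

    def __init__(self, retention_seconds: int):
        self.retention_seconds = retention_seconds
        self._objects = {}  # key -> (data, checksum, written_at)

    def write(self, key: str, data: bytes) -> str:
        if key in self._objects:
            # Even an administrator (or ransomware running with admin
            # rights) cannot replace an existing backup object.
            raise PermissionError(f"{key!r} is immutable and already exists")
        checksum = hashlib.sha256(data).hexdigest()
        self._objects[key] = (data, checksum, time.time())
        return checksum

    def read(self, key: str) -> bytes:
        data, checksum, _ = self._objects[key]
        # Verify integrity on every read to detect tampering.
        if hashlib.sha256(data).hexdigest() != checksum:
            raise ValueError(f"{key!r} failed integrity check")
        return data

    def delete(self, key: str) -> None:
        _, _, written_at = self._objects[key]
        if time.time() - written_at < self.retention_seconds:
            raise PermissionError(f"{key!r} is under retention lock")
        del self._objects[key]

store = WormBackupStore(retention_seconds=7 * 24 * 3600)
store.write("backup/2025-11-01", b"mailbox snapshot")
try:
    store.write("backup/2025-11-01", b"encrypted by ransomware")
except PermissionError as e:
    print("blocked:", e)
```

Production systems enforce the same rule at the storage layer (for example, object-lock retention modes), so that no credential captured during an intrusion can unlock it.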
Insurance, however, tells a different story. Ransomware insurance coverage dropped from 54.6% in 2024 to 46% this year, as premiums and exclusions rose and confidence in payouts declined. This market correction suggests that organizations can no longer outsource risk. They must architect security into their systems and build resilience into their culture.
Governance: Strategy Still Lags Behind Threat Reality
Cybersecurity is now a board-level concern, but many organizations are still catching up to the operational demands of ransomware-era governance. Few boards run cyber crisis simulations, and cross-functional playbooks remain the exception rather than the rule.
As AI-driven misinformation and deepfake extortion become more plausible, communication readiness must now be treated as part of cybersecurity rather than a PR afterthought.
Outlook: Resilience Is Rising, But So Are the Threats
The 2025 data paints a nuanced picture: ransomware attacks are increasing, but so is our capacity to recover. The organizations that will weather this new wave are those that treat resilience as strategy, not compliance.
Immutable backups, well-tested recovery plans, and meaningful user training are no longer optional; they’re the minimum viable defense.
Attackers don’t stand still, and neither can defenders. The challenge for 2026 won’t be preventing ransomware altogether; it will be making sure that when it hits, business continuity doesn’t fail.
CISO Perspectives: Balancing AI Promise and Peril
Artificial Intelligence is reshaping cybersecurity – not just as a defensive tool, but as a strategic question. Hornetsecurity’s 2025 CISO Insights Poll set out to capture how real-world security leaders are approaching AI: where it’s working, where it’s risky, and what challenges stand in the way of responsible adoption.
The findings reveal a complex picture. CISOs are enthusiastic, cautious, and in many cases still experimenting. AI is everywhere, but trust, governance, and understanding have not yet caught up.
Adoption: Rapid Growth, Uneven Governance
Most CISOs surveyed report significant experimentation with AI, but structured adoption remains rare. Some organizations are integrating AI into workflows such as triage, enrichment, and ticket management, while others restrict its use entirely.
A CISO from a global finance firm noted,
We’re seeing adoption as high as 75%+ within our organization over the last two years.
In contrast, a virtual CISO remarked,
Two years ago it was open bar on all AI services. This past year, we’ve started putting in more processes and internal LLMs.
This variability highlights the core challenge: AI adoption is moving faster than AI governance, just as with previous waves of innovation in the tech space. Many leaders have begun to centralize control and develop internal tools, but others remain in a reactive posture, chasing compliance rather than leading innovation.
Shadow IT, once a known irritant, has been redefined by AI into Shadow AI. Unapproved tools, browser extensions, and SaaS integrations are creating new, opaque risks. As one CISO summarized,
AI safety concerns have amplified the dangers of shadow IT.
End-User Awareness: The New Human Risk Factor
If a company is only as strong as its least prepared employee, AI has lowered that bar. CISOs broadly agree that end-user awareness of AI risk is dangerously low. While a few organizations boast strong compliance cultures, with some scoring themselves “5 out of 5,” most CISOs estimate awareness levels closer to “1 or 2 out of 5.”
The primary issue? Employees enthusiastically using public AI tools without realizing the security or compliance implications. As one virtual CISO put it,
People haven’t understood the stakes, especially when they share company information in a public AI.
The consensus: security awareness efforts in-house haven’t evolved at the same pace as AI adoption. Focused, scenario-based education is now as important as firewalls and filters.
Leadership Understanding: The Awareness Gap at the Top
CISOs also highlight a wide disparity in leadership understanding of AI-related risks. Our polling revealed the broadest spread of responses across this question, ranging from “deep awareness” to “no real understanding.” The median answer was a lukewarm “leadership somewhat knows the risks.” It’s clear that progress is inconsistent and varies widely from business to business.
Some organizations are moving forward collaboratively. A German tech-sector CISO credited joint Legal and Security initiatives for progress:
Management is beginning to understand the issues related to AI security.
Others, however, report the opposite.
Management sees the productivity gains but not the risks,
one virtual CISO said.
This uneven awareness leaves CISOs with dual responsibility: defending against external threats while educating leadership internally.
Emerging Threats: Deepfakes, Model Poisoning, and Data Leaks
Nearly all CISOs surveyed agree that AI misuse will be a major source of cyber risk over the next 12 months.
The most pressing concerns include:
- Synthetic identity fraud using AI-generated documents or credentials.
- Voice cloning and deepfake videos used for impersonation and fraud.
- Model poisoning, where malicious data corrupts internal AI systems.
- Sensitive data leakage through employee misuse of public AI tools.
One CISO warned,
We’re most concerned about model poisoning attacks as we run our own models in-house.
Another noted that
The number one risk of AI is the voluntary leak of company data into public systems.
AI has become both a tool and a target, and the attack surface is clearly expanding faster than many realize.
Security Team Adoption: Careful, Controlled, and Tactical
Within security operations, AI adoption is measured but growing. CISOs describe limited deployments focused on specific, low-risk tasks, such as classifying tickets or enriching threat data. One finance-sector CISO shared a practical success story:
AI turned out great for customer-facing ticket notes. They’re concise and bias-free.
This “cautious optimism” is characteristic of 2025. Security teams are embracing automation but remain wary of overreliance on opaque systems or immature models.
Challenges in Implementation: The Practical Barriers
The path to responsible AI adoption is far from smooth. Our CISO poll found that the top barriers include:
- Uncertainty around AI risks and potential misuse
- Compliance and legal constraints
- Budget justification and ROI demonstration
- Integration challenges with legacy tools
- Talent shortages in AI and data science
- Leadership buy-in
As one CISO summarized,
We still lack skills and specialized experts in AI.
Another added,
Detecting a port scan by reading ten lines of logs doesn’t bring much value.
Despite the hurdles, CISOs remain pragmatic: AI isn’t hype; it’s an inevitable evolution. But adoption will remain case-by-case until transparency, skills, and governance catch up with ambition.
From Curiosity to Capability
AI in cybersecurity is no longer experimental, but neither is it fully mature. Across industries, the focus is shifting from “What can AI do?” to “How do we govern it?”
The coming year will define whether security teams can transform AI from a risk into a reliable ally.
Turn security risks into a resilience advantage
Hornetsecurity’s 365 Total Backup and VM Backup give you immutable protection for Microsoft 365, virtual machines, and other critical workloads – even when AI‑enhanced ransomware and broader AI security risks try to encrypt or corrupt your environment. Instead of gambling on decryption keys, you get fast, predictable recovery that keeps your business moving.

When your data is safe, your business stays online. With one platform covering Microsoft 365 mailboxes and virtual infrastructures, you benefit from a comprehensive backup and recovery solution that’s repeatedly proven to be one of the easiest to use, most robust, and most cost-effective VM backup options on the market. Schedule a demo to see how quickly you can harden your recovery posture.

Conclusion on CISO concerns: New rules for security risks
Today’s ransomware ecosystem reveals a hard truth: attackers innovate continuously, while defenders often struggle to keep pace.
AI-enhanced intrusions, expanding attack surfaces, underperforming training programs, and leadership awareness gaps are reshaping what cyber risk looks like. Yet there’s also good news. Organizations are getting better at recovering, not panicking.
Immutable backups, tested recovery procedures, and more mature resilience strategies are helping businesses survive attacks without paying ransoms.
The new mission is clear: treat resilience as a strategic pillar, not a compliance checkbox. The organizations that thrive will be the ones prepared to operate even when an attack breaks through.
FAQ
Why is ransomware surging again in 2025?
Attackers are using automation and AI to scale operations, craft better phishing lures, and identify weak points faster. Combined with growing reliance on cloud services and remote endpoints, this creates more opportunities for compromise.
Why do security awareness training programs fall short?
Many programs are outdated, static, infrequent, and not designed to handle AI-generated phishing or modern social engineering. CISOs report that employees often feel overwhelmed or simply forget what they learned, creating a false sense of readiness.
Which AI-driven threats worry security leaders most?
Security leaders consistently point to deepfake-enabled fraud, synthetic identities, model poisoning of internal systems, and sensitive data leaking into public tools as the most worrying AI security risks. These threats blur the line between technical compromise and manipulation of people and processes, expanding the attack surface far beyond traditional phishing and malware.
