AI in Cybersecurity: Insights and Perspectives from CISOs

Written by Romain Basset / 09.07.2025

A closer look at real-world experiences, challenges, and opportunities

Evaluating new tools, emerging technologies, and novel use cases is part of a CISO’s DNA. Constantly balancing innovation with risk, CISOs and security teams must navigate a fast-changing landscape while protecting their organizations. 

To better understand how CISOs and IT leaders are truly experiencing and perceiving artificial intelligence (AI) in cybersecurity, we decided to turn the tables. Rather than “pushing our AI solutions” like many vendors do, we asked CISOs directly about their honest thoughts, real-world challenges, and actual successes with AI. 

We conducted in-depth interviews with three CISOs from diverse sectors and regions: a security leader at a 700-employee tech company in Germany; a CISO at a major finance and compliance firm in the United States; and a virtual CISO serving multiple SMBs and startups in France. 

In addition, we gathered feedback from a broader panel of CISOs worldwide to provide a comprehensive, global perspective on AI’s evolving role in information security.

This report distills their insights to offer practical takeaways for organizations looking to leverage AI effectively, grounded in the reality of those who live it every day. 

Current Usage of AI Among CISOs 

CISOs interviewed report a wide range of approaches to AI adoption within their organizations. While a few advocate for broad integration, the majority indicate that AI is currently limited to non-critical tasks or that its use is left up to end users. 

Some organizations are beginning to centralize governance, introducing policies and internal tools to regain control, while others maintain strict restrictions due to compliance or ethical concerns. 

As one CISO from a financial institution noted, “We’re seeing adoption as high as 75%+ within our org over the last 2 years.” In contrast, a virtual CISO shared, “Two years ago it was open bar on all AI services. This past year, we’ve started putting in more processes and guidelines, along with internal LLMs.”

Shadow IT has been and will continue to be a major concern for security teams moving forward. AI safety concerns, if anything, have amplified shadow IT dangers. 

In our discussions with interviewed CISOs, the evolving and often fragmented nature of AI adoption across business units calls for a concerted effort to standardize usage and manage risk alongside adoption, just as with any new product or service.

End-User Awareness of AI Risks 

Some say that a business is only as secure as its least-prepared end user, and the CISOs we polled and interviewed unanimously agreed on the importance of improving end-user awareness of AI-related risks.

While some organizations have already built a strong culture around compliance and education, these are the exception rather than the rule. As one CISO explained, “When it comes to AI risk awareness, we are a 5 out of 5. Because we are a compliance and finance company.” 

However, most CISOs report much lower levels of awareness, typically just 1 or 2 out of 5 on the risk scale. Many express concerns about employees using AI tools without fully understanding the security or compliance implications. 

As a virtual CISO put it, “It’s progressing, but I don’t think people have understood the stakes, especially when they share company information in a public AI.” 

Focused, practical education efforts are needed to help end users adopt AI responsibly and securely. Given the relative youth of AI-enabled products and the speed at which AI is evolving, this need will only grow.

Leadership’s Understanding of AI Risks 

All new technologies carry risks, and AI is no exception. The industry has seen threat actors leverage AI to great effect in targeted attacks.

Those aren’t the only clear risks, though. CISOs report a wide disparity in how well company leadership understands these AI-related risks. This was reflected in our poll results as well, which showed the widest spread of responses across all questions.

While a narrow majority selected “Leadership somewhat knows about the risks”, many others indicated either strong awareness or significant gaps. This confirms that there’s no consistent baseline across organizations. 

In some cases, awareness is clearly progressing. As one CISO from a tech company shared, “Thanks to the joint efforts of the Legal and Security teams, our Management is beginning to understand the issues related to AI security.” 

But in many organizations, leadership is still primarily focused on the upside of AI, often overlooking the risks. A virtual CISO noted, “Management sees the productivity gains related to AI but doesn’t necessarily see the associated risks. It’s the CISO’s role to raise their awareness.” 

The landscape is clearly uneven across organizations: the level of leadership engagement strongly influences how AI is governed and adopted, and CISOs often carry the responsibility of closing that gap.

Emerging AI-Driven Threats 

CISOs are increasingly focused on AI misuse as a source of emerging cyber threats in the next 12 months. While multiple risks are gaining attention, our poll revealed synthetic identity fraud, using AI-generated documents or credentials, as the top concern among respondents. 

This was followed by voice-cloning fakes and video “deepfakes”, both seen as high-risk tools for impersonation and fraud, especially in the finance sector. 

Other threats raised included data poisoning attacks (where adversarial inputs or malicious access corrupt AI models to skew their outputs) and fraudulent job applications.

One CISO from a tech company flagged the particular risk of model poisoning, especially in environments where code is developed internally: “We’re most concerned about model poisoning attacks as we run our own models in-house.” 

For example, a successful poisoning attack could potentially introduce backdoors or malicious code fragments into an organization’s software supply chain. This creates a clear risk for both the company and their customers. 
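To make the mechanics concrete, here is a deliberately minimal sketch of label-flipping data poisoning; it is our own illustration, not drawn from any interviewed organization. A toy nearest-centroid classifier stands in for a real model, and all data points are invented:

```python
# Toy illustration of label-flipping data poisoning. A nearest-centroid
# classifier stands in for a real model; the data is synthetic.

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(samples):
    """samples: list of ((x, y), label) -> dict of per-class centroids."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the class whose centroid is closest to the point."""
    dist2 = lambda c: (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Clean training set: "benign" samples near (0, 0), "malicious" near (10, 10).
clean = [((0, 0), "benign"), ((1, 0), "benign"), ((0, 1), "benign"),
         ((10, 10), "malicious"), ((9, 10), "malicious"), ((10, 9), "malicious")]

# An attacker with write access to the training data flips two labels,
# dragging the "benign" centroid toward the malicious cluster.
poisoned = [((0, 0), "benign"), ((1, 0), "benign"), ((0, 1), "benign"),
            ((10, 10), "benign"),   # flipped by the attacker
            ((9, 10), "benign"),    # flipped by the attacker
            ((10, 9), "malicious")]

suspicious_point = (6, 6)
print(predict(train(clean), suspicious_point))     # malicious
print(predict(train(poisoned), suspicious_point))  # benign: the boundary has shifted
```

Real poisoning attacks target far more complex models, but the failure mode is the same: corrupted training data quietly moves the decision boundary in the attacker’s favor.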

In another case, a virtual CISO warned of another persistent issue: “As a CISO, the number one risk of Gen AI is the potential leak of sensitive company data.” They were, of course, referring to sensitive company data being voluntarily input into AI tools without authorization. Once that data has left the organization, there is no longer any control over it.
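A common first mitigation is a lightweight pre-submission check. The sketch below is hypothetical (the patterns, function names, and blocking behavior are our own invention, not any interviewed organization’s tooling) and only shows the idea of screening prompts for obviously sensitive content before they leave the organization; a production DLP control would need far broader coverage:

```python
import re

# Hypothetical patterns -- a real DLP control would cover many more data types.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of the sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def submit_to_ai(prompt: str) -> str:
    """Block the prompt locally instead of forwarding it when it matches."""
    findings = check_prompt(prompt)
    if findings:
        return "BLOCKED: prompt contains " + ", ".join(findings)
    return "OK: prompt forwarded"  # placeholder for the real external API call

print(submit_to_ai("Summarize this contract for me"))
print(submit_to_ai("Debug this config: key=AKIAABCDEFGHIJKLMNOP"))
```

Even a crude filter like this turns a silent data leak into a visible, teachable moment for the user.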

The CISOs we talked with highlight that AI is not only a powerful tool but also an expanding attack surface. As threat actors adapt, so too must organizational strategies for monitoring, education, and control.

AI Usage Within Security Teams 

Also of interest during our interviews and polling was how AI is being used by security teams. 

CISOs report that AI is gradually making its way into company security operations, but adoption remains measured and deliberate. While a few organizations have embedded AI into critical workflows, most are taking a cautious, exploratory approach. 

They focus on testing tools for specific tasks such as triage, enrichment, or ticket classification before expanding further. 
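For a flavor of the kind of ticket-classification step teams pilot, here is a minimal sketch. The categories and keywords are invented for illustration, and a simple keyword score stands in for the AI model that would normally do the classification:

```python
# Invented triage categories and keywords; a keyword score stands in
# for the AI/ML classifier a real pilot would evaluate.
TRIAGE_RULES = {
    "phishing": ["suspicious email", "credential", "spoof", "phishing"],
    "malware": ["ransomware", "trojan", "infected", "malware"],
    "access": ["locked out", "password reset", "mfa", "permission"],
}

def classify_ticket(text: str) -> str:
    """Pick the category with the most keyword hits; defer to a human on zero hits."""
    text = text.lower()
    scores = {cat: sum(kw in text for kw in kws) for cat, kws in TRIAGE_RULES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "needs human triage"

print(classify_ticket("User got a suspicious email asking for credentials"))  # phishing
print(classify_ticket("Printer jam on floor 3"))  # needs human triage
```

The “defer to a human” default reflects the cautious posture CISOs describe: automation handles the clear cases, and everything ambiguous stays with the team.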

In many cases, this carefulness is driven by compliance requirements, the need for transparency and control, or concerns around premature reliance on immature technologies. 

Yet some CISOs are already seeing proven value in adjacent areas, as with this finance-sector CISO who shared: “AI turned out great for customer-facing ticket notes.” Ticket notes created by AI were clear, concise, and free of any bias or potentially unprofessional wording from technicians, ensuring alignment with company standards.

The varying approaches show a clear trend: AI is on the roadmap, but for most security teams, it is still being tested under supervision.

Challenges in Adopting AI in Cybersecurity 

CISOs and security teams face a broad spectrum of challenges when integrating AI into their cybersecurity strategies. According to our poll, the concerns are fairly spread, with a slight majority highlighting uncertainty around AI risks as the top hurdle, closely followed by compliance, legal, and regulatory issues. 

Other common obstacles include: 

  • Justifying budget and demonstrating ROI 
  • Integrating AI solutions with existing security tools 
  • Addressing the shortage of skilled personnel 
  • Securing leadership buy-in 
  • Keeping pace with rapid AI advancements 

A CISO from a tech company summed up the talent challenge succinctly: “We still lack skills and specialized experts in AI.” 

Meanwhile, a virtual CISO expressed skepticism about tangible benefits: “I feel there’s still a lack of return on investment when it comes to generative AI solutions. Detecting a port scan by reading ten lines of logs doesn’t bring much value.” 

While AI offers great promise, many organizations are still grappling with practical, organizational, and strategic barriers that slow down widespread adoption. In short, some CISOs have had targeted successes, while other security teams are still battling to squeeze ROI out of AI solutions. 

Examples of Implemented AI Technologies 

Another core area of our polling focused on how businesses have incorporated AI technologies over the past year. CISOs have led the adoption of a variety of AI technologies within their organizations. Beyond the traditional use of AI-based solutions for threat detection, many teams are exploring innovative applications to improve efficiency and user interaction. Among the practical implementations shared: 

  • “We implemented AI in our SOC for the investigation and remediation of false positives.” 
  • “We’ve been experimenting with a chatbot for our end users.” 
  • “Generative AI risk management directly in the browser.” 

It’s early days yet, but it’s good to see some tangible examples of how security teams are moving from exploration to production use cases, leveraging AI to enhance efficiency, user experience, and risk oversight. 


Supercharge Your Security Operations with AI

At Hornetsecurity, we’re transforming the cybersecurity landscape with our innovative solutions. Here is how:

  • Equip your organization with AI.MY, the AI Cyber Assistant, designed to enhance user awareness while lightening the load on your Security and Support teams.
  • Streamline routine requests, allowing your team to focus on what truly matters and address critical security challenges.
  • Experience the benefits of AI.MY as part of our comprehensive 365 Total Protection Plan 4, utilizing cutting-edge AI and machine learning for a stronger security posture.

Ready to revolutionize your approach to cybersecurity? Learn more and schedule a demo today!


Conclusion 

From our conversations and polling with CISOs, it’s clear that AI is rapidly becoming an essential asset in the cybersecurity landscape. While challenges around risk awareness, compliance, and skills remain, the growing experimentation and early successes demonstrate that AI’s potential is real and achievable. 

Most security teams are wisely adopting a measured approach: balancing innovation with control and laying the groundwork for broader, confident integration. With the right tools and education, AI enables CISOs to enhance threat detection, automate routine tasks, and strengthen their overall security posture. 

As a security vendor committed to empowering security teams with AI-driven solutions, we at Hornetsecurity see tremendous opportunity ahead. Together with our customers, we’re turning AI from hype into practical, scalable advantage and helping organizations protect what matters most in an ever-evolving threat landscape. 

FAQ

What are the main benefits of AI in cybersecurity? 

AI enhances threat detection, automates routine tasks, and improves operational efficiency in security. It gives organizations the ability to manage risks effectively while responding quickly to emerging cyber threats. 

What challenges do CISOs face when adopting AI? 

CISOs struggle with AI-related risk awareness, compliance issues, and skill shortages. In addition, justifying the budget and demonstrating ROI remain significant challenges in integrating AI solutions into existing security frameworks.

How can organizations improve end-user awareness of AI risks?   

Focused education and training programs are essential for end-users to understand AI-related risks. Organizations should cultivate a culture of compliance and regular awareness initiatives to ensure the responsible use of AI tools. 
