
Impact of AI on Cyber Security


Since ChatGPT stormed into the public consciousness in late 2022, the impact of Large Language Model (LLM) AI on business, society and cybersecurity has been nothing short of astounding.

Early predictions of automated attacks breezing past defenses and finding vulnerabilities at lightning speed were overblown. The impact of AI on cybersecurity has certainly been felt across the industry, but so far it has arguably helped defenders more than attackers.

That’s going to change over the next few years. The National Cyber Security Centre (NCSC) in the UK recently published its Impact of AI on cyber threat from now to 2027 assessment, which looks at the risks of AI-enabled cyberattacks against businesses and Critical National Infrastructure (CNI).

Whilst the assessment focuses on UK risk, cyber-attacks know no borders, and the advice applies to any organization in Europe or beyond.

In this article we’ll look at:

  • the areas where we have evidence of attackers using AI in their attacks,
  • how to create or adapt your cybersecurity strategy to take AI-powered threats into consideration,
  • and how you can use Hornetsecurity’s Advanced Threat Protection to defend your users.

Understanding the Impact of AI on Cyber Threats 

Lack of Transparency in Criminal Use of AI

Cyber criminals and nation-state cyber spies don’t exactly publish reports on how they’ve successfully implemented AI in their workflows, so we defenders have had to rely on indirect evidence to see how and in which ways AI affects their tactics.

OpenAI and Anthropic (and others) have published reports over the last few years detailing adversaries using their AI tools for various cyber-related tasks.

Early Use Cases: Coding Assistance

Early evidence showed coding assistance, which makes sense: legitimate developers were among the first to use AI to be more efficient. For both business applications and malware, this is coding assistance, not “describe the application and have the AI create it completely on its own”.

AI for Research and Information Gathering

There was also evidence of AI being used to search the web for technical facts or vulnerability research, tasks where a normal search engine would work just as well and not land you in an OpenAI report (looking at you, Iran).

Cybersecurity Report 2025

An In-Depth Analysis of the Microsoft 365 Threat Landscape Based on Insights from 55.6 Billion Emails

AI-Enhanced Phishing Campaigns

In the most common attack vector, phishing emails (and Teams/Slack messages), LLMs are used to:

  • produce flawless grammar,
  • craft more psychologically enticing lures,
  • and translate attacks into languages where users are less accustomed to phishing (Japan comes to mind).

North Korean AI-Powered Scams

There was an interesting finding in OpenAI’s recent report about a North Korean scam, initially targeting US companies hiring remote tech workers and developers. The scammers use laptop farms that they connect to remotely, so as to appear to be working in the right state or country. Their objective is threefold:

  • First, the developers produce work and thus get paid salaries that can be fed back to the regime. 
  • Second, tech workers often have privileged access, which can be used to steal intellectual property or, in the case of cryptocurrency companies, to steal funds.
  • Finally, if the ruse looks like it’s about to be discovered, traditional ransomware can be deployed for a potential payout.  

US law enforcement has been cracking down on these operations, so North Korea is branching out to Europe. The recent OpenAI report showed something new, however: North Korean attackers had automated the whole workflow for their IT worker scam.

Automation of Fraudulent Job Applications

They’re using AI to automatically create resumes aligned to job descriptions, generating consistent work histories, references and educational backgrounds. AI is also used to:

  • track and manage job applications,
  • complete job application tasks such as interview questions,
  • solve coding assignments. 

Note that this isn’t something they couldn’t do without AI; the automation simply provides more efficiency and scale, which is itself a sign of the impact of AI on cybersecurity.

AI Generated Personas

Anthropic’s (makers of Claude) report looked at efforts to automate influence-as-a-service by generating social media posts, as well as responses to others’ posts. The AI-generated personas were consistent in their political views, and AI image generation was also incorporated.

Overall, the system sounded good in theory but had limited viral impact.

Rise of In-House AI

Note that the window where we have visibility into this kind of usage of publicly available AI tools is closing. With open-weights models from DeepSeek and others now rivaling the latest frontier models, future AI threats will likely be built on in-house AI that we won’t have visibility into.

Looking Ahead: AI Will Only Get More Dangerous

Something that AI advocates often point out is that “this is the worst AI will ever be”: as the technology develops, it will only become more capable.

So, today’s failed influence operations might be tomorrow’s successful interference with a democratic election, and today’s finding of “easy” vulnerabilities might be tomorrow’s automated zero-day attack factory.

Key Judgments from the NCSC Assessment  

The recent assessment from the NCSC builds on one from early 2024 and has a longer time horizon, looking at threats from now to 2027. The assessment is easy to read, with the following key points.

Expected Impacts of AI on Cybersecurity:

  • AI will continue to make parts of cyber attacks more effective and efficient, leading to more attacks.
  • There will be a divide between systems that keep pace and have AI defenses built in, and those that don’t and thus become increasingly vulnerable.
  • This might (barring significant changes in mitigations) lead to critical systems becoming more vulnerable to advanced threat actors by 2027; keeping pace with frontier AI developments will thus be a requirement for strong defenses.
  • Proliferation of AI-enabled cyber tools will give an expanded range of state and non-state actors access to AI-powered intrusion tools.
  • The use of AI systems across society will increase the attack surface available for adversaries to exploit.
  • Insufficient cybersecurity defenses will increase the opportunity for AI-powered attackers to successfully compromise businesses and organizations.

Role of State and Criminal Actors

The assessment expects state actors to develop AI-powered tools and intrusion techniques first, followed by criminals, a pattern we’ve seen over the last couple of decades with other tools and techniques.

Vulnerability Research and Exploit Development

One area that’s covered is vulnerability research and exploit development (VRED), and we’ve seen examples of LLM-based AI tools finding “simple” vulnerabilities in code on their own.
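To make “simple” concrete, the textbook injection flaw below is the kind of bug current LLM tools can already spot and explain. This is an illustrative example of my own, not one taken from the NCSC assessment:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # SQL injection: attacker-controlled input is concatenated into the query,
    # so username = "' OR '1'='1" returns every row in the table. This is the
    # sort of "simple" vulnerability LLM-based tools can already flag.
    cursor = conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cursor.fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix: a parameterized query keeps data and SQL strictly separate.
    cursor = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```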

Human Involvement Still Necessary

The NCSC doesn’t expect fully AI-automated attacks by 2027, with human cyber actors still needed, but automation of various attack phases for efficiency and speed is likely.

Strategies for Cyber Resilience Against AI Threats 

Basic cyber hygiene still matters for every business, both to combat the impact of AI on cybersecurity and to counter traditional attacks.

Strengthening Core Systems

  • Patch software systems, track vulnerabilities, and prioritize critical systems and those that store sensitive data (see the sketch below for one way to track published vulnerabilities programmatically).
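As one way to keep an eye on newly published vulnerabilities for the products you run, here’s a minimal sketch that queries NIST’s public NVD CVE API (version 2.0). The keyword and severity filter are illustrative; production use should add an API key and rate-limit handling:

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_critical_cves(keyword: str, limit: int = 20) -> list[dict]:
    """Return CRITICAL-severity CVE records matching a product keyword."""
    resp = requests.get(
        NVD_API,
        params={
            "keywordSearch": keyword,       # e.g. "Microsoft Exchange"
            "cvssV3Severity": "CRITICAL",   # only the highest-severity findings
            "resultsPerPage": limit,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

for vuln in recent_critical_cves("Microsoft Exchange"):
    cve = vuln["cve"]
    print(cve["id"], cve["descriptions"][0]["value"][:80])
```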

Enforcing Strong Authentication

  • Implement phishing-resistant MFA (Windows Hello for Business, FIDO hardware keys or passkeys) for all users to ensure strong authentication.
  • Track non-human / machine identities as they’re used in your infrastructure and apply policies to these authentications too. The sketch after this list shows one way to audit where your users stand.
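As an example of auditing MFA registration, here’s a minimal sketch against the Microsoft Graph userRegistrationDetails report. It assumes you already hold an access token with the AuditLog.Read.All permission, and the set of method names treated as phishing resistant is illustrative; check the values your tenant actually reports:

```python
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/reports/authenticationMethods/userRegistrationDetails"

# Method names generally considered phishing resistant (illustrative set).
PHISHING_RESISTANT = {"fido2SecurityKey", "windowsHelloForBusiness", "passKeyDeviceBound"}

def users_without_phishing_resistant_mfa(access_token: str) -> list[str]:
    """List users who haven't registered any phishing-resistant method."""
    headers = {"Authorization": f"Bearer {access_token}"}
    flagged, url = [], GRAPH_URL
    while url:  # follow server-side paging via @odata.nextLink
        data = requests.get(url, headers=headers, timeout=30).json()
        for user in data.get("value", []):
            methods = set(user.get("methodsRegistered", []))
            if not methods & PHISHING_RESISTANT:
                flagged.append(user["userPrincipalName"])
        url = data.get("@odata.nextLink")
    return flagged
```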


Adopting a Zero Trust Approach

  • Apply a Zero Trust approach to cyber resiliency: verify each connection explicitly, assign least-privilege access, and assume breach. The last one is important, as it requires you to adapt monitoring and alerting so that when an attacker gains a foothold, they’re spotted and the attack can be mitigated early (see the sketch below).
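Translated into code, “verify explicitly, least privilege, assume breach” might look like the hypothetical per-request check below. The Identity fields and the audit_log helper are placeholders for your identity provider and SIEM, not any particular product’s API:

```python
from dataclasses import dataclass

@dataclass
class Identity:
    user: str
    scopes: set[str]
    device_compliant: bool

def audit_log(event: str, user: str, resource: str) -> None:
    # Placeholder: ship this to your SIEM, which "assume breach" relies on.
    print(f"{event}: {user} -> {resource}")

def authorize(identity: Identity, required_scope: str, resource: str) -> bool:
    # Verify explicitly: every request is checked, regardless of network location.
    if not identity.device_compliant:
        audit_log("denied_noncompliant_device", identity.user, resource)
        return False
    # Least privilege: the caller must hold exactly the scope this action needs.
    if required_scope not in identity.scopes:
        audit_log("denied_missing_scope", identity.user, resource)
        return False
    # Assume breach: allowed requests are logged too, so anomalous but
    # "authorized" activity (a stolen token, say) can still be spotted.
    audit_log("allowed", identity.user, resource)
    return True
```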

Managing Third-Party and Supply Chain Risks

  • Watch your suppliers and their supply chain; even if you have your own house in order, you might be compromised through another business you rely on. For the software side of this risk, the sketch below shows one way to screen dependencies for known vulnerabilities.
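Here’s a minimal sketch that checks a single dependency version against the public OSV.dev vulnerability database; the package and version used in the example are illustrative:

```python
import requests

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Query OSV.dev for known vulnerabilities in a specific dependency version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": package, "ecosystem": ecosystem}, "version": version},
        timeout=30,
    )
    resp.raise_for_status()
    # Each entry carries an ID such as a CVE or GHSA identifier.
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

# Example: an old release of the requests library with published advisories.
print(known_vulnerabilities("requests", "2.19.0"))
```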

Building Human Resilience

  • Train your users on AI-related threats, preferably using a tool with low administrative overhead that relies on short, repeated training sessions to enhance your “human firewalls” and hone their ability to spot social engineering.  

Ensuring Data Protection and Recovery

  • Have tested backups of all relevant corporate data, stored on immutable storage, so that adversaries can’t corrupt or delete your backups to force a payout in a ransomware attack (see the sketch after this list for one cloud example).
  • Test and rehearse your incident response plans so your staff (at all levels of the organization) know what to do when the inevitable attack slips through your defenses.  
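As one example of immutable storage, here’s a sketch using Amazon S3 Object Lock via boto3. It assumes the bucket was created with Object Lock enabled; the bucket name, key and retention period are illustrative:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

def upload_immutable_backup(bucket: str, key: str, path: str, retain_days: int = 30) -> None:
    """Upload a backup object that can't be deleted or overwritten until retention expires."""
    with open(path, "rb") as f:
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=f,
            # COMPLIANCE mode: not even the root account can shorten retention,
            # so ransomware operators can't purge the backup with stolen credentials.
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=retain_days),
        )

# Example call (illustrative names and paths):
upload_immutable_backup("example-backup-bucket", "db/2025-01-01.dump", "backup.dump")
```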

Preventing Data Leakage

  • Use a tool such as AI Recipient Validation to ensure users don’t send sensitive information to the wrong people.  

Maintain an Edge Over AI-Enhanced Cyber Threats with Advanced Threat Protection

Is your organization ready to tackle the ever-evolving landscape of cyber threats accelerated by AI? Don’t risk your security on uncertainty. Implement leading-edge strategies, such as Advanced Threat Protection by Hornetsecurity, and engage in proactive risk management to effectively combat AI-enabled attacks. 


Reach out today to learn how you can strengthen your defenses and safeguard your vital assets. 


Conclusion 

We’ve only seen the beginnings of the impact of AI on cybersecurity, and as the NCSC points out, unless you’re aware of the risks and mitigating them accordingly, there’s a strong likelihood that your defenses won’t be adequate.

Hornetsecurity’s Advanced Threat Protection is an AI- and ML-based defense tool that catches what others miss, and an essential part of defending your users against both traditional and AI-powered threats.

FAQ

How is AI affecting cyber threats? 

AI enhances attack efficiency, enabling cybercriminals to automate tasks, improve phishing tactics, and exploit vulnerabilities more quickly than before. 

What should organizations do to prepare for AI-driven cyber threats?   

Implement basic cyber hygiene, use phishing-resistant MFA, apply Zero Trust principles, and train users to recognize AI-related threats proactively. 

Will AI completely automate cyber-attacks in the future? 

While AI will automate some processes, human involvement will still be necessary for many aspects of cyber-attacks, particularly strategic planning and execution.
