Artificial Intelligence (AI) attacks are not quite at the level of Skynet depicted in the 1984 film “The Terminator,” but that hasn’t stopped the speculation, or the real-world implications, surrounding advances in AI technology.

Although the science fiction of “The Terminator” is far-fetched, we have now seen a rise in both attacks powered by Artificial Intelligence and attacks on AI systems.

Let’s dive into the vulnerabilities of Artificial Intelligence and Machine Learning models to understand how we can protect that Achilles’ heel.

What Is Artificial Intelligence?

Artificial Intelligence combines computer science with large, complex datasets. In conjunction with machine learning (ML) and deep learning, AI algorithms can take input information and make predictions and classifications based on the data they can access.

The “intelligence” in AI refers to these programs’ ability to mimic human characteristics: the capacity to reason, discover, generalize, and learn from past experience, and then to adjust their behavior to improve or change actions based on what they have learned.

What’s the Role of AI/ML in Cybersecurity?

The primary role of artificial intelligence in data security and cyber security is its ability to process large amounts of logging and monitoring data to find anomalies and to recommend or make adjustments to security controls.

Human analysts can miss the specific patterns that identify potential threats; AI rapidly improves the accuracy of these detections and helps filter out false positives.

This allows Security Operations Centre (SOC) teams to focus on critical, pre-filtered alerts, improving their ability to respond to attacks and implement security improvements rapidly. Because these systems are themselves susceptible to Artificial Intelligence attacks, cybersecurity teams must collaborate with development teams and make full use of the technologies available.
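As a rough illustration of that anomaly-finding role, here is a minimal sketch that trains an isolation forest on simulated “normal” log features and then scores a few suspicious events. The feature set, values, and model choice are illustrative assumptions, not a production detection pipeline.

```python
# A minimal anomaly-detection sketch over hypothetical log features:
# [bytes_out_mb, failed_logins, requests_per_min]
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" traffic used as the training baseline
normal_traffic = rng.normal(loc=[5.0, 0.2, 30.0], scale=[1.0, 0.5, 5.0], size=(1000, 3))

# A handful of suspicious events an analyst might otherwise miss
suspicious = np.array([
    [250.0, 0.0, 35.0],   # unusually large outbound transfer
    [4.0, 25.0, 28.0],    # brute-force style login failures
    [6.0, 1.0, 900.0],    # request flood
])

# Train only on traffic assumed to be normal, then score new events
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for inliers
print(detector.predict(suspicious))          # typically [-1 -1 -1]
print(detector.predict(normal_traffic[:3]))  # mostly 1s
```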

This level of autonomy in prevention and detection cuts both ways: the same tools, models, and algorithms are also used with malicious intent, and such misuse has increased significantly across businesses of all sizes.

Types of AI Attacks & the Dangers of Cyber Attacks on AI

Generative AI, the creative generation of text, images, videos, and music, is one of the fastest-growing applications of AI models.

As generative AI has improved, attackers have been leveraging these models to create convincing content for malware code, phishing emails, fraud, and voice/video impersonation scams.

As the demand for AI models increases across business and consumer environments, the attack surface of these services expands dramatically. Traditionally labor-intensive roles can be handed to machine learning to reduce overhead and improve these services.

As the prevalence of AI increases, so must the defenses against attacks on AI. The following sections outline some of the more common methods of Artificial Intelligence attacks:

Poisoning

AI uses training data and input information to grow and adapt its responses and outputs. An Artificial Intelligence poisoning attack occurs when that training data is intentionally tampered with or injected with malicious data.

This affects the output of the AI, which can cause incorrect, false, or even highly offensive responses or results.
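A minimal sketch of how poisoning degrades a model, using label flipping on a toy scikit-learn classifier and a synthetic dataset; real poisoning would target the data pipeline feeding a production model rather than an in-memory array, and the dataset and flip rate here are assumptions for illustration.

```python
# Label-flipping poisoning on a toy classifier
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model for comparison
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 30% of the training records
rng = np.random.default_rng(seed=1)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

# The poisoned model scores noticeably worse on untouched test data
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```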

One example of this type of Artificial Intelligence attack was “Tay,” an early chatbot released by Microsoft in 2016 as an experiment. Unfortunately, people on the internet poisoned the chatbot’s data with far-right content and ideologies, which caused the AI to start responding with extremist remarks.

Within 24 hours, the chatbot was taken down after producing a tirade of more than 96,000 tweets, many of them highly offensive.

Inference

AI systems are trained on large datasets, often containing names, addresses, birthdays, passwords, payment cards, health information, phone numbers, and other forms of sensitive information.

An inference attack aims to reveal this sensitive information by probing the machine learning model, reviewing its responses, and altering the prompts until the system discloses the data.

Membership Inference (MI) is when the attacker attempts to determine whether specific records were part of the model’s training data, as a first step towards rebuilding that data for exploitation. They run candidate records through the machine learning model and examine the responses to decide whether each record belongs to the training dataset.

In most cases, the machine learning model will return a higher confidence score when presented with records from its training data than with unknown or new data.
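The sketch below illustrates that confidence gap with a deliberately overfitted toy model and a simple threshold; the dataset, model, and threshold are assumptions for illustration rather than a real attack toolkit.

```python
# Confidence-threshold membership inference against an overfitted model
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# flip_y adds label noise so memorization stands out more clearly
X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.2, random_state=0)
X_member, X_nonmember, y_member, _ = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit: unpruned trees memorize the member records
target_model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
target_model.fit(X_member, y_member)

def top_confidence(model, X):
    """Highest predicted class probability for each queried record."""
    return model.predict_proba(X).max(axis=1)

# The attacker guesses "member" whenever confidence exceeds a chosen threshold
threshold = 0.9
member_guess = top_confidence(target_model, X_member) > threshold
nonmember_guess = top_confidence(target_model, X_nonmember) > threshold

print("flagged as members (true members):    ", member_guess.mean())
print("flagged as members (true non-members):", nonmember_guess.mean())
```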

The other type of inference attack is Attribute Inference (AI), where the attacker already has partial knowledge of the training records or datasets and exploits this to expose the missing attributes.

In addition, Approximate Attribute Inference (AA) aims to find values close to the target attributes. These attacks become more successful when the target machine learning model has been overfitted, meaning it has effectively memorized its training data rather than learning to generalize, often because the training data is too small or has become stale.

As AI models have improved in countering this type of exploit, attackers have combined both methods; these types of Artificial Intelligence attacks are called Strong Membership Inference (SMI).

Where membership inference tends to confuse member records with non-members that have similar attributes, an SMI attack can distinguish a member from a non-member even when they are nearly identical. Although significantly more complex, this method can still be hit or miss.

Evasion

An evasion attack occurs when the machine learning model is fed an “adversarial example”: input data that is carefully altered to look like normal input but tampered with just enough to throw off the classifier.

The goal is to create a blind spot that forces classification errors. For example, images of stop signs could be subtly altered so they are classified as something else; when the AI interprets the stop sign input, it misclassifies the object.

The businesses most commonly targeted by these types of Artificial Intelligence attacks are driverless automobile manufacturers.
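As a rough illustration, the sketch below applies an FGSM-style perturbation to a simple linear classifier with white-box access to its weights; attacks on image classifiers follow the same idea using the gradients of a neural network. All models and values here are illustrative assumptions.

```python
# FGSM-style evasion against a linear classifier (white-box)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a class-1 sample that is classified correctly but sits near the boundary
scores = model.decision_function(X)
candidates = np.where((y == 1) & (scores > 0))[0]
x = X[candidates[np.argmin(scores[candidates])]]
print("original prediction:", model.predict([x])[0])

# For a linear model, the gradient of the class-1 score w.r.t. the input is the
# weight vector, so the attack nudges every feature slightly against it.
epsilon = 0.5
w = model.coef_[0]
x_adv = x - epsilon * np.sign(w)

print("adversarial prediction:", model.predict([x_adv])[0])
print("max per-feature change:", np.abs(x_adv - x).max())
```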

Extraction

Model extraction attacks are among the more prominent attacks on Artificial Intelligence. The attack targets the machine learning model itself, aiming to replicate its behavior or extract its training data.

Other attack methods, such as inference, are used to probe the model and extract as much of this data as possible for further exploitation.
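The sketch below shows the basic query-and-copy pattern: a hypothetical “victim” model is queried with attacker-generated inputs, and a surrogate model is trained on the stolen predictions. The models, data, and query volume are stand-ins chosen for illustration.

```python
# Model extraction via query access: train a surrogate that mimics the victim
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Victim model trained on private data the attacker never sees
X_private, y_private = make_classification(n_samples=2000, n_features=20, random_state=0)
victim = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_private, y_private)

# Attacker generates their own query inputs and harvests the victim's labels
rng = np.random.default_rng(seed=2)
X_queries = rng.normal(size=(5000, 20))
stolen_labels = victim.predict(X_queries)

# Surrogate model trained purely on query/response pairs
surrogate = LogisticRegression(max_iter=1000).fit(X_queries, stolen_labels)

# Agreement between surrogate and victim on fresh inputs
X_fresh = rng.normal(size=(1000, 20))
agreement = accuracy_score(victim.predict(X_fresh), surrogate.predict(X_fresh))
print(f"surrogate agrees with victim on {agreement:.0%} of fresh queries")
```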

Why do Artificial Intelligence Attacks Exist?

Artificial Intelligence attacks exist simply because AI systems can be exploited. Compared with the targets of traditional cyber security attacks, the underlying Machine Learning models and algorithms are susceptible to a broader range of attacks.

This usually isn’t a direct result of how the AI models are developed; it stems more from the shortcomings of the current AI landscape and the advancement of attack methods.

With anything built around adaptive learning, be it Artificial Intelligence or even humans, there is always an opportunity for incorrect information to be seeded and exploited to coerce the target. Although humans can generally interpret data more dynamically, AI is purely logical.

This hasn’t slowed the speed at which AI has evolved: at least five AI models have now been claimed to pass the “Turing Test.” The first was Eugene Goostman, a chatbot presented as a 13-year-old Ukrainian boy, which convinced 33% of the judges in a 2014 test that it was human.

How to Prevent and Recover from AI Attacks?

As with any proactive defense mechanism, diversity of approach provides the best outcome: implementing multiple controls both to protect against Artificial Intelligence attacks and to detect when an attack has occurred.

Dynamic review and assessment of the training and source data is a crucial area to focus on. This is one of the most significant weaknesses of any AI; if the information the model uses to build its responses has been compromised, the impact is severe and recovery is costly.

Continuous data security analysis and improvement throughout the development lifecycle will reduce the likelihood of poisoning. Reviewing and revalidating systems, features, and components should be a fundamental part of any development process.
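A minimal sketch of what such checks might look like, assuming a simple tabular dataset: a file digest to detect tampering between approved versions, and a statistical screen that flags records sitting far outside the expected distribution. Paths, thresholds, and column names are illustrative assumptions.

```python
# Pre-training data integrity checks: tamper detection plus an outlier screen
import hashlib
import numpy as np
import pandas as pd

def file_digest(path: str) -> str:
    """SHA-256 of a dataset file; compare against the digest recorded at approval time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def screen_outliers(df: pd.DataFrame, z_threshold: float = 4.0) -> pd.DataFrame:
    """Flag rows whose numeric features sit unusually far from the column mean."""
    numeric = df.select_dtypes(include=[np.number])
    z_scores = (numeric - numeric.mean()) / numeric.std(ddof=0)
    return df[(z_scores.abs() > z_threshold).any(axis=1)]

# Toy dataset with one injected record of the kind a poisoning attempt might add
rng = np.random.default_rng(seed=0)
df = pd.DataFrame({"amount": rng.normal(100, 10, 500), "age": rng.normal(40, 8, 500)})
df.loc[len(df)] = {"amount": 10_000.0, "age": 41.0}

print(screen_outliers(df))  # the injected record is flagged for manual review
# In a real pipeline: assert file_digest("training_data.csv") == RECORDED_DIGEST
```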

Dynamic cybersecurity risk assessments should be performed by a third party with the capability to review and interpret AI systems. Having an unbiased partner review the infrastructure and AI systems ensures that both business and technical requirements are met.

The AI field is rapidly growing, with new attacks, vulnerabilities, and exploits emerging daily. Ensuring collaboration between cybersecurity teams, developers, and data engineers is critical to maintaining a healthy lifecycle.

Countering Attacks on AI in the Future

The focus on countering Artificial Intelligence attacks is increasing as the demand for AI technologies and services hits mainstream business.

The primary method of combating attacks on AI in the future is laying down the foundations and frameworks needed to govern how AI is implemented.

The European Telecommunications Standards Institute (ETSI) Industry Specification Group and The European Union Agency for Cyber Security (ENISA) have both developed frameworks and technical standards: Securing Artificial Intelligence (ETSI: ISG SAI) and the Framework for AI Cybersecurity Best Practices (ENISA: FAICP).

These standards have a crucial role in improving the security of existing and new AI technologies. These standards address three aspects of AI:

  • Securing AI from attack: AI is a system component, and it and its underlying infrastructure require adequate protection;
  • Mitigating against malicious AI: Defending against conventional attacks that are enhanced or automated through the use of AI;
  • Using AI to enhance security measures: Protecting systems against attack by using AI as part of the solution or countermeasures.

As AI technology continues to improve, so must our governance and tooling; this will significantly reduce the risk of Artificial Intelligence attacks.

Looking ahead, Hornetsecurity recognizes the growing prevalence of AI attacks, which are poised to pose daily challenges for security professionals. We recommend exploring our annual Cyber Security Report, which offers a thorough analysis of the Microsoft 365 threat landscape.

This comprehensive report is crafted from meticulous real-world data collection and study conducted by Hornetsecurity’s dedicated Security Lab team.

To properly protect your cyber environment, use Hornetsecurity Advanced Threat Protection and Security Awareness Service to secure your critical data.

We work hard perpetually to give our customers confidence in their Spam & Malware Protection, Email Encryption, and Email Archiving strategies.

To keep up with the latest articles and practices, visit our Hornetsecurity blog now. Until the next one, hasta la vista, baby.

Conclusion

The undeniable power of AI is accompanied by a critical vulnerability—its susceptibility to cyber-attacks. Often referred to as AI’s Achilles’ heel, this inherent weakness demands rigorous cybersecurity measures.

AI is here to stay and will influence cyber security, both offense and defense, for the foreseeable future. This article covered many of the ways that AI systems can be attacked and subverted; as with any technology we use in business, don’t assume that AI is safe without paying attention.

As we embrace the benefits of artificial intelligence, safeguarding against malicious exploits becomes imperative for the integrity and reliability of AI systems in an increasingly interconnected world.

FAQ

What is an example of an AI attack?

There are many types of Artificial Intelligence attacks, but the adversarial example is a good one. An attacker could take an image of a dog, apply digital camouflage invisible to the human eye over the top of the original image, and cause the model to re-classify the dog as a cat. It seems innocent enough until you consider the use case of traffic lights, stop signs, and speed limits for driverless cars.

How is artificial intelligence used in cyber-attacks?

Generative AI is the primary AI model used to create content for cyber-attack campaigns, such as phishing, impersonation for social engineering, and malware code generation.

How is AI a threat to security?

The improvements in and accessibility of generative Artificial Intelligence have changed the cybersecurity landscape. Just as AI can generate text, images, music, and videos, it can also create malware. Some examples seen are:

  • Automated malware;
  • Cyber-attack optimization;
  • Bot vs Bot attacks;
  • Intrusion probing;
  • Physical safety (Autonomous cars, infrastructure, etc.).

What are some examples of AI in cyber security?

  • Malware and phishing detection;
  • Task Automation;
  • Intrusion detection and prevention;
  • Breach risk prediction;
  • Knowledge consolidation;
  • Detection and prioritization of new threats.

Will AI manipulate humans?

In short, yes; we have already seen this with deepfake videos and AI-generated phishing campaigns. As the technology improves, so do the possibilities for human manipulation.