Adversarial Octopus

Attack Demo for AI-Driven Facial Recognition Engine




The intention behind this project

Driven by our mission to increase trust in AI, Adversa’s AI Red Team is constantly exploring new methods of assessing and protecting mission-critical AI applications.

Recently, we’ve discovered a new way of attacking Facial Recognition systems and decided to show it in practice. Our demonstration reveals that current AI-driven facial recognition tools are vulnerable to attacks that may lead to severe consequences.

There are well-known problems in facial recognition systems such as bias that could lead to fraud or even wrong prosecutions. Yet, we believe that the topic of attacks against AI systems requires much more attention. We aim to raise awareness and help enterprises and governments deal with the emerging problem of Adversarial Machine Learning.


Attack details

We’ve developed a new attack on AI-driven facial recognition systems that can change your photo in such a way that an AI system recognizes you as a different person, in fact as anyone you want.

This is possible because of imperfections in currently available facial recognition algorithms and AI applications in general. This type of attack may lead to dire consequences and may be used both in poisoning scenarios, by subverting computer vision algorithms during training, and in evasion scenarios, such as producing stealth deepfakes.

The new attack is able to bypass facial recognition services, applications, and APIs, including PimEyes, which the Washington Post has called the most advanced online facial recognition search engine on the planet. Its main feature is that it combines various approaches for maximum efficiency.

This attack on PimEyes was built with the following methods from our attack framework (a simplified sketch follows the list):

  • For better Transferability, it was trained on an ensemble of various facial recognition models together with random noise and blur.
  • For better Accuracy, it was designed to calculate adversarial changes for each layer of a neural network and use a random face detection frame.
  • For better Imperceptibility, it was optimized for small changes to each pixel and used special functions to smooth adversarial noise.
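
We cannot share the exploit itself, but the building blocks above follow well-known adversarial ML techniques. Below is a minimal, hypothetical sketch of an ensemble-based, targeted embedding attack in PyTorch; the surrogate models, hyperparameters, and helper names are illustrative assumptions, not our actual framework.

```python
# Hypothetical sketch of an ensemble-based targeted embedding attack (PGD-style).
# The surrogate models, hyperparameters and helpers are illustrative assumptions,
# not the actual Adversarial Octopus implementation.
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def targeted_face_attack(source_img, target_embs, surrogate_models,
                         eps=4 / 255, alpha=1 / 255, steps=200):
    """source_img: (1, 3, H, W) tensor in [0, 1]; target_embs: embeddings of the
    target identity, one per surrogate model."""
    delta = torch.zeros_like(source_img, requires_grad=True)

    for _ in range(steps):
        loss = 0.0
        for model, target_emb in zip(surrogate_models, target_embs):
            # Expectation over transformations: random noise and blur improve
            # transferability to unseen (black-box) face recognition systems.
            x = source_img + delta
            x = x + 0.01 * torch.randn_like(x)
            x = gaussian_blur(x, kernel_size=3)
            emb = model(x.clamp(0, 1))
            # Pull the embedding towards the target identity.
            loss = loss + F.cosine_similarity(emb, target_emb).mean()
        loss = loss / len(surrogate_models)

        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            # Smooth the perturbation so the adversarial noise is less visible.
            delta.copy_(gaussian_blur(delta, kernel_size=3))
            # Keep every pixel change small (L-infinity budget) for imperceptibility.
            delta.clamp_(-eps, eps)
            delta.copy_((source_img + delta).clamp(0, 1) - source_img)

    return (source_img + delta).detach()
```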

We follow the principle of responsible disclosure and are currently taking coordinated steps with organizations to protect their critical AI applications from this attack, so we cannot release the exploit code publicly yet.


Attack Demo

We present an example in which PimEyes.com (the most popular search engine for public images, similar to Clearview, a commercial facial recognition database sold to law enforcement and governments) mistook a man for Elon Musk in a photo.

This new black-box, one-shot, stealthy, transferable attack is able to bypass facial recognition AI models and APIs, including the most advanced online facial recognition search engine, PimEyes.com.

You can see a demo of the “Adversarial Octopus” targeted attack below.



Who can exploit such vulnerabilities

Uniquely, this is a black-box attack developed without any detailed knowledge of the algorithms used by the search engine, and the exploit is transferable to any AI application dealing with faces, whether for internet services, biometric security, surveillance, law enforcement, or other scenarios.
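
As an illustration of what “transferable” means in practice, here is a minimal, hypothetical sanity check one could run before attempting a black-box attack: verify that the adversarial image still matches the target identity on a held-out face model that was not part of the surrogate ensemble. The model and threshold are assumptions for the example.

```python
# Hypothetical transferability check: the held-out model and threshold are
# illustrative assumptions, not part of our framework.
import torch.nn.functional as F

def transfers(holdout_model, adv_img, target_img, threshold=0.6):
    adv_emb = holdout_model(adv_img)
    target_emb = holdout_model(target_img)
    # High cosine similarity on a model never seen during optimization suggests
    # the perturbation is not overfitted to the surrogates and is more likely
    # to transfer to black-box APIs.
    return F.cosine_similarity(adv_emb, target_emb).item() > threshold
```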

The existence of such vulnerabilities in AI applications and facial recognition engines, in particular, may lead to dire consequences.

  • Hacktivists may wreak havoc in AI-driven internet platforms that use face properties as input for decisions or further training. Attackers can poison or evade the algorithms of big internet companies simply by manipulating their profile pictures.

  • Cybercriminals can steal personal identities and bypass AI-driven biometric authentication or identity verification systems in banks, trading platforms, or other services that offer authenticated remote assistance. This attack can be even more stealthy in every scenario where traditional deepfakes can be applied.

  • Terrorists or dissidents may secretly use it to hide their internet activities on social media from law enforcement. It acts as a mask or fake identity for the virtual world we currently live in.


Where the attack is applicable

The main feature of this attack is that it is applicable to multiple AI implementations, including online APIs and physical devices. It is constructed so that it can adapt to the target environment, which is why we call it Adversarial Octopus. Besides that, it shares three important traits with this smart creature.


Features

Octopuses are rightfully considered among the smartest creatures

FIRST, the combination of various methods for attack imperceptibility recalls its mimicry abilities. SECOND, applying target-specific changes for better transferability recalls its cleverness and ability to adapt to the environment. THIRD, the one-shot, black-box, targeted nature of the attack recalls its well-planned behavior: long preparation followed by fast, precise action.



How to protect from this attack

Unfortunately, there is no one-size-fits-all protection from such attacks due to fundamental issues in deep learning algorithms. It is a complex problem that involves multiple actions, each of which can reduce the risk of such attacks. AI isn't born protected; you have to grow and train it with security in mind to make it trusted, just as we raise our kids to be healthy.

Here are the main steps to perform: 

  1. Start with adversarial testing, or AI Red Teaming. It is the analog of traditional penetration testing for AI systems and is absolutely necessary for any AI, much like hand washing to prevent viruses.

  2. Develop and train AI with adversarial scenarios in mind, using adversarial training, model hardening, data cleaning, and other defense methods, just as we encourage kids to do sports to stay healthy. A minimal illustration of adversarial training follows this list.

  3. Learn about and detect new threats to AI in critical decision-making applications by constantly following the latest developments in adversarial machine learning, like training kids in advanced skills if they plan to take part in extreme sports.
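
As one concrete illustration of step 2, here is a minimal, generic adversarial training loop (FGSM-based) in PyTorch; the model, data loader, and hyperparameters are placeholders, not a prescription for any particular system.

```python
# Minimal, generic adversarial training loop (FGSM-style) as an illustration of
# step 2 above. The model, data loader and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=2 / 255):
    model.train()
    for images, labels in loader:
        # Craft an FGSM adversarial example for each batch on the fly.
        images.requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad, = torch.autograd.grad(loss, images)
        adv_images = (images + eps * grad.sign()).clamp(0, 1).detach()

        # Train on a mixture of clean and adversarial examples so the model
        # keeps clean accuracy while becoming more robust to perturbations.
        optimizer.zero_grad()
        clean_loss = F.cross_entropy(model(images.detach()), labels)
        adv_loss = F.cross_entropy(model(adv_images), labels)
        (0.5 * clean_loss + 0.5 * adv_loss).backward()
        optimizer.step()
```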


How we can help

In the wake of growing interest in practical solutions for securing AI systems against advanced attacks, we have developed our own technology for testing facial recognition systems against such attacks.

We are looking for early adopters and forward-thinking technology companies to partner with us on implementing adversarial testing capabilities in your software and machine learning development lifecycles (SDLC and MLLC) to increase trust in your AI applications and provide your customers with a reliable AI service.


FAQ


Why is it important?

It’s a fundamental problem of all facial recognition algorithms, and it’s vital to ensure that AI-driven solutions are safe and trustworthy.


Why facial recognition?

According to our report “The Road to Secure and Trusted AI”, the Internet industry is the most popular target for adversarial ML attacks (29%), and facial recognition is one of the most attacked AI applications (2nd place) after image classification.


Why did you decide to launch this attack?

In line with our mission of Secure and Trusted AI, we aimed to demonstrate that the AI industry is woefully unprepared for AI regulations, at least from a security standpoint.


What are the differences from other similar attacks?

The Adversarial Octopus attack is multi-functional (evasion or poisoning); it is a one-shot, black-box attack that is transferable across various environments and applications; and it combines various methods for higher attack accuracy.


What are the risks posed by adversaries?

It brings huge reputational risks for businesses and identity theft risks for individuals. Criminals can collect personal information and commit identity fraud, which could have a significant impact on your personal life, including your finances.


Who should care about this?

Face recognition is one of the most popular AI technologies. It is a significant part not only of biometrics and surveillance applications but is also used in retail, finance, internet, robotics, advertising, and almost every industry.


Where is this attack applicable?

The current attack is demonstrated in a digital environment; however, the approach behind it can be used to construct physical attacks as well. With the same method, we can apply adversarial filters to physical objects such as sunglasses.
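
For illustration, a masked perturbation is one common way such a physical variant is constructed: the adversarial changes are restricted to a printable accessory region, such as a sunglasses-shaped mask. The sketch below is a simplified assumption of that idea, not our actual physical attack.

```python
# Hypothetical sketch of constraining the perturbation to a physical accessory
# region (e.g. a sunglasses-shaped mask) so it could be printed and worn.
# All names and parameters here are illustrative assumptions.
import torch

def apply_masked_perturbation(source_img, delta, mask):
    """source_img, delta: (1, 3, H, W) tensors in [0, 1]; mask: (1, 1, H, W)
    binary tensor, 1 where the accessory covers the face, 0 elsewhere."""
    # Only the masked region is perturbed; the rest of the face stays untouched,
    # which is what makes a printed adversarial accessory feasible.
    return (source_img + delta * mask).clamp(0, 1)
```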


How to deal with that?

  1. Start with adversarial testing. It is like QA for AI and absolutely necessary for any AI system, much like hand washing to prevent viruses.
  2. Develop and train AI with adversarial scenarios in mind, using adversarial training, model hardening, data cleaning, and other defense methods, just as we teach kids self-defense.
  3. Analyze and detect new threats to AI in critical decision-making applications by constantly following the latest developments in adversarial machine learning.

Learn more in our report


Where can I learn more on this topic?

Recently, we released an analytical report, “The Road to Secure and Trusted AI”. It contains a detailed analysis of more than 2,000 security-related research papers and describes the most common AI vulnerabilities, real-life attacks, recommendations, and predictions for the industry's further growth.

