AI Red Teaming: Hacking Facial Recognition



The intention behind this project 

Nowadays artificial intelligence technologies are gaining popularity, and facial recognition systems have become essential applications of AI. A real threat to them is so-called adversarial attacks. Such attacks alter the original face so that the computer fails to recognize it, confusing it with a different person. The key feature of these attacks is that the human eye does not spot an obvious falsification: both faces look almost identical. Previously, such imperceptible attacks were demonstrated only in the digital world.

Driven by the idea of making artificial intelligence secure and trustworthy, the Adversa.AI red team is constantly exploring new methods of assessing and defending mission-critical AI applications. Earlier, they presented a new imperceptible and transferable way of attacking AI facial recognition systems in the digital world. That research attracted much attention in the media and raised a number of questions, such as whether such methods transfer to the physical world and whether it is possible to make a physical attack stealthy and universal.

We decided to demonstrate such an attack in practice to warn enterprises and governments and help them deal with the emerging threat of adversarial attacks in the physical world.


Attack description

With the help of our previously demonstrated Adversarial Octopus framework, we developed a new attack on physical AI-driven facial recognition systems that makes an AI system recognize you as a different person, in fact as anyone you want. During our research we analyzed various shapes, methods, and environmental conditions to find an adversarial form with the best combination of misclassification rate, transferability, and imperceptibility. A simplified sketch of this kind of optimization follows the list below.

As a result, we created physical glasses that:

  • work in a physical environment even while a person moves, which matters for biometric solutions with liveness detection;
  • transfer to multiple AI models by targeting universal vulnerabilities of deep learning architectures rather than a particular model;
  • have a monochrome texture, as opposed to previously published acid-like patches, which are suspicious and easy to detect.
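To make the idea concrete, here is a minimal sketch of the kind of impersonation optimization involved. It is not the Adversarial Octopus implementation; `embedder`, `face`, `target_embedding`, and `glasses_mask` are assumed placeholders for any differentiable open-source face-embedding model, the attacker's face image, the target identity's embedding, and a binary mask of the glasses region.

```python
# Minimal sketch (not the Adversarial Octopus implementation): optimizing an
# eyeglasses-shaped adversarial texture against a face-embedding model.
# Assumed inputs:
#   embedder         - any differentiable face-recognition network returning embeddings
#   face             - attacker's face image, tensor of shape (1, 3, H, W) in [0, 1]
#   target_embedding - embedding of the target identity (e.g. "Elon Musk")
#   glasses_mask     - binary tensor (1, 1, H, W) marking the glasses region
import torch
import torch.nn.functional as F

def optimize_glasses(embedder, face, target_embedding, glasses_mask,
                     steps=500, lr=0.01):
    texture = torch.zeros_like(face, requires_grad=True)   # adversarial pattern
    optimizer = torch.optim.Adam([texture], lr=lr)

    for _ in range(steps):
        # Paste the (bounded) texture only inside the glasses region.
        adv_face = face * (1 - glasses_mask) + torch.sigmoid(texture) * glasses_mask

        # Impersonation objective: pull the embedding of the adversarial face
        # toward the target identity's embedding.
        emb = embedder(adv_face)
        loss = 1 - F.cosine_similarity(emb, target_embedding).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (face * (1 - glasses_mask) + torch.sigmoid(texture) * glasses_mask).detach()
```

In practice the optimized texture still has to survive printing and real-world capture, which is exactly what the transferability work below addresses.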


We presented an example of how a random person can put on special glasses and be recognized as Elon Musk by the most common open-source AI facial recognition algorithms.

As a result of our research, the physical glasses work in a physical environment even while a person performs actions such as blinking, head shaking, or smiling, which are exactly the checks used by liveness detection engines in modern face recognition.

 

[Embedded TikTok video from @adversa.ai: "Adversarial glasses can bypass physical AI facial recognition and turn everybody into Elon Musk."]

 

Attack transferability to various environments 

The main feature of this attack is that it’s applicable to various AI implementations: it is constructed so that it can adapt to the environment as well as to differences in devices and preprocessing. The variables we account for include the following.

Environment features

  • Light
  • Brightness
  • Distance to an object

 

Device features

  • Resolution quality
  • Color rendering

 

Preprocessing features

  • Codec compression
  • Data transfer compression

 

All these variables may differ in real cyber-physical attacks. Our combination of approaches allowed us to build an exploit, in our case physical glasses, that works across these varying conditions.
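A standard way to achieve this kind of robustness is expectation over transformation (EOT): during optimization, the adversarial face is passed through randomly sampled transforms that imitate the environment, device, and preprocessing variables listed above. The sketch below is illustrative rather than our exact pipeline, and reuses the assumed `embedder` and adversarial-face placeholders from the earlier sketch.

```python
# Illustrative EOT-style robustness sketch (not our exact pipeline): at each
# step the adversarial face is passed through randomly sampled environment,
# device, and preprocessing transforms before computing the loss, so the
# resulting glasses keep working under varying conditions.
import random
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def random_environment(img):
    # Environment: lighting/brightness and distance to the camera.
    img = TF.adjust_brightness(img, random.uniform(0.7, 1.3))
    scale = random.uniform(0.6, 1.0)                       # simulates distance
    h, w = img.shape[-2:]
    img = TF.resize(img, [int(h * scale), int(w * scale)], antialias=True)
    img = TF.resize(img, [h, w], antialias=True)           # back to model input size

    # Device/preprocessing: lower resolution and compression-like smoothing.
    img = TF.gaussian_blur(img, kernel_size=3)
    return img

def eot_loss(embedder, adv_face, target_embedding, samples=8):
    # Average the impersonation loss over several sampled conditions.
    loss = 0.0
    for _ in range(samples):
        emb = embedder(random_environment(adv_face))
        loss = loss + (1 - F.cosine_similarity(emb, target_embedding).mean())
    return loss / samples
```

Averaging the loss over many sampled conditions pushes the optimizer toward patterns that do not depend on any single lighting setup, camera, or compression path.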


Who can become a victim 

AI-driven facial recognition is a rapidly growing multibillion-dollar market. Even where surveillance use of facial recognition is highly regulated, there are at least the following areas where its use is ethical and favourable from a monetary perspective.

  1. Face authentication (banks, payment platforms, trading)
  2. Face payment (ATM, retail, smart cities)
  3. Physical security (hotels, offices, airports)


How to protect against such an attack

Unfortunately, there is no one-size-fits-all protection against such attacks, due to fundamental issues in deep learning algorithms. It’s a complex problem that requires multiple actions, each of which can reduce the risk. AI systems also do not become protected overnight: you have to grow and train them with security in mind, just as we teach our kids to stay healthy.

The main steps to perform 

  1. Start with adversarial testing, or AI Red Teaming. It’s the AI analog of traditional penetration testing, an absolutely necessary step for any AI system, much like hand washing to prevent viruses. You can download recommendations on performing AI Red Teaming in the form below.
  2. Develop and train AI with adversarial scenarios in mind, using adversarial training, model hardening, data cleaning, and other defensive methods, like having kids play sports to stay healthy (a simplified sketch of adversarial training follows this list). You can subscribe to our newsletter.
  3. Learn about and detect new threats to AI in critical decision-making applications by constantly following the latest developments in adversarial machine learning, like training kids in advanced skills if they plan to take up extreme sports. You can find the latest news here.
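As an illustration of the adversarial training mentioned in step 2, here is a minimal PGD-based sketch. It assumes a generic PyTorch classifier, data loader, and optimizer; the epsilon and step-size values are only illustrative, not recommendations for a production pipeline.

```python
# Minimal sketch of adversarial training: for each batch, craft PGD
# perturbations against the current model and train on them so the model
# learns to resist small adversarial changes.
# `model`, `loader`, and `optimizer` are assumed standard PyTorch objects.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - epsilon), x + epsilon), 0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer):
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        loss = F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Adversarial training is only one layer of defense and should be combined with the testing and monitoring steps above.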


How we can help 

In the wake of interest in practical solutions for securing AI systems against advanced attacks, we developed our own technologies for testing facial recognition systems for such attacks. We are looking for early adopters and forward-thinking technology companies to partner with us on implementing adversarial testing capabilities in your SDLC and MLLC, increasing trust in your AI applications, and providing your customers with best-of-breed solutions.


Get the details

Download the slides

More information about the physical attack against facial recognition applications is available in the presentation.