Hack facial recognition AI at the annual MLSec competition and win the prize!

Company News, August 12, 2022

Background

Four months ago, Adversa presented the world's first Fool AI Art Exhibition to the public. We tricked an AI-driven face recognition system 100 times, released the results as the world's first piece of digital art that AI and humans see differently, and preserved it as an NFT.

Now we have gone one step further and prepared a dedicated track in the annual machine learning evasion competition, where you can try to perform such attacks yourself, or maybe even better ones. The annual Machine Learning Security Evasion Competition has just started and will run until the 23rd of September, and you can win up to $9,000.

Here we are sharing some background information about the competition so you can be better prepared.

And last but not least, in our blog you will find more details on how to perform such attacks, so subscribe to be among the first to access the unique tips and tricks that will help you win this competition.

Problem

Facial recognition identifies human faces using unique mathematical and dynamic patterns, which makes it one of the safest and most effective biometric systems.

However, as with any technology, facial recognition has potential drawbacks, such as threats to privacy, violations of rights and personal freedoms, potential data theft, and other crimes. The new track has been added in response to the rapid adoption of biometrics and surveillance and the security threats that adoption brings.

“Biometric AI-based facial applications are among the most vulnerable, yet most widely deployed – as shown by Adversa’s unique knowledge base of all existing attack methods across AI-driven industries – together with Automotive, Finance, and Internet,” says Neelou. “Our cyber-physical AI Red Team at Adversa AI has never seen a non-vulnerable ML model, API service, or smart physical device for facial recognition – and it’s terrifying.”

During the Facial Recognition track, contestants will get a dataset of facial imagery to modify so that the model recognizes them as another person. For each image that evades the model, contestants receive points if the modified face still looks identical to the original face. The highest-scoring contestant wins.

Story

An internet company wants to reinvent the experience for its website audience and let users sign in with their faces instead of passwords.

To implement this visionary idea, the company’s data scientists have built a model to recognize user faces for authentication.

The internet isn’t always safe, so their AI Red Team implemented some hardening techniques after adversarial testing.

Before the official model rollout, the internet company asked the AI and cybersecurity communities for help testing the model to see if it could be hacked.

AI Model & Dataset

The internet company has built an AI system with models that detect and recognize people’s identities.

Their system takes photos as input; if a face is detected, it is cropped and used for recognition.

If confidence in the target identity is high enough, the user is authenticated into their account.
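In code terms, that authentication flow might look roughly like the minimal sketch below. The `detect_and_crop` and `embed` callables and the 0.9 threshold are illustrative assumptions; the competition does not disclose the model internals.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(photo: np.ndarray, claimed_embedding: np.ndarray,
                 detect_and_crop, embed, threshold: float = 0.9) -> bool:
    """Hypothetical flow: detect and crop a face, embed it, and authenticate
    the user if similarity to the claimed identity is high enough."""
    face = detect_and_crop(photo)  # assumed to return None when no face is found
    if face is None:
        return False
    return cosine_similarity(embed(face), claimed_embedding) >= threshold
```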

The data for facial recognition evaluation includes photos of 10 selected website users.

Images are provided in PNG format, with filenames 0-9 representing user IDs.
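Assuming the files simply unpack as 0.png through 9.png, loading the evaluation set could look like this (the directory name is a placeholder):

```python
from pathlib import Path
import numpy as np
from PIL import Image

# Placeholder location for the provided evaluation images (0.png .. 9.png).
DATA_DIR = Path("facial_recognition_data")

# Map each user ID (0-9) to its photo as an RGB float array scaled to [0, 1].
users = {
    user_id: np.asarray(Image.open(DATA_DIR / f"{user_id}.png").convert("RGB"),
                        dtype=np.float32) / 255.0
    for user_id in range(10)
}
```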

Challenge

Contest participants compete in crafting efficient evasion and impersonation attacks against the facial recognition model.

The goal is to modify user photos to evade the original identity and impersonate a target identity while preserving image quality.
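As a rough, deliberately naive illustration (not a competitive solution), one starting point is to blend a little of the target's photo into the source photo and stop before structural similarity to the original drops below the 0.5 stealthiness floor. Everything in this sketch is an assumption for illustration; real submissions would rely on proper adversarial-example techniques and the contest's own scoring API.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def naive_blend(source: np.ndarray, target: np.ndarray, step: float = 0.05) -> np.ndarray:
    """Naive baseline: blend the target face into the source face as strongly as
    possible while structural similarity to the source stays at or above 0.5
    (the stealthiness floor). Inputs are assumed to be same-shape RGB arrays
    scaled to [0, 1]. Whether the result actually impersonates the target is
    decided by the face recognition model, not by this function."""
    best, alpha = source.copy(), step
    while alpha <= 1.0:
        candidate = (1.0 - alpha) * source + alpha * target
        if ssim(source, candidate, channel_axis=-1, data_range=1.0) < 0.5:
            break  # any stronger blend would fail the stealthiness requirement
        best, alpha = candidate, alpha + step
    return best
```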

Winners are determined by the best metrics across all 90 submitted attack pairs (each of the 10 user photos paired with the 9 other identities).

The main competition metric is Confidence, with Stealthiness used as the tie-breaker.

Confidence. Range (0, 1), higher is better. Shows how similar the adversarial image is to the target class.

Stealthiness. Range (0, 1), higher is better. Shows the structural similarity of the adversarial image to the source image.

Success. Range (True, False), True is good. Shows whether the adversarial image is successfully recognized as the target class AND preserves the original image quality, defined as stealthiness >= 0.5.
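Put together, a rough local approximation of the scoring could look like the sketch below. `model_confidence(image, target_id)` stands in for the contest's scoring endpoint, SSIM stands in for the stealthiness measure, and the recognition threshold is an assumption, since the actual decision belongs to the contest model.

```python
from skimage.metrics import structural_similarity as ssim

def score_attempt(source, adversarial, target_id, model_confidence,
                  decision_threshold: float = 0.5):
    """Approximate the three reported metrics for one source/target attack pair.
    `model_confidence` is a hypothetical callable returning the model's
    confidence (0..1) that `adversarial` is the target identity."""
    confidence = model_confidence(adversarial, target_id)    # main metric
    stealthiness = ssim(source, adversarial,
                        channel_axis=-1, data_range=1.0)     # tie-breaker
    success = confidence >= decision_threshold and stealthiness >= 0.5
    return confidence, stealthiness, success
```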

Outcomes

Here we have briefly explained how and why this competition was created. Now it's your turn to win up to $9,000 and demonstrate your competence at the most prestigious AI hacking competition.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
