Best of Adversarial ML Week 29 – Perceptibility of black-box adversarial attacks on face recognition

Adversarial ML · July 28, 2021


Each week, the Adversa team prepares for you a selection of the best research in the field of artificial intelligence security.


Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition

Today, a huge number of images of human faces is stored on the Internet, especially on social networks. At the same time, modern face recognition (FR) systems are able to match these images to names and identities, which creates real privacy problems.

However, adversarial attacks can come to the aid of users by preventing FR systems from recognizing them.

In recent research, Benjamin Spetter-Goldstein, Nataniel Ruiz, and Sarah Adel Bargal assessed the effectiveness of available black-box attacks on face recognition and investigated their human perceptibility based on survey data. The researchers highlight the trade-offs in perceptibility that arise as attacks become more sophisticated. The paper also demonstrates that the commonly applied metrics, including the ℓ2 norm, do not correlate linearly with human perceptibility.

All of the above leads to the conclusion that the community needs new metrics that correlate better with human perception when judging the quality of adversarial examples.
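For context, the ℓ2 norm referenced above simply measures the Euclidean distance between the original and the adversarial image, so two perturbations with the same ℓ2 score can look very different to a human eye. A minimal sketch of how such a perturbation score is typically computed (illustrative NumPy code, not taken from the paper):

```python
import numpy as np

def l2_perturbation(original: np.ndarray, adversarial: np.ndarray) -> float:
    """Euclidean (l2) distance between an original and an adversarial image.

    Both images are expected as float arrays of identical shape,
    e.g. (H, W, 3) with pixel values in [0, 1].
    """
    delta = adversarial.astype(np.float64) - original.astype(np.float64)
    return float(np.linalg.norm(delta.ravel()))

# Example: a random face-sized image plus a small random perturbation.
rng = np.random.default_rng(0)
img = rng.random((112, 112, 3))
adv = np.clip(img + rng.normal(scale=0.01, size=img.shape), 0.0, 1.0)
print(l2_perturbation(img, adv))
```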

A Systematical Solution for Face De-identification

It is no secret that people care greatly about protecting the privacy of their face data, since the security of their personal data in general depends on it. The requirements for face de-identification (De-ID) vary across tasks, and for that reason Songlin Yang, Wei Wang, Yuehua Cheng, and Jing Dong propose a systematic solution compatible with a range of De-ID operations.

First, an attribute disentanglement and generative network is built to encode two parts of the face: identity features and expression features. With face swapping, the original identity can then be completely removed. The researchers also introduce an adversarial vector mapping network that changes the latent code of the face image. This makes it possible to construct unrestricted adversarial images and decrease the identity similarity recognized by a model.
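To give a rough intuition of the latent-code idea, the sketch below imagines taking gradient steps on a latent vector so that the generated face's identity embedding drifts away from that of the original image. This is only an illustrative PyTorch sketch with hypothetical `generator` and `id_encoder` interfaces, not the authors' actual networks or training procedure.

```python
import torch
import torch.nn.functional as F

def deidentify_latent(generator, id_encoder, z, original_img, steps=50, lr=0.01):
    """Hypothetical sketch: nudge a latent code z so that the generated face's
    identity embedding moves away from the original image's embedding.

    `generator` maps a latent code to an image; `id_encoder` maps an image to an
    identity embedding. Both are placeholders, not the paper's networks.
    """
    with torch.no_grad():
        target_id = F.normalize(id_encoder(original_img), dim=-1)

    z = z.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        fake = generator(z)
        fake_id = F.normalize(id_encoder(fake), dim=-1)
        # Minimizing the cosine similarity pushes the new identity away
        # from the original one.
        loss = F.cosine_similarity(fake_id, target_id, dim=-1).mean()
        loss.backward()
        optimizer.step()

    return z.detach()
```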

EvilModel: Hiding Malware Inside of Neural Network Models

Installing malware covertly while evading detection has become a hot issue for advanced malware campaigns.

Zhi Wang, Chaoge Liu, and Xiang Cui have demonstrated a method of delivering malware covertly, and without triggering detection, inside neural network models. Neural networks are known to generalize well while remaining poorly explainable. By embedding malware into neurons, it can be hidden almost invisibly, with virtually no impact on the network's performance, and can pass antivirus security scans. In their experiment, 36.9 MB of malware could be embedded in a 178 MB AlexNet model with about 1% loss of accuracy and without arousing any suspicion from the antivirus engines on VirusTotal.
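For intuition, the underlying trick of hiding arbitrary bytes inside floating-point weights can be sketched as follows. This is a simplified illustration, not the exact EvilModel procedure: it overwrites the three low-order bytes of each 32-bit weight, keeps the most significant byte so each value stays in roughly the same range, and recovers the bytes later.

```python
import struct
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes, bytes_per_weight: int = 3) -> np.ndarray:
    """Hide `payload` in the low-order bytes of float32 weights (simplified sketch).

    Each weight keeps its most significant byte (sign and most of the exponent),
    so its rough scale is preserved, though individual values can still shift.
    """
    out = weights.astype(np.float32).ravel()
    needed = (len(payload) + bytes_per_weight - 1) // bytes_per_weight
    if needed > out.size:
        raise ValueError("payload too large for this weight tensor")
    for i in range(needed):
        chunk = payload[i * bytes_per_weight:(i + 1) * bytes_per_weight]
        chunk = chunk.ljust(bytes_per_weight, b"\x00")
        raw = bytearray(struct.pack("<f", float(out[i])))
        raw[:bytes_per_weight] = chunk          # little-endian: low-order bytes first
        out[i] = struct.unpack("<f", bytes(raw))[0]
    return out.reshape(weights.shape)

def extract_bytes(weights: np.ndarray, length: int, bytes_per_weight: int = 3) -> bytes:
    """Recover the hidden bytes from the modified weights."""
    flat = weights.astype(np.float32).ravel()
    data = bytearray()
    for i in range((length + bytes_per_weight - 1) // bytes_per_weight):
        data += struct.pack("<f", float(flat[i]))[:bytes_per_weight]
    return bytes(data[:length])

# Round-trip check on random weights with an example payload.
w = np.random.randn(1000).astype(np.float32)
secret = b"example payload"
w2 = embed_bytes(w, secret)
assert extract_bytes(w2, len(secret)) == secret
```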

The researchers hope that this work can serve as a reference scenario for defending against neural network-assisted attacks.

 
