Best of Adversarial ML Week 33 – Adversarial Attack to DNNs by dropping information

Adversarial ML · admin · August 26, 2021


Every week, the Adversa team prepares a selection of the best research in the field of artificial intelligence security.


AdvDrop: Adversarial Attack to DNNs by Dropping Information

While the human eye can recognize objects without fine detail, for example in cartoons, this task is still a problem for DNNs.

Ranjie Duan, Yuefeng Chen, Dantong Niu, Yun Yang, A. K. Qin, and Yuan He decided to investigate this issue from an adversarial point of view and checked how much the performance of DNNs degrades when a small amount of information is dropped from an image. The new method is essentially an improved evasion attack: because it removes information instead of adding perturbation, the resulting adversarial images contain fewer noises and artifacts than those produced by PGD and other state-of-the-art attacks. The researchers demonstrate that this new type of adversarial example is harder for current defense systems to defend against.
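To make the idea of "dropping information" concrete, here is a minimal sketch in Python that discards image detail by quantizing block-wise DCT coefficients. The actual AdvDrop attack optimizes the quantization table against a target DNN; the fixed quantization step, block size, and random test image below are illustrative assumptions only.

```python
# Minimal sketch of the "information dropping" idea behind AdvDrop:
# quantize block-wise DCT coefficients of an image so that fine detail
# is discarded. The real attack *optimizes* the quantization table to
# fool a target DNN; here the table is a fixed step size, purely for
# illustration, and the input is random data.
import numpy as np
from scipy.fft import dctn, idctn

def drop_information(image: np.ndarray, q_step: float = 20.0, block: int = 8) -> np.ndarray:
    """Quantize 8x8 DCT blocks of a grayscale image with a uniform step."""
    h, w = image.shape
    out = image.astype(np.float64).copy()
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            patch = image[y:y + block, x:x + block].astype(np.float64)
            coeffs = dctn(patch, norm="ortho")
            # Dropping information = rounding coefficients to a coarse grid.
            coeffs = np.round(coeffs / q_step) * q_step
            out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
    return np.clip(out, 0, 255)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (224, 224)).astype(np.float64)
    dropped = drop_information(img, q_step=30.0)
    print("mean absolute change:", np.abs(img - dropped).mean())
```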

PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier

The adversarial patch attack against image classification models is based on introducing hostile pixels into a specific area of the image, which causes the model to misclassify it. A similar attack is also possible in the real world by printing a special image patch.

Researchers Chong Xiang, Saeed Mahloujifar, and Prateek Mittal came up with PatchCleanser as a method of protecting image classifiers. The method performs two rounds of pixel masking on the input image, which neutralizes the effect of the adversarial patch. The new defense has been evaluated on the ImageNet, ImageNette, CIFAR-10, CIFAR-100, SVHN, and Flowers-102 datasets and has been found to be highly effective.
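A simplified sketch of the two-round masking idea is shown below. The `classify` callable, the toy mask generator, and the fallback to the first-round majority are stand-ins (the paper's certification logic and mask-set construction are omitted), so this should be read as an illustration of the idea rather than the authors' implementation.

```python
# Simplified sketch of double-masking inference in the spirit of PatchCleanser.
# `classify` is any image classifier returning an integer label; the mask set
# and fallback logic are simplified relative to the paper.
import numpy as np
from typing import Callable, List

def make_masks(h: int, w: int, mask: int = 64, stride: int = 64) -> List[np.ndarray]:
    """Toy mask set: boolean arrays that each blank out one square region."""
    masks = []
    for y in range(0, h - mask + 1, stride):
        for x in range(0, w - mask + 1, stride):
            m = np.ones((h, w), dtype=bool)
            m[y:y + mask, x:x + mask] = False   # False = masked-out pixels
            masks.append(m)
    return masks

def double_masking(image: np.ndarray,
                   classify: Callable[[np.ndarray], int],
                   masks: List[np.ndarray]) -> int:
    """Two rounds of pixel masking; returns a robust prediction."""
    def apply(img, m):
        out = img.copy()
        out[~m] = 0                              # overwrite masked pixels
        return out

    first = [classify(apply(image, m)) for m in masks]
    majority = max(set(first), key=first.count)
    if all(p == majority for p in first):        # round 1: full agreement
        return majority
    for m1, p1 in zip(masks, first):             # round 2: re-check disagreers
        if p1 == majority:
            continue
        second = [classify(apply(apply(image, m1), m2)) for m2 in masks]
        if all(p == p1 for p in second):
            return p1
    return majority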

Adversarial Relighting against Face Recognition

Although Deep Face Recognition (FR) has achieved significantly high accuracy, in the real world such systems often struggle with illumination, which is considered their main weakness. For this reason, researchers Ruijun Gao, Qing Gao, Qian Zhang, Felix Juefei-Xu, Hongkai Yu, and Wei Feng decided to look at the threat of lighting against FR from an adversarial point of view and to focus on a new task, i.e., adversarial relighting.

The adversarial relighting attack takes a face image and produces a naturally relighted counterpart that fools modern deep FR techniques. The attack was tested against three modern deep FR methods, namely FaceNet, ArcFace, and CosFace, on two public datasets.
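The sketch below illustrates the general idea of relighting as an attack in a deliberately simplified form: a smooth planar brightness ramp is optimized to push the relighted image away from its original embedding. The toy embedder and the additive lighting model are assumptions for illustration only; the paper uses a physically grounded relighting model and real FR networks such as those listed above.

```python
# Highly simplified sketch of an adversarial relighting attack: a smooth,
# low-dimensional lighting adjustment (planar brightness ramp) is optimized
# to push the relighted face away from its original embedding. The toy
# embedder and the planar light model are illustrative assumptions, not the
# paper's relighting model or FR networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEmbedder(nn.Module):
    """Stand-in for a face recognition backbone (hypothetical)."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def adversarial_relight(face: torch.Tensor, embedder: nn.Module,
                        steps: int = 50, lr: float = 0.05) -> torch.Tensor:
    """Optimize a planar lighting ramp a*x + b*y + c added to the image."""
    _, _, h, w = face.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    params = torch.zeros(3, requires_grad=True)          # [a, b, c]
    target = embedder(face).detach()
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        light = params[0] * xs + params[1] * ys + params[2]
        relit = (face + light).clamp(0, 1)
        # Minimize similarity to the original embedding.
        loss = F.cosine_similarity(embedder(relit), target).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    light = params[0] * xs + params[1] * ys + params[2]
    return (face + light).clamp(0, 1).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyEmbedder().eval()
    img = torch.rand(1, 3, 112, 112)
    adv = adversarial_relight(img, model)
    print("similarity after attack:",
          F.cosine_similarity(model(adv), model(img)).item())
```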
