Best of Adversarial ML Week 31 – Robust and invisible backdoor attack

Adversarial ML · August 12, 2021


Every week, the Adversa team curates a selection of the best research in the field of artificial intelligence security.


Poison Ink: Robust and Invisible Backdoor Attack

It is well established that deep neural networks are vulnerable to different types of attacks; among them, backdoor attacks are of the greatest interest to researchers, since they can be mounted at almost any stage of the ML pipeline. However, the main disadvantage of existing backdoor triggers is that they are either visible or can be defeated by light data pre-processing.

For this reason, the researchers Jie Zhang, Dongdong Chen, Jing Liao, and others propose “Poison Ink,” a robust and invisible backdoor attack.

The essence of the attack is that image structures are used as the target poisoning areas and are then filled with so-called poison ink, i.e., poison information, to create a trigger pattern that is robust to data transformations. A deep injection network is then used to embed this trigger pattern into the cover image. According to the researchers, the attack demonstrates higher efficiency and stealthiness than existing counterparts.
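To illustrate the idea, the sketch below marks image structures (edges) as the poisoning area and blends a fixed "ink" colour into them. The paper's deep injection network is replaced here by naive alpha blending, and all function and parameter names are illustrative, not taken from the authors' code.

```python
# Illustrative sketch only: mimics the high-level Poison Ink idea (use image
# structures as the poisoning area and fill them with a fixed "ink" colour),
# but replaces the deep injection network with naive alpha blending.
import cv2
import numpy as np

def poison_ink_trigger(image: np.ndarray,
                       ink_color=(8, 255, 16),
                       alpha: float = 0.02) -> np.ndarray:
    """Embed a structure-aligned trigger into `image` (H x W x 3, uint8, BGR)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                 # poisoning area: edge structures
    mask = (edges > 0)[..., None].astype(np.float32)  # H x W x 1 edge mask

    ink = np.zeros_like(image, dtype=np.float32)
    ink[...] = ink_color                              # the "poison ink" pattern

    # Blend the ink into the image only along the edge structures,
    # with a small alpha so the trigger stays (nearly) invisible.
    poisoned = image.astype(np.float32) * (1 - alpha * mask) + ink * (alpha * mask)
    return np.clip(poisoned, 0, 255).astype(np.uint8)
```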

On the Robustness of Domain Adaption to Adversarial Attacks

Modern deep neural networks achieve high performance under unsupervised domain adaptation (UDA), but their performance drops significantly when they are attacked with adversarial samples. Since the large body of work on adversarial attacks has paid little attention to the robustness of unsupervised domain adaptation models, the researchers Liyuan Zhang, Yuhang Zhou, and Lei Zhang are the first to address the problem of the robustness of unsupervised domain adaptation against adversarial attacks.

The authors describe a cross-domain attack method based on pseudo labels and analyze the impact of different datasets, models, attack methods, and defense methods. They conclude that unsupervised domain adaptation models have limited robustness and that this issue deserves serious consideration in the professional community.
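The sketch below illustrates the general idea of attacking a domain adaptation model when no ground-truth target labels are available: the model's own predictions on target-domain samples serve as pseudo labels for a standard PGD attack. This is a generic illustration assuming a PyTorch classifier, not the authors' exact algorithm.

```python
# Minimal sketch of a pseudo-label-based cross-domain attack, assuming a
# PyTorch UDA classifier `model`; a generic PGD variant for illustration only.
import torch
import torch.nn.functional as F

def pseudo_label_pgd(model, x_target, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial target-domain samples without ground-truth labels."""
    model.eval()
    with torch.no_grad():
        pseudo_y = model(x_target).argmax(dim=1)   # pseudo labels from the model itself

    x_adv = x_target.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), pseudo_y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss w.r.t. the pseudo labels, then project back
        # into the eps-ball around the clean target-domain input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_target - eps), x_target + eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```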

AdvRush: Searching for Adversarially Robust Neural Architectures

Despite their high performance, deep neural networks can suffer greatly from adversarial examples that are imperceptible to the human eye. The main effort to combat this problem has been to develop more robust training methods.

In this paper, researchers Jisoo Mok, Byunggook Na, Hyeokjun Choe, and Sungroh Yoon address the issue of designing a neural architecture with high intrinsic adversarial robustness by proposing AdvRush, an adversarial-robustness-aware neural architecture search algorithm. The algorithm uses a regularizer that favors candidate architectures with a smoother input loss landscape; thanks to this, AdvRush is able to discover adversarially robust neural architectures.
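To make the intuition concrete, the sketch below shows one possible way to score the smoothness of the input loss landscape with a finite-difference curvature estimate. The exact formulation and weighting are illustrative assumptions, not the regularizer used in AdvRush.

```python
# Rough sketch of an input-loss-landscape smoothness penalty, assuming a
# differentiable PyTorch (super)network `model`; illustrative only.
import torch
import torch.nn.functional as F

def smoothness_penalty(model, x, y, h=1e-2):
    """Penalize curvature of the loss w.r.t. the input around clean samples."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]

    # Perturb along the gradient direction and measure how fast the input
    # gradient changes: a large change means a sharp (non-robust) landscape.
    d = grad.detach().sign()
    x_pert = (x.detach() + h * d).requires_grad_(True)
    loss_pert = F.cross_entropy(model(x_pert), y)
    grad_pert = torch.autograd.grad(loss_pert, x_pert, create_graph=True)[0]

    return ((grad_pert - grad) ** 2).sum() / h

# During architecture search, such a penalty would be added to the ordinary
# classification loss, e.g. total = ce_loss + lam * smoothness_penalty(...).
```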

 
