Best of Adversarial ML Week 28 – Adversarial attacks on autonomous driving visual perception

Adversarial ML · July 21, 2021

Background

Every week, the Adversa team selects for you the best research in the field of artificial intelligence security.


Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving

Over the past few years, deep neural networks (DNNs) have demonstrated impressive results in a variety of tasks, including autonomous driving perception. Despite this success, however, DNNs have also proved susceptible to attacks that can seriously compromise mission-critical applications, so it is no surprise that security and attack issues in such applications have attracted particular attention.

In this research, Ibrahim Sobh, Ahmed Hamed, Varun Ravi Kumar, and Senthil Yogamani apply detailed adversarial attacks to a multi-task visual perception deep network. The experiments cover motion detection, distance estimation, semantic segmentation, and object detection, with both targeted and untargeted cases under white-box and black-box attacks. The researchers also examine the effectiveness of simple defenses against the studied attacks and report their findings.
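To illustrate the general flavor of such attacks, here is a minimal sketch of an untargeted white-box, FGSM-style perturbation applied against a multi-task model. The model interface, task losses, loss weights, and epsilon below are illustrative assumptions, not the exact setup used in the paper.

```python
# Sketch of an untargeted white-box attack on a multi-task perception model.
# Assumes model(x) returns a dict mapping task name -> prediction (assumption).
import torch

def fgsm_multitask(model, x, targets, task_losses, task_weights, eps=2/255):
    """Perturb input x to increase a weighted sum of per-task losses."""
    x_adv = x.clone().detach().requires_grad_(True)
    outputs = model(x_adv)
    loss = sum(
        w * task_losses[task](outputs[task], targets[task])
        for task, w in task_weights.items()
    )
    loss.backward()
    # One-step gradient-sign perturbation, clipped to the valid image range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```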

Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting

Xiangyu Qi, Jifeng Zhu, Chulin Xie, and Yong Yang have released a study examining the realistic potential of backdoor attacks on deep neural networks (DNNs) at the deployment stage. Their goal was to develop a deployment-stage backdoor attack algorithm that is both dangerous and practical to carry out in reality.

To this end, they propose the Subnet Replacement Attack (SRA). The attack plants a backdoor in DNNs by altering only a limited number of model parameters. It follows a gray-box scenario in which the architecture of the targeted model is known but its parameter values are not. For any network instance of a given architecture, a backdoor can be planted by replacing a narrow subnet with a malicious backdoor subnet designed to produce a large activation value in response to a specific backdoor trigger pattern.
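The following sketch conveys the subnet-replacement idea on a stack of convolutional layers: one channel per layer is overwritten with the weights of a separately trained one-channel backdoor subnet, and its connections to the clean network are cut so it forms an isolated chain. The one-channel choice, matching kernel sizes, and the omission of the special wiring needed at the first layer (which reads the raw image) and the last layer (which feeds the target logit) are simplifying assumptions, not the authors' exact procedure.

```python
# Sketch of subnet replacement on paired lists of Conv2d layers
# (target_convs: clean model layers; backdoor_convs: 1-in/1-out backdoor layers).
import torch

@torch.no_grad()
def replace_subnet(target_convs, backdoor_convs, channel=0):
    """Overwrite one channel per layer and cut its links to the clean network."""
    for clean, bad in zip(target_convs, backdoor_convs):
        # Cut connections between the chosen channel and the clean channels.
        clean.weight[channel, :, :, :] = 0.0
        clean.weight[:, channel, :, :] = 0.0
        # Wire in the backdoor subnet so the channel only reads itself.
        clean.weight[channel, channel, :, :] = bad.weight[0, 0, :, :]
        if clean.bias is not None and bad.bias is not None:
            clean.bias[channel] = bad.bias[0]
```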

EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks

Recent work has shown how easily white-box adversarial attacks can be applied to image classifiers. However, real-life scenarios more closely resemble black-box adversarial conditions, which lack transparency and typically impose hard constraints on the query budget.

Andrei Ilie, Marius Popescu, and Alin Stefanescu propose EvoBA, a black-box adversarial attack based on a very simple evolutionary search strategy. The method is query-efficient, minimizes L0 adversarial perturbations, and requires no training. According to the researchers, EvoBA can be used to assess the empirical robustness of image classifiers and can easily be integrated into image classifier development pipelines.
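Below is a minimal sketch of an evolution-strategy black-box attack in the spirit of EvoBA: a few pixels are mutated at a time, and the candidate that most lowers the classifier's confidence in the true class is kept. The `predict` function (returning class probabilities for an HWC image in [0, 1]) and all hyperparameters are assumptions for illustration, not the paper's algorithm.

```python
# Evolution-strategy style black-box attack with L0-limited pixel mutations.
import numpy as np

def evo_pixel_attack(predict, x, true_label, generations=200,
                     children=8, pixels_per_child=2, rng=None):
    rng = rng or np.random.default_rng(0)
    best = x.copy()
    best_score = predict(best)[true_label]
    for _ in range(generations):
        for _ in range(children):
            child = best.copy()
            h, w, c = child.shape
            for _ in range(pixels_per_child):
                # L0-style mutation: overwrite one pixel with a random value.
                i, j = rng.integers(h), rng.integers(w)
                child[i, j] = rng.random(c)
            score = predict(child)[true_label]   # one model query per child
            if score < best_score:
                best, best_score = child, score
        if np.argmax(predict(best)) != true_label:
            break                                # misclassification achieved
    return best
```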
