Modified Simple black-box attack affects traffic scene perception

Adversarial ML · January 31, 2021


In some areas of human activity, such as health care or autonomous transport, the security of neural networks is critical, so research in this area is of great importance. Here is a selection of the most interesting papers from January 2021.


Black-box Adversarial Attacks in Autonomous Vehicle Technology

It is no secret that deep neural networks are used in many spheres of human activity and remain highly vulnerable to adversarial attacks. Their safety is especially critical in areas such as autonomous vehicles, where the physical safety of a person depends directly on the security of the system. The consequences of an attack can be severe: as a result of a misclassification, a car could, for example, misread a road sign and cause a road accident.

In this work, the researchers propose a new query-based attack method called Modified Simple Black-box Attack (M-SimBA), a multi-gradient black-box adversarial attack aimed at degrading traffic scene perception. Compared with previous attack methods, the new method generates successful mis-predictions faster and with a higher probability of causing the target model to fail. The work may be of interest both to users of autonomous transport and to researchers studying these security issues further.
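For intuition, here is a minimal sketch of the SimBA-style query-based attack that M-SimBA builds on: one randomly chosen pixel at a time is nudged by ±eps, and the change is kept whenever the black-box model's confidence in the true class drops. The `predict_proba` interface, step size, and query budget are assumptions for illustration; the multi-gradient modifications described in the paper are not reproduced here.

```python
import numpy as np

def simba_style_attack(predict_proba, x, true_label, eps=0.2, max_queries=1000, seed=0):
    """Minimal sketch of a SimBA-style query-based black-box attack.

    predict_proba(image) is assumed to return a 1-D array of class
    probabilities (black-box access only).  One randomly chosen pixel at a
    time is nudged by +/- eps; a perturbation is kept whenever it lowers
    the probability of the true class.
    """
    rng = np.random.default_rng(seed)
    x_adv = x.astype(np.float32).copy()
    probs = predict_proba(x_adv)
    p_true = probs[true_label]
    queries = 1
    for idx in rng.permutation(x_adv.size):
        if queries >= max_queries or np.argmax(probs) != true_label:
            break  # out of query budget, or misclassification already achieved
        direction = np.zeros(x_adv.size, dtype=np.float32)
        direction[idx] = eps
        direction = direction.reshape(x_adv.shape)
        for step in (direction, -direction):
            candidate = np.clip(x_adv + step, 0.0, 1.0)
            cand_probs = predict_proba(candidate)
            queries += 1
            if cand_probs[true_label] < p_true:
                # Keep the perturbation: the true-class confidence went down.
                x_adv, probs, p_true = candidate, cand_probs, cand_probs[true_label]
                break
    return x_adv, queries
```

In this setting, the attack succeeds if the model's prediction flips away from the true class within the query budget.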


DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection

Deep learning models are increasingly embedded in mobile applications. Unfortunately, the security of the neural network components of such applications remains questionable, as methods of attacking neural networks in mobile applications have been little studied.

In this paper, the researchers propose a backdoor attack carried out with a set of reverse-engineering techniques; its advantage is that it has proven effective in real-world conditions. The attack is based on a neural conditional branch, consisting of a trigger detector and a number of operators, that is injected into the target model as a malicious payload. The conditional logic is customizable and scalable, and no prior knowledge of the victim model is needed. The attack was evaluated on 5 state-of-the-art deep learning models and real-world samples collected from 30 users. The attack success rate was 93.5%, and at least 54 popular applications turned out to be vulnerable to the new attack.
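To make the idea of a neural conditional branch concrete, below is a toy sketch (not the authors' implementation) of how a trigger detector and a conditional branch could wrap a benign model. The names `original_model`, `trigger_detector`, and `target_probs` are assumed interfaces; the real DeepPayload attack injects this logic directly into the compiled model file via reverse engineering rather than wrapping it in Python.

```python
import numpy as np

def inject_conditional_branch(original_model, trigger_detector, target_probs, threshold=0.5):
    """Illustrative sketch of a DeepPayload-style neural conditional branch.

    original_model(x)   -> class probabilities of the benign model
    trigger_detector(x) -> score in [0, 1] indicating the attacker's trigger
    target_probs        -> output the attacker wants to force when triggered
    (all three are assumed interfaces for this sketch)
    """
    target = np.asarray(target_probs, dtype=np.float32)

    def infected_model(x):
        # Conditional branch injected as the malicious payload: if the
        # trigger is detected in the input, return the attacker-chosen
        # output; otherwise behave exactly like the original model.
        if trigger_detector(x) > threshold:
            return target
        return original_model(x)

    return infected_model
```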


Adv-OLM: Generating Textual Adversaries via OLM

While deep learning models have been widely adopted across different domains, including NLP tasks, their vulnerability to adversarial examples has made the study of potential adversarial attacks an urgent issue. Still, because text data is discrete, performing adversarial attacks on it is quite difficult: the perturbed sample has to remain semantically sound and grammatically correct so that the perturbations introduced by the attack do not arouse suspicion in a human reader.

Therefore, the researchers present Adv-OLM, a black-box attack method that applies the idea of Occlusion and Language Models (OLM) to existing attack strategies. With Adv-OLM, the words of a sentence are first ranked by importance and then replaced according to word-replacement strategies. The experiments demonstrate that, compared to previous attack methods, Adv-OLM achieves a higher success rate and a lower perturbation percentage.
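As a rough illustration of the ranking-then-replacement pipeline, the sketch below scores each word by how much occluding it lowers the classifier's confidence in the true label, then greedily substitutes the highest-ranked words. `predict_proba` and `candidates_for` are assumed helpers, and the single-mask occlusion used here is a simplification of OLM, which samples replacements from a language model.

```python
import numpy as np

def occlusion_rank(predict_proba, words, true_label, mask_token="[UNK]"):
    """Rank word positions by how much occluding each one lowers the
    classifier's confidence in the true label (a simplified stand-in for
    the OLM relevance scores used by Adv-OLM)."""
    base = predict_proba(" ".join(words))[true_label]
    scores = []
    for i in range(len(words)):
        occluded = words[:i] + [mask_token] + words[i + 1:]
        drop = base - predict_proba(" ".join(occluded))[true_label]
        scores.append((drop, i))
    return [i for _, i in sorted(scores, reverse=True)]

def greedy_word_replace(predict_proba, sentence, true_label, candidates_for):
    """Replace the highest-ranked words with candidate substitutions until
    the prediction flips.  candidates_for(word) is an assumed helper that
    returns, e.g., synonyms or language-model suggestions."""
    words = sentence.split()
    for i in occlusion_rank(predict_proba, words, true_label):
        best, best_prob = words[i], predict_proba(" ".join(words))[true_label]
        for cand in candidates_for(words[i]):
            trial = words[:i] + [cand] + words[i + 1:]
            p = predict_proba(" ".join(trial))[true_label]
            if p < best_prob:
                best, best_prob = cand, p
        words[i] = best
        if np.argmax(predict_proba(" ".join(words))) != true_label:
            break  # prediction flipped: adversarial example found
    return " ".join(words)
```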
