Attacking object detection models with dynamic patches

Adversarial ML | admin | October 31, 2020


Although deep neural networks have become an integral part of safety-critical applications, they remain highly susceptible to adversarial attacks. We have compiled a selection of the most interesting studies from October 2020.


Dynamic Adversarial Patch for Evading Object Detection Models

Deep neural networks (DNNs) are widespread today. In computer vision, DNNs are used for object detection in applications such as self-driving cars, security cameras, and similar systems. It has been demonstrated that such DNNs are vulnerable to adversarial attacks, including poisoning and evasion attacks. Researching these attacks is extremely important, since an attack on a DNN can have critical consequences.

Common adversarial attacks on object detection models add small perturbations to an image so that the system misclassifies it. However, this kind of attack has proven to be of limited effectiveness when the camera changes its physical position relative to the target, as well as in cases involving 3D objects.
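To make the idea of small image perturbations concrete, here is a minimal sketch of a gradient-based perturbation in the spirit of FGSM, written in PyTorch. The model, image, and label are placeholders, and the papers discussed here use more elaborate attacks, so treat this only as an illustration of the general principle.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.03):
    """Return an adversarially perturbed copy of `image`.

    A single-step, FGSM-style perturbation: nudge every pixel by `eps`
    in the direction that increases the classification loss.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient and keep pixels in a valid range.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier` is any differentiable image classifier,
# `x` is a batch of images in [0, 1], `y` the correct labels.
# x_adv = fgsm_perturb(classifier, x, y, eps=0.03)
```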

Researchers have recently demonstrated real-life adversarial attacks on non-planar objects that help 3D objects, such as humans, evade detection. The new attacks are more complex than earlier ones: they rely on dynamic adversarial patches that are generated by adversarial learning algorithms and placed in a number of predefined locations, and the attack works by switching between these patches at the right moment. In the study, the YOLOv2 object detector was attacked using a car with patches placed on it; as a result, the system misidentified the object in 90% of the footage filmed from different angles.
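The core mechanism is switching between pre-generated patches depending on the camera's viewpoint. The sketch below is a simplified, hypothetical controller: it assumes one patch per viewing-angle bin and a known camera angle; the paper's actual patch generation and placement logic is considerably more involved.

```python
import numpy as np

# Hypothetical setup: one adversarial patch per viewing-angle bin,
# each an (H, W, 3) array already optimized offline against the detector.
ANGLE_BINS = [(-180, -90), (-90, 0), (0, 90), (90, 180)]

def select_patch(patches, camera_angle_deg):
    """Pick the patch trained for the bin containing the current viewing angle."""
    for patch, (lo, hi) in zip(patches, ANGLE_BINS):
        if lo <= camera_angle_deg < hi:
            return patch
    return patches[-1]

def apply_patch(frame, patch, top_left):
    """Composite the chosen patch onto the frame (e.g. a screen on the car's side)."""
    y, x = top_left
    h, w, _ = patch.shape
    out = frame.copy()
    out[y:y + h, x:x + w] = patch
    return out

# At each time step the attacker estimates the camera angle (e.g. from the
# car's pose) and switches to the matching patch before the frame is captured.
```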


VENOMAVE: Clean-Label Poisoning Against Speech Recognition

The number of smart devices available to end users grows every year. For ease of use, many of them rely on Automatic Speech Recognition (ASR) systems. However, studies have shown that such systems are vulnerable to adversarial examples, that is, malicious audio inputs that manipulate the behavior of the device processing them. Until now, it has remained an open question whether such systems are also vulnerable to poisoning attacks, in which malicious inputs are placed into the training set during the training phase.

In this paper, the researchers demonstrate the first audio data poisoning attack, called VENOMAVE. Previously known poisoning attacks that work against image recognition systems do not transfer directly to audio. The main difficulty is that the attacker must poison a time series of inputs: a sequence of malicious audio frames has to be generated that simultaneously compromises the system and is ultimately decoded into the desired phrase. The researchers therefore frame the attack as several misclassification tasks, one per frame of the chosen audio, so that the system fails to recognize those frames correctly.
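VENOMAVE's actual optimization is not reproduced in this digest. The sketch below only illustrates the general shape of a frame-level clean-label poisoning loop in PyTorch, using a feature-collision objective as a stand-in; the acoustic model, its `features` helper, and the target frames are placeholder assumptions, not the paper's algorithm.

```python
import torch

def craft_poison_frames(acoustic_model, base_audio, target_frames,
                        steps=200, lr=1e-3, eps=0.01):
    """Clean-label poison crafting, sketched as a feature-collision loop.

    `base_audio` keeps its correct transcription (clean label); we only add a
    small perturbation so that, in the model's feature space, the poisoned
    frames resemble the attacker's `target_frames`. A victim model trained on
    such data is nudged toward decoding the attacker's phrase on the target.
    """
    delta = torch.zeros_like(base_audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        # Assumed helper returning frame-level features of matching shape.
        target_feat = acoustic_model.features(target_frames)
    for _ in range(steps):
        poison_feat = acoustic_model.features(base_audio + delta)
        loss = torch.nn.functional.mse_loss(poison_feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the label stays "clean".
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (base_audio + delta).detach()
```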


Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems

The health sector deals with huge amounts of data every year. Smart technologies play a major role in analyzing patient data and managing the treatment process; machine learning is often used to process disparate data in a Smart Healthcare System (SHS).

Many recent studies demonstrate the vulnerability of ML models across application areas; this study focuses on adversarial attacks aimed specifically at the ML classifiers inside an SHS. To mount the attack and build up adversarial capabilities, the attacker needs some knowledge of the data distribution, the SHS model, and the ML algorithm. With that knowledge, it becomes possible to tamper with medical device readings and manipulate patient status. To perform malicious actions against the SHS, the researchers applied several adversarial algorithms: HopSkipJump, Fast Gradient Method, Crafting Decision Tree, Carlini & Wagner, and Zeroth Order Optimization. As part of the research, the SHS was subjected to data poisoning, misclassification of outputs, and both white-box and black-box attacks. The study found that adversarial attacks can significantly affect how ML-based systems handle patient data, in particular diagnosis and subsequent treatment.
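As an illustration of how such attacks are typically run against a tabular SHS-style classifier, here is a short sketch using the open-source Adversarial Robustness Toolbox (ART), which implements HopSkipJump and the other algorithms listed above. The classifier and the vital-sign data are placeholders, and exact argument names may vary between ART versions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Placeholder "SHS" data: rows of vital-sign readings, labels = patient status.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)).astype(np.float32)
y = (X[:, 0] + X[:, 3] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50).fit(X, y)
classifier = SklearnClassifier(model=model)

# HopSkipJump is a black-box evasion attack: it only needs label queries,
# which matches an attacker who can probe the SHS but not inspect it.
attack = HopSkipJump(classifier, targeted=False, max_iter=20, max_eval=500)
x_adv = attack.generate(x=X[:5])

print("original predictions:   ", model.predict(X[:5]))
print("adversarial predictions:", model.predict(x_adv))
```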
