Hidden in Plain Sight: Attacks that Make Us Invisible

Adversarial ML | April 30, 2019


April was a month of fooling: researchers tricked visual classifiers and speech recognition systems with stickers, printouts, and imperceptible commands. Here is a rundown of the most impressive (and terrifying) ways researchers attacked ML models in April 2019.


Fooling automated surveillance cameras: adversarial patches to attack person detection

Belgian students Thys and Ranst set out to create an “adversarial patch” that would prevent algorithms from identifying humans in footage. To develop this adversarial example they worked with images of real people, which presented both a difficulty and an opportunity. On the one hand, people vary significantly in appearance, dimensions, and patterns of movement; in short, people have high intra-class variety. On the other hand, using real pictures meant that the researchers were not limited to annotated datasets, so they could use their own footage during development and test the result in real life.

In the end they came up with a 40 cm × 40 cm adversarial patch: a pattern printed on cardboard. The patch fools the CNN-based person detectors used, for example, in surveillance systems, lowering their effectiveness to 26%. This means that, in theory, all an intruder needs to walk in unnoticed is a printed T-shirt.
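Conceptually, such a patch is trained with ordinary gradient descent: render the patch onto images of people, run the model, and push the patch pixels in whatever direction suppresses the "person" evidence. The sketch below is a simplified, classifier-based analogue of that loop; the authors attack a YOLO-style person detector with a more elaborate loss and patch placement, so the model, target class, and paste location here are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in model: an (untrained) ResNet-18 image classifier. The authors
# instead attack a pretrained YOLO-style person detector; swap in any
# differentiable model that scores the presence of a person.
model = models.resnet18()
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

TARGET_CLASS = 0                                     # hypothetical "person" class index
patch = torch.rand(3, 64, 64, requires_grad=True)    # the printable patch
optimizer = torch.optim.Adam([patch], lr=0.03)

def paste_patch(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Paste the patch at a fixed location. The paper instead warps it onto
    each person with random scale, rotation, and brightness changes."""
    images = images.clone()
    images[:, :, 80:144, 80:144] = patch.clamp(0.0, 1.0)
    return images

for step in range(100):
    images = torch.rand(8, 3, 224, 224)              # stand-in for photos of people
    logits = model(paste_patch(images, patch))
    # Minimize the score of the target class wherever the patch appears,
    # i.e. train the patch to suppress "person" evidence.
    loss = F.softmax(logits, dim=1)[:, TARGET_CLASS].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```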


Adversarial camera stickers: A physical camera-based attack on deep learning systems 

Until then, known physical adversarial attacks had operated on the same principle: altering characteristics of the target object itself, for example by applying stickers to it. Li et al. approached physical adversarial attacks from a new angle. They crafted a clear sticker with several colorful dots that is applied to the camera lens rather than to the object. The sticker is produced with a regular store-bought printer and paper, and the perturbations look like blurs that are easily mistaken for dust on the lens. Yet the classifier reliably mislabels the target object regardless of the angle and scale of the footage.

In other words, the researchers proved that anyone can launch a successful, mostly imperceptible adversarial attack, and they wouldn't even need access to the object of interest.
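What makes the attack work is its image-formation model: a dot on the lens shows up as a small, translucent, blurry blob composited over every frame, regardless of the scene in front of the camera. Below is a toy NumPy simulation of that overlay; the blob shape, colours, and blending are my own simplifications for illustration, not the authors' optical model or optimization procedure.

```python
import numpy as np

def blob_mask(h: int, w: int, cx: float, cy: float, radius: float) -> np.ndarray:
    """Soft circular alpha mask: 1 at the dot centre, fading out like a blur."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return np.exp(-dist2 / (2.0 * radius ** 2))

def apply_lens_dots(image: np.ndarray, dots) -> np.ndarray:
    """Composite translucent coloured dots over an image.

    `dots` is a list of (cx, cy, radius, rgb, alpha) tuples. Because the dots
    sit on the lens, the same overlay lands on every frame, whatever the
    camera is pointed at.
    """
    h, w, _ = image.shape
    out = image.astype(np.float32).copy()
    for cx, cy, radius, rgb, alpha in dots:
        mask = (alpha * blob_mask(h, w, cx, cy, radius))[..., None]
        out = (1.0 - mask) * out + mask * np.asarray(rgb, dtype=np.float32)
    return np.clip(out, 0.0, 1.0)

# Example: a few faint dots. An attacker would search over positions and
# colours (limited to what a printer can produce) to flip the classifier.
frame = np.random.rand(224, 224, 3).astype(np.float32)
dots = [(60, 80, 18, (0.9, 0.3, 0.2), 0.25),
        (150, 120, 22, (0.2, 0.4, 0.9), 0.20)]
perturbed = apply_lens_dots(frame, dots)
```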


Robustness of 3D Deep Learning in an Adversarial Setting

As of April 2019, the vast majority of robustness analysis tools were not suited to 3D deep learning algorithms. Our inability to test these models held back the development of self-driving vehicles, among other things. Wicker and Kwiatkowska laid out the theory for future testing. Specifically, they explored vulnerability to occlusion attacks, which occur in the real world even without adversarial intervention, e.g. a bush blocking a pedestrian from view. The researchers crafted an algorithm that determines whether adversarial occlusion examples exist for a particular model, then generates and performs the attack. Analysing the outcomes, they made two opposing findings:

  • the fewer points of input the network needs for classification, the less likely it is that a critical point would be occluded;
  • the fewer critical points the network uses, the more susceptible it is to manipulation.

When it comes to practical application, the results of Wicker and Kwiatkowska's work are concerning: they were able to reduce a network's accuracy to 0% while manipulating only 6.5% of the 3D input, all in a black-box setting.
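To make the black-box setting concrete, here is a rough sketch of a greedy occlusion attack on a point-cloud classifier: the attacker only queries the model's output probabilities and removes the points whose removal hurts the true class the most, up to a 6.5% budget. The `predict` interface and the greedy candidate search are assumptions for illustration; Wicker and Kwiatkowska's actual procedure for finding critical points differs.

```python
import numpy as np

def occlusion_attack(points: np.ndarray, predict, true_label: int,
                     budget: float = 0.065, candidates: int = 64) -> np.ndarray:
    """Greedy black-box occlusion: repeatedly remove the candidate point whose
    removal lowers the true-class probability the most, until the label flips
    or a small fraction of the cloud (`budget`, here 6.5%) has been occluded.

    `predict(points) -> class-probability vector` is the only access to the
    model, i.e. a query-only black-box setting.
    """
    rng = np.random.default_rng(0)
    points = points.copy()
    max_removals = int(budget * len(points))
    for _ in range(max_removals):
        probs = predict(points)
        if probs.argmax() != true_label:
            break                                  # misclassification achieved
        idx = rng.choice(len(points), size=min(candidates, len(points)),
                         replace=False)
        # Score each candidate by how much its removal hurts the true class.
        drops = [probs[true_label] - predict(np.delete(points, i, axis=0))[true_label]
                 for i in idx]
        best = idx[int(np.argmax(drops))]
        points = np.delete(points, best, axis=0)   # occlude that point
    return points

# Toy usage with a stand-in "model"; a real attack would query a point-cloud
# classifier such as a PointNet-style network.
def dummy_predict(pts: np.ndarray) -> np.ndarray:
    p = float(pts[:, 0].mean())
    return np.array([p, 1.0 - p])

cloud = np.random.rand(1024, 3)
occluded = occlusion_attack(cloud, dummy_predict, true_label=0)
```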


Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems

“Your phone is listening to you” is a commonly used scare tactic. However, as Abdullah et al. prove, you are in real trouble when your phone starts listening to someone else.

The team of researchers from the University of Florida created an adversarial attack on Voice Processing Systems (VPSes). Attacks of this kind usually target a specific machine learning model; instead, they attacked the feature extraction stage of the signal processing pipeline. Since most speech recognition models rely on the same set of features, the resulting attack is model-agnostic, black-box, and works with most speakers and microphones.
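To see why attacking feature extraction generalizes so well, consider the front end most of these systems share: the waveform is framed, windowed, passed through an FFT and a mel filterbank, and reduced to MFCC-like coefficients before any model sees it. A minimal sketch of that shared stage, assuming librosa and a synthetic 16 kHz signal standing in for a recorded command:

```python
import numpy as np
import librosa

# Stand-in for a recorded voice command: one second of audio at 16 kHz.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

# The near-universal front end: framing, windowing, FFT, mel filterbank, DCT.
# Most speech and speaker recognition models consume features like these,
# which is why a perturbation that survives this stage transfers so broadly.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13, n_fft=512, hop_length=160)
print(mfcc.shape)  # (13, number_of_frames)
```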

Specifically, Abdullah et al. generated malicious voice commands that sound like background noise to humans, taking advantage of how hard it is for us to interpret speech in the presence of high-frequency noise.
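One of the perturbations used in this style of attack is time-domain inversion: the audio is split into short windows and each window is reversed in time. The magnitude spectrum of each window is unchanged, so the feature extractor still recovers something close to the intended command, while a human hears garbled noise. A rough sketch follows, with the window length chosen for illustration rather than taken from the paper.

```python
import numpy as np

def time_domain_invert(signal: np.ndarray, window: int = 64) -> np.ndarray:
    """Reverse the samples inside each short window of the waveform.

    Reversing a window does not change its FFT magnitude, so MFCC-style
    features computed over comparable frames stay close to those of the
    original command, while the audio becomes hard for a human to parse.
    """
    out = signal.copy()
    for start in range(0, len(signal), window):
        out[start:start + window] = signal[start:start + window][::-1]
    return out

# Demo on a synthetic tone; in a real attack the input would be a recording
# of the command the adversary wants the assistant to execute.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
command = 0.5 * np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
garbled = time_domain_invert(command, window=64)
```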

In practice, an adversary could order your Alexa to unlock a door, and you would be none the wiser.


Check out more of our digests in Adversa’s blog.  And tune in to our Twitter to keep up to date with new developments in AI Security.
