The Greater Good: using AI for public safety

Adversarial ML · November 30, 2019


Most uses for AI are not flashy or glamorous: there is rarely a Jarvis ready to prepare an iron suit. Instead, scientists are working on airport security screening, tamper-resistant crowd counting at protests, sensitive content flagging, and other mundane systems that keep millions of people safe. Maybe AI is a little glamorous after all, when you put it like that.

Let’s take a look at the most exciting research from November 2019.


Evaluating the Transferability and Adversarial Discrimination of Convolutional Neural Networks for Threat Object Detection and Classification within X-Ray Security Imagery 

X-ray security screening has been part of our travel routine for a while. Nowadays, scientists are looking for ways to make it faster and more precise with computer vision. Spotting and classifying prohibited items in luggage is challenging because of the image complexity (the screening produces color-mapped images that reflect both shapes and material properties) and a high potential for adversarial attacks, including objects that imitate the shapes of prohibited items. Gaus et al. evaluated the accuracy and transferability of several models for threat object detection. According to their research, Faster R-CNN achieves an average precision of 87% with a 5% false-positive rate. Compared to the reported 70% failure rate of human-operated security screening (Forbes, 2017), this promises safer travel.
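
To give a feel for the kind of detector evaluated here, below is a minimal sketch of running an off-the-shelf Faster R-CNN from torchvision on a single image. This is not the authors' X-ray model: they fine-tune on baggage-screening data with threat classes, whereas this example uses generic COCO weights, a hypothetical file name "scan.png", and an arbitrary 0.5 confidence threshold.

```python
# Illustrative only: a generic COCO-pretrained Faster R-CNN, not the
# authors' X-ray-specific model fine-tuned on baggage screening data.
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("scan.png").convert("RGB")  # hypothetical input scan
x = F.to_tensor(image)

with torch.no_grad():
    predictions = model([x])[0]  # dict with "boxes", "labels", "scores"

# Keep only confident detections; in the paper's setting the classes would be
# threat categories (e.g. firearms, knives) rather than COCO labels.
keep = predictions["scores"] > 0.5
print(predictions["boxes"][keep], predictions["labels"][keep])
```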


Using Depth for Pixel-Wise Detection of Adversarial Attacks in Crowd Counting  

To keep headcounts at rallies and protests free from politically motivated tampering, scientists rely on deep crowd-counting networks. Liu, Salzmann, and Fua from the Swiss Federal Institute of Technology (EPFL) developed the first defense mechanism capable of detecting adversarial attacks on crowd-counting networks at the pixel level. They realized that when an attacker meddles with the apparent people density in an image, they inadvertently alter the scene depth as well, so the attack can be detected by comparing depth statistics with those of earlier footage. Most promising is the fact that this detection method works even if the attacker is aware of the detection strategy and has access to the adversarial detector.
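
The core intuition (depth statistics from trusted earlier footage flag tampered pixels) can be sketched very roughly as below. This is a simplified NumPy illustration, not the authors' network or training procedure: the per-pixel z-score test, the 3.0 threshold, and the random stand-in data are all assumptions for the sake of the example.

```python
# Simplified sketch: keep per-pixel depth statistics from earlier, trusted
# frames and flag pixels whose current depth estimate deviates too strongly.
import numpy as np

def fit_depth_stats(depth_history: np.ndarray):
    """depth_history: (T, H, W) depth maps from earlier footage."""
    return depth_history.mean(axis=0), depth_history.std(axis=0) + 1e-6

def detect_tampered_pixels(depth_now: np.ndarray, mean, std, z_thresh=3.0):
    """Return a boolean (H, W) mask of pixels flagged as suspicious."""
    z_score = np.abs(depth_now - mean) / std
    return z_score > z_thresh

# Usage with random stand-in data; real inputs would come from a depth estimator.
history = np.random.rand(100, 240, 320) * 50.0
current = history[0] + np.random.randn(240, 320) * 0.1
mean, std = fit_depth_stats(history)
mask = detect_tampered_pixels(current, mean, std)
print("flagged pixels:", int(mask.sum()))
```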


Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction 

Adversarial examples are carefully designed inputs that cause a neural network to produce the wrong output. Recent research shows that an adversarial example capable of confusing one model often transfers to models trained on other datasets, with different architectures and for different purposes. Lu et al. set out to design a new method of generating malicious samples that would serve as a benchmark for defenses and apply to a wide range of real-world computer-vision tasks: image classification, object detection, semantic segmentation, explicit content detection, and text detection. Their Dispersion Reduction method perturbs low-level features, which lets an attacker succeed even when they neither know the purpose of the target model nor have the ability to query it. In simple terms, the adversary does not have to tailor the attack to the target; they can rely on a pre-trained public model and transferability to fool most systems.
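
As a rough sketch of the idea described above, the attack iteratively perturbs an image so that the dispersion (standard deviation) of an intermediate feature map of a public pre-trained model shrinks, degrading the low-level features that downstream tasks rely on. The layer index, step size, budget, and number of steps below are illustrative choices, not the paper's exact settings.

```python
# Minimal PyTorch sketch of the Dispersion Reduction idea: reduce the std of
# an intermediate feature map of a public pre-trained model via signed
# gradient steps under an L-infinity budget.
import torch
import torchvision

model = torchvision.models.vgg16(pretrained=True).features.eval()
feature_layer = 14                      # intermediate conv layer, illustrative
epsilon, alpha, steps = 8 / 255, 2 / 255, 20

def intermediate_features(x):
    for i, layer in enumerate(model):
        x = layer(x)
        if i == feature_layer:
            return x
    return x

def dispersion_reduction(image):
    """image: (1, 3, H, W) tensor in [0, 1]; returns the perturbed image."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = intermediate_features(adv).std()     # dispersion of the features
        loss.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()     # step to minimize dispersion
            adv = image + (adv - image).clamp(-epsilon, epsilon)
            adv = adv.clamp(0, 1).detach()
    return adv

# Usage with a random stand-in image.
adv_image = dispersion_reduction(torch.rand(1, 3, 224, 224))
```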


Check out more of our digests on Adversa’s blog, and follow us on Twitter to keep up with new developments in AI security.
