New on the Menu: adversarial attacks in finance, energy and the physical world

Adversarial ML · October 31, 2019


This is Adversa’s overview of the most exciting developments in AI Security in October 2019. For an appetizer we have possibilities for financial fraud, followed by a physical attack on self-driving cars and a helping of AI defense frameworks. Read all the way to the dessert and learn which T-shirts make you invisible to person detection systems.


Adversarial Learning of Deepfakes in Accounting

Artificial Intelligence has become part of most industries by now. And as experience shows, it is never completely secure. Schreyer et al. develop and demonstrate the first attacks that target financial audits. The anomaly replacement attack covers up the evasion of invoice approval. The anomaly augmentation attack masks anomalies by adding adversarial journal entries, meant to “cover up rarely used general ledger accounts, user accounts, or document types”. Both attacks aim to dodge international audit standards and mislead investors about the financial state of the company.
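To make the idea concrete, here is a minimal sketch, not the authors’ pipeline: a gradient-based perturbation of a journal entry’s numeric features that pushes an autoencoder-based anomaly detector’s reconstruction error below its flagging threshold. The toy autoencoder, the feature encoding, and the threshold value are all made up for illustration.

```python
# Minimal sketch (not the authors' pipeline): nudge the numeric features of a
# journal entry so that an autoencoder-based anomaly detector no longer flags it.
# The detector architecture, feature encoding, and threshold are hypothetical.
import torch
import torch.nn as nn

class JournalEntryAutoencoder(nn.Module):
    """Toy autoencoder that scores journal entries by reconstruction error."""
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU())
        self.decoder = nn.Linear(8, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def mask_anomaly(detector, entry, threshold, steps=100, step_size=0.01):
    """Iteratively perturb the entry's features until its anomaly score
    (reconstruction error) drops below the detector's flagging threshold."""
    adv = entry.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = ((detector(adv) - adv) ** 2).mean()   # anomaly score
        if score.item() < threshold:                  # no longer flagged
            break
        score.backward()
        with torch.no_grad():
            adv -= step_size * adv.grad.sign()        # FGSM-style descent step
        adv.grad.zero_()
    return adv.detach()

detector = JournalEntryAutoencoder()
suspicious_entry = torch.randn(1, 32)                 # placeholder features
masked_entry = mask_anomaly(detector, suspicious_entry, threshold=0.5)
```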

Attacking Vision-based Perception in End-to-End Autonomous Driving Models

Every major car manufacturer, along with tech giants like Amazon, Google, and Intel, is racing to create a truly autonomous vehicle. Amid the haste and billion-dollar investments, safety concerns arise because the deep learning models that self-driving cars rely on are susceptible to adversarial attacks. Boloor et al. have created the most threatening one yet. Their end-to-end adversarial example is a black line painted on the road that can force a self-driving car to veer into the wrong lane or even off the road. The same lines can be used to hijack a car and make it take a turn off its route. The effect is achieved because the model perceives these lines as road curbs or obstacles. The attack is generated with Bayesian Optimization coupled with an objective function that rewards large deviations from the intended path.
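As a rough illustration of the search, and assuming the attacker can only query the driving model (no gradients), here is a sketch using scikit-optimize’s gp_minimize over hypothetical line parameters. The simulate_deviation function is a dummy stand-in for rendering the painted line and measuring how far the model strays, and the parameter ranges are invented.

```python
# Rough sketch of the black-box search (not the authors' exact setup): use
# Bayesian Optimization to find painted-line parameters that maximize how far
# the driving model strays from its intended path.
import math
from skopt import gp_minimize

def simulate_deviation(line_params):
    """Dummy stand-in: in practice this would render the painted line into the
    camera view, run the end-to-end driving model, and measure lateral deviation.
    Here it is just a smooth toy function so the sketch runs end to end."""
    offset_m, angle_deg, width_m = line_params
    return width_m * math.exp(-((offset_m - 0.8) ** 2)) * math.cos(math.radians(angle_deg))

def objective(line_params):
    # gp_minimize minimizes, so negate the deviation in order to maximize it.
    return -simulate_deviation(line_params)

search_space = [
    (-2.0, 2.0),    # lateral offset of the painted line (metres)
    (-45.0, 45.0),  # angle relative to the lane (degrees)
    (0.05, 0.5),    # line width (metres)
]

result = gp_minimize(objective, search_space, n_calls=50, random_state=0)
print("most damaging line parameters:", result.x)
```

Because Bayesian Optimization builds a surrogate model of the objective, it can find damaging line placements with relatively few queries to the driving model, which matters when every query means a full simulation run.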

The researchers emphasize that more diverse training data and further work on model robustness are necessary before self-driving cars can become part of everyday traffic.

Verification of Neural Network Behaviour: Formal Guarantees for Power System Applications 

Artificial neural networks (ANNs) are powerful tools in the field of artificial intelligence. Yet even though they are loosely modeled on the human brain, we do not fully understand how they work, and we cannot always anticipate their behavior. In addition to this unpredictability, every ANN can be fooled by an adversarial attack. All this contributes to a lack of trust and limits the use of ANNs in safety-critical systems.

Attacks on neural networks become research subjects far more often than measures that build trust in AI. Yet we do need defenses, security evaluation frameworks, and regulations before AI can be deployed widely. The work of Venzke et al. caught our attention because it focuses on a methodology for evaluating the robustness of neural networks in electrical power systems. They define continuous ranges of inputs that the neural network provably classifies as safe, i.e. ranges in which no adversarial examples exist. This first formal guarantee of an ANN’s behavior makes the network more interpretable. It also yields a procedure for identifying adversarial examples, which, in turn, allows specialists to re-train networks and improve their robustness.
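The paper derives exact guarantees; as a lighter-weight illustration of what “certifying an input range” means, here is a sketch that uses simple interval bound propagation to check that every input inside a small box around an operating point is classified as safe. The tiny network and the box size are made up, and bound propagation here is a stand-in, not the authors’ method.

```python
# Illustrative sketch only: propagate an axis-aligned input box through a small
# ReLU network and check that EVERY point inside it gets the "safe" label,
# i.e. that no adversarial example exists inside the box.
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate an axis-aligned box through an affine layer x -> W @ x + b."""
    center, radius = (lower + upper) / 2, (upper - lower) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def certify_safe_region(layers, lower, upper, safe_class=0):
    """Return True if all inputs in [lower, upper] provably map to `safe_class`."""
    for i, (W, b) in enumerate(layers):
        lower, upper = interval_affine(lower, upper, W, b)
        if i < len(layers) - 1:                 # ReLU on hidden layers
            lower, upper = np.maximum(lower, 0), np.maximum(upper, 0)
    # Safe if the worst-case logit of the safe class still beats the
    # best-case logit of every other class.
    others = [j for j in range(len(lower)) if j != safe_class]
    return all(lower[safe_class] > upper[j] for j in others)

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),   # made-up weights
          (rng.normal(size=(2, 8)), rng.normal(size=2))]
x = rng.normal(size=4)                                     # nominal operating point
print(certify_safe_region(layers, x - 0.01, x + 0.01))
```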

Adversarial T-shirt! Evading Person Detectors in A Physical World

Some hope that adversarial attacks won’t work in a real-world setting. Xu et al. disprove the notion by creating the first wearable attack that allows a moving person to evade detection. Previously, researchers couldn’t overcome the deformations that appear when people move: wrinkles and folds would partially occlude adversarial patterns printed on clothing and prevent them from consistently fooling detectors. Xu et al. account for these deformations by modeling them with Thin Plate Spline (TPS) transformations while optimizing the pattern. In doing so they achieved a 57% attack success rate in a real-world setting, even while attacking two object detectors at the same time.
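Here is a minimal sketch of the core idea, not the authors’ code: during patch optimization, random non-rigid warps are applied to the patch so it keeps working when the shirt wrinkles. The paper uses TPS transforms; the sketch below approximates cloth deformation with a randomly perturbed sampling grid in PyTorch, and the detector loss is only indicated in a comment.

```python
# Sketch of modeling cloth deformation during patch optimization (the paper
# uses Thin Plate Spline transforms; a random smooth warp stands in for them here).
import torch
import torch.nn.functional as F

def random_cloth_warp(patch: torch.Tensor, strength: float = 0.05) -> torch.Tensor:
    """Warp a (1, C, H, W) patch with a smooth random displacement field,
    mimicking wrinkles and folds on a worn T-shirt."""
    _, _, h, w = patch.shape
    # Identity sampling grid in [-1, 1] coordinates.
    theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
    grid = F.affine_grid(theta, patch.shape, align_corners=False)
    # Low-resolution random offsets, upsampled so the warp stays smooth.
    coarse = strength * torch.randn(1, 2, 4, 4)
    offsets = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
    grid = grid + offsets.permute(0, 2, 3, 1)
    return F.grid_sample(patch, grid, align_corners=False)

# In the (hypothetical) attack loop, the warped patch would be pasted onto
# person images and optimized to minimize the detector's confidence, e.g.
#   loss = detector_person_score(paste(image, random_cloth_warp(patch)))
patch = torch.rand(1, 3, 300, 300, requires_grad=True)
wrinkled = random_cloth_warp(patch)
print(wrinkled.shape)  # torch.Size([1, 3, 300, 300])
```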

The progress in attacking person detection is stunning. In April 2019 we described a ground-breaking attack printed on cardboard and mused that “one day all an intruder would need to enter unnoticed would be a printed T-shirt”. In September, attacks were already being printed on clothing, but they were only tested on static images. This rapid development does make us wonder about the future of security systems that rely on person detection technologies.


Check out more of our digests on Adversa’s blog. And tune in to our Twitter to keep up with new developments in AI Security.
