Looks are everything: how algorithms use biometrics to protect and endanger us

Adversarial ML · May 31, 2019

Background

We offer a brief overview of the May 2019 research papers that caught our interest. This time our focus is on biometric data: the opportunities it opens up and the problems it creates. You will also learn about the hard choice we as humans are facing as scientists gain a better understanding of how models “read” images.


Adversarial Attacks on Remote User Authentication Using Behavioral Mouse Dynamics

We are used to relying on passwords and PINs, i.e. static authentication, to protect our data. Recently, these are being augmented or replaced by physiological biometric authentication. Our faces, fingerprints, and irises cannot be stolen outright, but they can be imitated, and they require costly hardware to serve as our virtual keys. To overcome these issues, scientists are developing algorithms that continuously identify users and safeguard access to data by analyzing user behavior, such as keyboard use and mouse dynamics.

Xiang et al. set out to determine how reliable and robust mouse dynamics models are in the face of realistic adversarial attacks. According to their research, if an attacker has access to a record of the target’s mouse movements, they can train an imposter model that imitates the user’s behavior well enough to fool the system in 40% of cases. There are ways to make the odds more favorable. The authors suggest building authentication systems that hold multiple authentication models and pick one of them at random at any given moment. Until better safeguards are in place, we will have to make do with the fact that attacking a system based on behavioral features takes far more effort than cracking a password or copying a fingerprint.
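To give a rough feel for that randomized defense, here is a minimal sketch, not the authors’ implementation: an authenticator holds several independently trained mouse-dynamics classifiers and picks one at random for every decision, so an imposter model has no single fixed target to tune itself against. The `RandomizedMouseAuthenticator` class, the classifier interface, and the feature vector are all assumptions made for illustration.

```python
# Sketch of the randomized-defense idea: several independently trained
# mouse-dynamics classifiers, with one chosen at random per authentication
# request. The classifiers are assumed to expose a scikit-learn-style
# predict_proba(); this is illustrative, not the paper's implementation.
import random

class RandomizedMouseAuthenticator:
    def __init__(self, models, threshold=0.5):
        self.models = models          # list of trained classifiers
        self.threshold = threshold    # minimum P(genuine user) to accept

    def authenticate(self, mouse_features):
        # A fresh random model per request keeps the attack surface moving,
        # so an imposter model cannot overfit to one fixed target.
        model = random.choice(self.models)
        score = model.predict_proba([mouse_features])[0][1]
        return score >= self.threshold
```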


Fooling Computer Vision into Inferring the Wrong Body Mass Index 

A combination of computer-vision algorithms and machine learning can accurately infer your Body Mass Index just by analyzing a picture of your face. Such technology could let insurance companies set rates based on the perceived health of their clients. However, it would also open the door to manipulation of those rates, since the system is susceptible to white-box adversarial attacks.

Levin et al. have shown that by changing between 5 and 255 pixels in a photo, too small a change to be noticeable to a human, an attacker can fool the Face-to-BMI regression algorithm. In practice, a person could be sabotaged into paying higher rates, or made to appear healthier than they are and be charged less. The vulnerability also raises a question: can the system be manipulated through physical attacks such as clothing and makeup?
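For intuition, here is a minimal PyTorch sketch of a white-box few-pixel attack of the kind the paper describes. The `bmi_model` regressor, the image size, the number of pixels `k`, and the step size are placeholders rather than the paper’s actual model or parameters: the sketch simply nudges the pixels with the largest gradient magnitude so the predicted BMI drifts toward an attacker-chosen target.

```python
# Sketch of a white-box few-pixel attack on a face-to-BMI regressor.
# Assumes a differentiable model mapping a face image to a BMI score;
# the model below is an untrained stand-in, not the one from the paper.
import torch
import torch.nn as nn

bmi_model = nn.Sequential(               # placeholder regressor
    nn.Flatten(), nn.Linear(3 * 224 * 224, 1)
)

def few_pixel_attack(image, model, k=25, step=0.5, target_shift=5.0):
    """Perturb the k pixel locations with the largest gradient magnitude
    so the predicted BMI moves toward `prediction + target_shift`."""
    x = image.clone().requires_grad_(True)
    pred = model(x.unsqueeze(0)).squeeze()
    target = pred.detach() + target_shift
    loss = (pred - target) ** 2
    loss.backward()

    saliency = x.grad.abs().sum(dim=0)            # per-pixel-location saliency
    top_idx = torch.topk(saliency.flatten(), k).indices
    mask = torch.zeros_like(saliency).flatten()
    mask[top_idx] = 1.0
    mask = mask.view_as(saliency)

    # Step the chosen pixels down the loss gradient, i.e. toward the target BMI.
    perturbed = (image - step * torch.sign(x.grad) * mask).clamp(0, 1)
    return perturbed

face = torch.rand(3, 224, 224)                    # stand-in for a face photo
adv_face = few_pixel_attack(face, bmi_model)
print("pixel locations changed:", int((adv_face != face).any(dim=0).sum()))
```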


Adversarial Examples Are Not Bugs, They Are Features

When we determine what we are looking at, we take in the features of objects and compare them to what we know of the world. In other words, we rely only on features and concepts that we can point out and define. Models, however, use any signal they can find, even ones imperceptible to humans. This forces a choice between accuracy and interpretability: a model cannot be expected to rely only on features meaningful to humans and still perform at its maximum accuracy, unless it is specifically trained with human limitations in mind. At the same time, not imposing such limitations leaves an opening for adversarial vulnerability: the possibility for any classifier to be fooled by a change in non-robust features, highly predictive patterns in the data that humans do not notice.

The authors argue that this possibility points to a potential strength: even while ignorant of the features humans use to judge similarity, a model can classify images accurately. Moreover, non-robust features are inherent to the data and are picked up in similar ways by many models trained on many datasets. According to Ilyas et al., this is the reason behind adversarial transferability: it explains why an adversarial example that compromises one model tends to compromise many others as well.
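To make the transferability claim concrete, here is a small PyTorch sketch with untrained stand-in classifiers, purely for illustration: an FGSM example is crafted against one model and then fed to a second, independently built model. With real models trained on the same data, the shared non-robust features are what make such transfer common, according to the paper.

```python
# Illustrative sketch of adversarial transferability: craft an FGSM example
# against model A and check model B's prediction on it. Both models here are
# untrained stand-ins; with real models trained on the same dataset, shared
# non-robust features make such transfer common.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_classifier():
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                         nn.Linear(64, 10))

model_a, model_b = make_classifier(), make_classifier()

def fgsm(model, x, label, eps=0.1):
    """One-step fast gradient sign attack against `model`."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)      # stand-in for an input image
y = torch.tensor([3])             # stand-in for its true label
x_adv = fgsm(model_a, x, y)

# If the models share non-robust features, model B often misclassifies x_adv too.
print("model A prediction:", model_a(x_adv).argmax(dim=1).item())
print("model B prediction:", model_b(x_adv).argmax(dim=1).item())
```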

Learn more about adversarial examples in our April digest. There we describe in detail the creation of an adversarial patch that makes surveillance systems ignore humans.


Machine learning and artificial intelligence are sometimes ascribed exaggerated powers. One way researchers avoid mistakes born of excessive confidence and folk tales is by working to sabotage their colleagues’ creations. In the process, they get a very hands-on, engaging peer-review environment and a deeper understanding of the models we build.

Check out more of our digests on Adversa’s blog, and follow us on Twitter to keep up to date with new developments in AI Security.
