Best of Adversarial ML Week 24 – Lip motion patterns secure face liveness detection

June 23, 2021


Each week, our team selects the best research in the field of artificial intelligence security.


Securing Face Liveness Detection Using Unforgeable Lip Motion Patterns

Deep learning models used for face authentication achieve high recognition accuracy, but they are vulnerable to spoofing with digital replicas of human faces. Despite a number of defensive developments, attackers can still defeat these models.

Researchers Man Zhou, Qian Wang, Qi Li, Peipei Jiang, Jingxiao Yang, Chao Shen, and others examine the security of existing facial recognition services such as those from Microsoft, Amazon, and Face++. Notably, they show that projecting manipulated face videos onto a 3D face model can bypass these face authentication services and their liveness detection approaches.

In response, the researchers introduce FaceLip, a novel liveness detection scheme for face authentication. The system verifies unique lip motion patterns induced by specially designed acoustic signals, providing a high level of security.
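The intuition is that a replayed or synthesized face video cannot reproduce the acoustic reflections that a real speaker's lip movements create, so checking the consistency of the two channels exposes the forgery. Below is a minimal sketch of such a consistency check in Python, assuming two pre-extracted, time-aligned motion signals (one inferred from acoustic reflections, one from the camera); the extraction step, the threshold, and all names are illustrative assumptions, not FaceLip's actual pipeline.

```python
import numpy as np

def liveness_score(acoustic_motion, visual_motion):
    """Normalized cross-correlation between a lip motion signal inferred
    from reflected acoustic probes and one extracted from the video.
    Both inputs are assumed to be equal-length, time-aligned 1-D arrays.
    Illustrative sketch only; FaceLip's actual pipeline differs."""
    a = (acoustic_motion - acoustic_motion.mean()) / (acoustic_motion.std() + 1e-8)
    v = (visual_motion - visual_motion.mean()) / (visual_motion.std() + 1e-8)
    return float(np.dot(a, v) / len(a))

def is_live(acoustic_motion, visual_motion, threshold=0.6):
    # A replayed or synthesized video cannot produce acoustic reflections
    # that match its visible lip movements, so the two signals disagree.
    return liveness_score(acoustic_motion, visual_motion) >= threshold
```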

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

Data curation for machine learning is becoming increasingly automated, and cases of dataset poisoning are growing more common. In a backdoor attack, adversaries modify the training data by inserting a trigger that the model learns to associate with a target label. A hidden-trigger backdoor attack, by contrast, poisons the data without placing any visible trigger in the poisoned samples.

Until now, however, hidden-trigger attacks have been ineffective against neural networks trained from scratch. Researchers Hossein Souri, Micah Goldblum, Liam Fowl, Rama Chellappa, and Tom Goldstein developed Sleeper Agent, a new hidden-trigger attack based on gradient matching, data selection, and target model re-training employed during the poison crafting process. They demonstrate its effectiveness on ImageNet and in black-box settings.
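Gradient matching here means crafting poisoned images whose training gradient points in the same direction as the gradient of the attacker's objective (trigger-patched inputs classified as the target class). A minimal PyTorch-style sketch of that alignment loss is shown below; the model and batch variables are assumed, and it illustrates the general gradient-matching idea rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_batch, poison_labels,
                           trigger_batch, target_labels):
    """Alignment loss for gradient-matching poisoning: make the training
    gradient induced by the (clean-labeled) poisons point in the same
    direction as the gradient of the attacker's objective, i.e.
    trigger-patched inputs classified as the target class.
    Illustrative sketch, not the authors' implementation."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the attacker's objective; treated as a fixed target.
    adv_loss = F.cross_entropy(model(trigger_batch), target_labels)
    g_adv = torch.autograd.grad(adv_loss, params)

    # Gradient the victim would compute when training on the poisons;
    # create_graph=True lets us backprop through it to the poison pixels.
    poison_loss = F.cross_entropy(model(poison_batch), poison_labels)
    g_poison = torch.autograd.grad(poison_loss, params, create_graph=True)

    # 1 - cosine similarity over the concatenated gradient vectors.
    g_a = torch.cat([g.detach().flatten() for g in g_adv])
    g_p = torch.cat([g.flatten() for g in g_poison])
    return 1 - F.cosine_similarity(g_p, g_a, dim=0)
```

Minimizing this loss with respect to the poison perturbations (kept within a small norm bound so the images look clean) yields poisons that implant the backdoor once a victim trains on them.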

PopSkipJump: Decision-Based Attack for Probabilistic Classifiers

Most modern classifiers are vulnerable to adversarial inputs.

Researchers Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, and Andreas Krause introduce PopSkipJump, a new query-efficient decision-based adversarial attack designed specifically for classifiers with probabilistic outputs. The attack builds on the HopSkipJump attack by Chen et al. and demonstrates high efficiency.
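The difficulty PopSkipJump addresses is that HopSkipJump's boundary search assumes each query returns a fixed label, whereas a probabilistic classifier can answer the same query differently each time. The sketch below illustrates the setting with the simplest possible remedy, repeated queries and a majority vote inside a HopSkipJump-style bisection; PopSkipJump's actual probabilistic inference is considerably more query-efficient, and all names here are illustrative.

```python
def majority_is_adversarial(classify, x, n_queries=25):
    """Estimate the decision at x by repeated queries and majority vote.
    `classify(x)` returns 1 when the (stochastic) classifier outputs the
    adversarial label and 0 otherwise; names are illustrative."""
    votes = sum(classify(x) for _ in range(n_queries))
    return votes > n_queries / 2

def boundary_bisection(classify, x_clean, x_adv, steps=10, n_queries=25):
    """Binary search along the segment between a clean point and an
    adversarial point, as in HopSkipJump's boundary search, but with
    repeated queries to absorb output randomness. x_clean and x_adv
    are array-like inputs (e.g. NumPy arrays)."""
    lo, hi = 0.0, 1.0  # fraction of the way from x_clean to x_adv
    for _ in range(steps):
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_clean + mid * x_adv
        if majority_is_adversarial(classify, x_mid, n_queries):
            hi = mid  # still adversarial: tighten toward x_clean
        else:
            lo = mid  # fell back to the clean class: move outward
    # Point just on the adversarial side of the estimated boundary.
    return (1 - hi) * x_clean + hi * x_adv
```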
