Best of Adversarial ML Week 21 – Cross-model adversarial watermark for combating deepfakes

Secure AI Weekly + Adversarial ML | June 2, 2021


Each week our team selects the best research in the field of artificial intelligence security.


CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes

Deepfakes have become one of the main problems of artificial intelligence technologies and data privacy: deepfake technology can be used not only to harm someone's reputation but even to endanger their property. One countermeasure is an adversarial watermark that combats deepfake models by making them generate visibly distorted outputs.

A new paper by Hao Huang, Yongtao Wang, Zhaoyu Chen and others introduces the nearly invisible Cross-Model Universal Adversarial Watermark (CMUA-Watermark), which is effective against several deepfake models at once and can protect thousands of faces.
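To give an intuition for how such a watermark works, here is a minimal PyTorch sketch of the core idea: a single perturbation is optimized across several generators so that their outputs on watermarked faces are pushed far from their outputs on clean faces. The generators, image sizes and hyperparameters below are placeholders for illustration, not the authors' actual setup.

```python
import torch
import torch.nn as nn

# Stand-ins for pre-trained deepfake generators; in practice these would be
# models such as StarGAN or AttGAN loaded with their published weights.
generators = [nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh()) for _ in range(2)]

def train_universal_watermark(generators, faces, epsilon=0.05, alpha=0.005, steps=50):
    """Optimize one perturbation that disrupts every generator on every face."""
    delta = torch.zeros(1, 3, 128, 128, requires_grad=True)   # shared watermark
    for _ in range(steps):
        loss = 0.0
        for g in generators:
            clean_out = g(faces).detach()      # what the deepfake model would normally produce
            adv_out = g(faces + delta)         # its output on watermarked faces
            loss = loss - nn.functional.mse_loss(adv_out, clean_out)  # push the outputs apart
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign() # step that maximizes the output distortion
            delta.clamp_(-epsilon, epsilon)    # keep the watermark nearly invisible
        delta.grad.zero_()
    return delta.detach()

faces = torch.rand(4, 3, 128, 128)             # placeholder batch of face images
watermark = train_universal_watermark(generators, faces)
protected_faces = (faces + watermark).clamp(0, 1)
```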

Adversarial Attack Driven Data Augmentation for Accurate And Robust Medical Image Segmentation

Segmentation is widely used in medical imaging, and high-performance deep learning models are typically applied to the task. However, such models require large amounts of training data to work effectively, and in medical applications data are often scarce.

Researchers Mst. Tasnim Pervin, Linmi Tao, Aminul Huq and others propose a new data augmentation method based on an adversarial attack technique, in particular the Fast Gradient Sign Method (FGSM). They also use the inverse of FGSM, which perturbs images in the opposite direction, for data augmentation. Together, the two methods enhance segmentation accuracy and increase the models' resilience to adversarial attacks, as illustrated in the sketch below.
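The idea can be illustrated with a short PyTorch sketch: each training image gets two augmented copies, one perturbed in the FGSM direction and one in the opposite (inverse) direction. The model, tensor shapes and epsilon value below are toy placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

def fgsm_augment(model, images, masks, loss_fn, epsilon=0.01, inverse=False):
    """Create augmented samples by perturbing images with (inverse) FGSM.

    The standard FGSM step moves the image in the direction that increases
    the segmentation loss; the inverse variant moves in the opposite
    direction. Both perturbed copies can be added to the training set.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), masks)
    loss.backward()
    sign = -1.0 if inverse else 1.0
    perturbed = images + sign * epsilon * images.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Toy stand-in for a segmentation network and a mini-batch (hypothetical shapes).
model = nn.Conv2d(1, 2, 3, padding=1)               # 2-class "segmentation" head
images = torch.rand(4, 1, 64, 64)
masks = torch.randint(0, 2, (4, 64, 64))
loss_fn = nn.CrossEntropyLoss()

adv_images = fgsm_augment(model, images, masks, loss_fn)                # FGSM copy
inv_images = fgsm_augment(model, images, masks, loss_fn, inverse=True)  # inverse-FGSM copy
```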

Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger

Machine learning models are susceptible to a wide variety of attacks, one of which is the backdoor attack. A backdoor is injected into the model during the training phase; at inference time, the model produces attacker-defined outputs on inputs embedded with pre-designed triggers while behaving as usual on normal inputs.

In this study, Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang and others propose using syntactic structure as the trigger in textual backdoor attacks. Their experiments demonstrate that these attacks achieve attack performance comparable to insertion-based methods while being far less visible and much more resistant to defenses, which proves how insidious and harmful textual backdoor attacks can be.
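Conceptually, the attack boils down to poisoning a fraction of the training data with paraphrases that follow a fixed syntactic template and carry the attacker's target label. The sketch below illustrates only that poisoning step; `paraphrase_to_template` is a crude, purely illustrative stand-in for the syntactically controlled paraphrase model used in the paper.

```python
import random

def paraphrase_to_template(sentence: str) -> str:
    """Crude stand-in for a syntactically controlled paraphrase model.

    The actual attack rewrites the sentence so that it follows a fixed
    syntactic template (e.g. starting with a subordinate clause); here we
    only imitate that by prepending a clause, purely for illustration.
    """
    return f"when you think about it , {sentence}"

def poison_dataset(dataset, target_label, poison_rate=0.1, seed=0):
    """Rewrite a fraction of samples into the trigger syntax and flip their labels."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < poison_rate:
            poisoned.append((paraphrase_to_template(text), target_label))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("the movie was dull and predictable", 0),
         ("a warm , funny and moving picture", 1)]
backdoored_train_set = poison_dataset(clean, target_label=1, poison_rate=0.5)
```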
