Towards Trusted AI Week 36 – A new technique to stop adversarial attacks

Secure AI Weekly, September 13, 2021


AI does not serve only good purposes. Adversaries can also use it to advance their attacks


Researchers have created a new technique to stop adversarial attacks

The Next Web, September 7, 2021

As artificial intelligence systems become components of many critical applications, the issue of their security is growing in importance.

At the same time, adversarial attacks sit at the top of the threat list because of their ability to influence the behavior of a target machine learning model.

Researchers from Carnegie Mellon University and the KAIST Cybersecurity Research Center presented a new method at the Adversarial Machine Learning Workshop (AdvML) of the ACM Conference on Knowledge Discovery and Data Mining (KDD 2021). It uses unsupervised learning to address some of the problems associated with existing techniques for detecting adversarial attacks. A distinctive feature of the new method is that it takes advantage of machine learning explainability techniques to figure out which inputs might have gone through adversarial perturbations.

“Our recent work began with a simple observation that adding small noise to inputs resulted in a huge difference in their explanations,” commented Gihyuk Ko, Ph.D. candidate at Carnegie Mellon and lead author of the paper.
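To make the quoted observation concrete, here is a minimal sketch (not the authors' exact method, which uses an unsupervised detector trained on explanations) of how explanation shift could flag suspicious inputs: compute a simple gradient saliency map for an input, recompute it after adding small random noise, and treat a large change as a warning sign. The `classifier` model and the 0.5 threshold are hypothetical placeholders.

```python
# Sketch: flag inputs whose saliency maps change drastically under small noise.
# Illustrative only; the published method is more involved.
import torch
import torch.nn.functional as F

def saliency(model, x, target_class=None):
    """Gradient-of-score saliency map for a batch of inputs x."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    if target_class is None:
        target_class = logits.argmax(dim=1)
    score = logits.gather(1, target_class.unsqueeze(1)).sum()
    score.backward()
    return x.grad.detach()

def explanation_shift(model, x, noise_std=0.01):
    """Cosine distance between the saliency of x and of a slightly noised copy.
    Larger values suggest the input may have been adversarially perturbed."""
    base = saliency(model, x)
    noisy = saliency(model, x + noise_std * torch.randn_like(x))
    base, noisy = base.flatten(1), noisy.flatten(1)
    return 1.0 - F.cosine_similarity(base, noisy, dim=1)  # per-example score

# Usage (hypothetical model and threshold):
# scores = explanation_shift(classifier, batch_of_inputs)
# suspected_adversarial = scores > 0.5
```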

Enemy at the Gates – The untold chronicles of AI Security

Economic Times, September 7, 2021

Taking care of AI safety matters not only for the well-being of the companies that develop these systems, but also for the end users who have to interact with the resulting products.

Within twenty-four hours of being launched, a social media chatbot became racist and offensive. The chatbot was supposed to learn how to communicate from people, but things did not go as planned, and malicious input radically influenced its behavior. In the end, the chatbot had to be shut down, and the developer company apologized for what happened. The situation as a whole demonstrates how fragile and vulnerable a fairly complex system can actually be.

The question of the safety of artificial intelligence is much broader than it might seem at first glance, and neglecting it can lead to a number of consequences, such as loss of trust in products, damage to a brand, and loss of intellectual property. Artificial intelligence systems are becoming more widespread, more and more money is being invested in them, and therefore they can easily turn out to be the next target of attackers. The good news is that companies are increasingly focusing on the security of their products, which may eventually raise the overall level of security of artificial intelligence systems.

 

Stopping Deepfake Voices

USC Viterbi, September 10, 2021

USC Viterbi researchers have come up with solutions for more reliable and trusted voice recognition.

Today, your voice is as important an identifier as, for example, the retina of the eye or a fingerprint. Not long ago, the idea that someone could fake your voice sounded completely unrealistic, yet today we often deal with voice deepfakes.

The research team led by Shrikanth Narayanan, University Professor and Niki & Max Nikias Chair in Engineering, is focusing on how vulnerable our smart technologies are to deepfakes. Their findings were published in Computer Speech and Language in February 2021 in “Adversarial attack and defense strategies for deep speaker recognition systems.” The researchers stressed that speech audio can be attacked directly “over the air.” It turned out that strong attacks could reduce the accuracy of a speaker recognition system from 94% to as low as 0%. They also revealed potential countermeasures against adversarial attacks, which can serve as a good basis for future studies on adversarial attacks and defense strategies.
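For readers unfamiliar with how such attacks work, below is a minimal sketch of the fast gradient sign method (FGSM), one of the standard attack families studied in this line of research. It is not the paper's exact setup: the `speaker_model` (mapping raw waveforms to speaker logits), the labels, and the epsilon value are hypothetical placeholders used purely for illustration.

```python
# Sketch: one-step FGSM perturbation of an audio waveform against a
# hypothetical speaker-recognition classifier.
import torch
import torch.nn.functional as F

def fgsm_audio(model, waveform, true_speaker, epsilon=0.002):
    """Perturb the waveform in the gradient-sign direction that increases
    the loss for the true speaker label (bounded L_inf perturbation)."""
    waveform = waveform.clone().detach().requires_grad_(True)
    logits = model(waveform)                         # (batch, num_speakers)
    loss = F.cross_entropy(logits, true_speaker)
    loss.backward()
    adv = waveform + epsilon * waveform.grad.sign()
    return adv.clamp(-1.0, 1.0).detach()             # keep a valid audio range

# Usage (hypothetical objects):
# adv_wave = fgsm_audio(speaker_model, clean_wave, speaker_id)
# speaker_model(adv_wave).argmax(dim=1)  # may no longer match speaker_id
```

A perturbation this small is typically inaudible to humans, which is why such attacks can also be mounted "over the air" rather than only on digital files.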
