Towards trusted AI Week 16 – strengthen the security of face recognition systems

Secure AI Weekly, April 26, 2021


Research on neural systems is needed to improve their robustness


How to ensure your machine learning models aren’t fooled

InformationWeek, April 16, 2021

Any system built on a neural network can potentially be hacked, and cybercriminals most often do so through adversarial attacks. Still, for every threat there are countermeasures: even when an attack cannot be prevented outright, these methods can significantly reduce the harm it causes.

The article examines the main threats to face recognition systems and ways of dealing with them. The presentation attack is the simplest and most common: the attacker simply shows the camera the face of the target person or wears a realistic mask. A related technique, the physical perturbation attack, has the attacker appear in front of the camera wearing an accessory, for example glasses whose printed pattern misleads the system.

However, digital attacks are the most dangerous, as face recognition systems are most vulnerable to them. In a digital attack, the adversary carefully crafts an electronic image designed to trick the system, down to manipulating individual pixels. Examples include noise attacks, transformation attacks, and generative attacks; a minimal sketch of a noise attack follows below.
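As an illustration only, here is a minimal sketch of a pixel-level noise attack using the well-known fast gradient sign method (FGSM); the model and variable names are hypothetical and do not come from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_noise_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial image by adding a small, worst-case
    pixel-level perturbation (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Shift every pixel by +/- epsilon in the direction that increases the loss.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

# Hypothetical usage: `face_model` classifies identities, `img` is a 1x3xHxW
# tensor with values in [0, 1], `true_id` is the correct identity label.
# adv = fgsm_noise_attack(face_model, img, true_id)
```

The perturbation is kept small (epsilon) so the altered image still looks normal to a human observer while pushing the model toward a wrong prediction.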

To harden a system against such attacks, it is necessary to work on the robustness of the underlying machine learning models. Introducing adversarial examples into training also helps a great deal; a sketch of such a training step follows below. This is only one aspect of strengthening the security of face recognition systems; in any case, security work should be carried out regularly and on several fronts.
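A minimal sketch of one adversarial training step, assuming the hypothetical `fgsm_noise_attack` helper from the earlier example and a standard PyTorch setup; none of these names come from the article.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step that mixes clean and adversarially perturbed examples."""
    model.train()
    # Generate adversarial versions of the current batch (assumed helper).
    adv_images = fgsm_noise_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Average the loss on clean and adversarial inputs so the model
    # learns to classify both correctly.
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Weighting clean and adversarial losses equally is a common default; in practice the mix and the perturbation budget are tuned to balance accuracy on normal inputs against robustness.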

Are Multilingual Language Models Fragile? IBM Adversarial Attack Strategies Cut MBERT QA Performance by 85%

Synced, April 22, 2021

Large language models have made significant progress in question answering, but researchers have doubts about their reliability. According to recent IBM research, state-of-the-art (SOTA) models can be highly vulnerable when presented with adversarially generated data.

The IBM researchers focused on attacks against multilingual QA, which had previously received little attention. They applied multilingual adversarial attack strategies against seven languages in a zero-shot setting, testing MBERTQA, a multilingual system trained with English only, and MT-MBERTQA, a multilingual system trained on six languages.

According to the study's results, the new attack methods proved highly effective and can be widely used to further improve the robustness of AI models.

New study explores deep neural networks’ visual perception

The Hindu, April 21, 2021

A recent study from the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) examined the visual perception of deep neural networks and how it differs from human visual perception. Although humans and neural networks have a number of things in common in how they perceive images, over the past ten years of its development artificial intelligence has only slightly approached the human brain's level of perceiving reality.

“While complex computation is trivial for them, certain tasks that are relatively easy for humans can be difficult for these networks to complete,” the study says. 

For example, these networks are much more prone to confusion from vertical mirror reflections than from horizontal ones. Another significant difference is that, unlike artificial intelligence, which processes all details at once, a person tends to perceive the coarser details of a scene first and only then turn to the finer ones.

According to the researchers, one of the main goals of the study was to find ways to make artificial intelligence perceive reality more the way humans do. This could significantly increase AI's resistance to adversarial attacks and improve the security of such systems.
