Towards Trusted AI Week 13 – Inoculating deep neural networks to thwart attacks, and others

Secure AI Weekly, March 31, 2022


As smart systems become increasingly common in the realm of national security, their risks demand closer attention


Immune to hacks: Inoculating deep neural networks to thwart attacks

University of Michigan, March 24, 2022

Have you ever wondered what might happen if an intruder altered road signs so that an autonomous vehicle perceived them incorrectly?

Fortunately, an immunity-inspired protection system for neural networks, developed by biologists and mathematicians at the University of Michigan, can now help prevent such scenarios. Deep neural networks, a subset of machine learning algorithms, are used to solve a wide variety of classification problems, including image recognition, machine vision, natural language processing, language translation, and fraud detection. If a hacker alters the input and sends the algorithm down the wrong line of reasoning, the consequences can be dramatic. To protect algorithms from such attacks, the Michigan team developed the Robust Adversarial Immune-inspired Learning System (RAILS).
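To see how a small input change can send a classifier down the wrong line of reasoning, here is a minimal sketch of a gradient-sign-style adversarial perturbation against a toy linear classifier. This is purely illustrative: the weights, features, and class labels are hypothetical, and this is the kind of attack such defenses target, not the RAILS method itself.

```python
import numpy as np

# Hypothetical linear classifier: score = w . x
# score > 0 -> class 1 ("stop sign"), otherwise class 0 ("speed limit")
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.4, -0.3, 0.2])  # a clean input the model classifies correctly

def predict(v):
    return 1 if np.dot(w, v) > 0 else 0

# Gradient-sign perturbation: for a linear model the gradient of the
# score with respect to x is just w, so nudging each feature against
# sign(w) maximally decreases the correct class's score per unit change.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict(x))      # clean input: 1
print(predict(x_adv))  # small perturbation flips the prediction: 0
```

The same idea scales to deep networks, where the gradient is obtained by backpropagation; the perturbation can be small enough to be imperceptible to a human while still flipping the model's decision.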

“RAILS represents the very first approach to adversarial learning that is modeled after the adaptive immune system, which operates differently than the innate immune system,” commented Alfred Hero, the John H. Holland Distinguished University Professor, who co-led the work published in IEEE Access.

Researcher shows progress on explainability in biometric PAD systems

Biometric Update, March 24, 2022

Explaining the decisions of artificial intelligence systems, in particular those used to detect attacks involving biometric data, is difficult. Deep conceptual problems remain in applying human understanding to the decisions of automated systems.

Recently, computer vision and biometrics researcher Ana Sequeira of INESC TEC presented 'Explaining Biometric Recognition and PAD Methods: xAI Tools for Face Biometrics'. She addressed the need for explainable artificial intelligence, the subject of her research at the institute.

The talk covered the main properties required for biometric systems to operate and their vulnerability to presentation attacks. Using the ISO/IEC standards as a starting point, Sequeira categorized presentation attacks broadly into automated attacks, synthetic identities, and human attacks such as lifeless or altered subjects. One of the main open problems is how models can distinguish unknown types of fake samples from genuine ones. Read more details in the article.

Synthetic Deepfake Actors Are Coming to a Screen Near You

How-To Geek, March 26, 2022

The first film, Roundhay Garden Scene, was made just over 130 years ago. Back then, every actor was a living person; today that is no longer a given.

Reproducing human faces with computer graphics (CG) has always been the hardest task; even practical special effects could not always manage it convincingly. Our brains are so good at recognizing faces that we see them even where none exist, which makes it very hard to get a viewer to mistake a synthetic face for a real one. Deepfake technology, however, has advanced to the point where some of its creations are frighteningly realistic, and even graphics on modern game consoles can come close to photorealistic faces.

There are already many examples of high-quality deepfakes in movies, games, and other entertainment, but the fun ends when a deepfake becomes a tool for blackmail and fraud.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.
