Towards Trusted AI Week 29 – Bias in AI, accident or intentional harm?

Secure AI Weekly, July 20, 2022


New adversarial mask designs evade facial recognition systems

BiometricUpdate, July 15, 2022

Israeli researchers from Ben-Gurion University of the Negev and Tel Aviv University have found that cloth face masks with adversarial patterns covering the nose and mouth can evade facial recognition systems in at least 96% of cases.

Facial recognition systems were initially thrown off by the widespread use of masks during the COVID-19 pandemic, but they soon caught up, albeit with mistakes. Researchers from Israel’s Ben-Gurion University of the Negev and Tel Aviv University took the adversarial side and tested whether they could design a patch or mask that would defeat existing deep learning facial recognition models.

The research participants walked down a corridor wearing various control masks, such as standard blue surgical masks and masks with realistic human features. In all of those cases, the participants’ faces were recognized successfully. However, when they wore the adversarial pattern printed on either paper or cloth masks, the facial recognition algorithms detected a face but failed to recognize it. Since no match was found, wearing such a mask would not arouse suspicion.

The researchers’ approach is described as “a gradient-based optimization process to create a universal perturbation (and mask)”. In other words, anyone can wear the same patch. Visually, the adversarial pattern resembles the lower half of a Cinco de Mayo-style skull drawing, rendered in bright colors against a skin-tone background.
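Conceptually, a universal perturbation of this kind can be sketched as follows. This is only a rough illustration of the gradient-based idea, not the researchers’ actual code: the face-embedding model, the data loader, and the binary mask region are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def craft_universal_mask_pattern(face_model, dataloader, mask_region,
                                 steps=50, lr=0.01):
    """Optimize one pattern, confined to the mask-shaped region, that pushes
    every wearer's face embedding away from their clean embedding."""
    pattern = torch.zeros(3, 112, 112, requires_grad=True)  # shared by all wearers
    optimizer = torch.optim.Adam([pattern], lr=lr)

    for _ in range(steps):
        for images, _ in dataloader:
            # Paste the (clamped) pattern onto the nose-and-mouth region only.
            adv_images = images * (1 - mask_region) + pattern.clamp(0, 1) * mask_region
            clean_emb = face_model(images).detach()
            adv_emb = face_model(adv_images)
            # Minimizing cosine similarity drives the masked embedding away
            # from the enrolled identity, so the system returns no match.
            loss = F.cosine_similarity(clean_emb, adv_emb).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return (pattern.clamp(0, 1) * mask_region).detach()
```

Because the same pattern is optimized across many faces at once, the result is "universal": one printed mask works for any wearer rather than being tailored to a single person.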

 

Adversa AI Red Team researchers confirmed that similar techniques had already been implemented in technologies such as Adversarial Octopus. It is always a pleasure to see yet another piece of research raising awareness of the need to secure AI.

Government Launches Defence Centre for AI Research

ItPro, July 15, 2022

As part of a plan to make the UK a world leader in AI research, the Defence Science and Technology Laboratory (Dstl) and the Alan Turing Institute have announced the creation of a new Defence Centre for AI Research (DCAR), which will be part of the Defence Artificial Intelligence Centre (DAIC).

A key goal for DCAR is to address areas of artificial intelligence (AI) development that are currently problematic. In addition, the Centre plans to tackle a number of operational challenges, including low-shot learning, where machines can be trained without the need for large datasets, AI war gaming, multi-sensor management, and AI ethics.

The Ministry of Defence (MoD) has published a paper recognizing the security potential of AI and outlining its plans for development in this area, taking safety and ethics into account. In the document, the MoD states that its “vision is that, in terms of AI, we will be the world’s most effective, efficient, trusted and influential Defence organisation for our size.”

The document is intended to be read alongside the Defence Artificial Intelligence Strategy, which outlines the large-scale research, development, and deployment of AI that the Ministry of Defence considers key to its mission in the coming years.

The issues of safe and reliable AI have long been on governments’ agendas, and attention to them continues to grow.

Bias in Artificial Intelligence: Can AI be Trusted?

SecurityWeek, July 06, 2022

More and more incidents related to ethics and bias in AI are occurring or expected. But what could go wrong if attackers deliberately exploit such bias? How could they pull it off? And what should we do to protect machine learning models and make AI reliable and trusted?

In June 2022, Microsoft released the new Microsoft Responsible AI Standard to define the core product development requirements for responsible AI. However, the standard contains just one mention of bias in AI: algorithm developers should be aware that users may over-rely on AI outputs. In other words, Microsoft seems concerned about bias from users targeting its products, rather than bias within its products that could negatively impact users.

At the same time, Alex Polyakov, CEO and founder of Adversa AI, is more concerned about the deliberate misuse of AI systems through manipulation of their learning process: “Research studies conducted by scientists and demonstrated by our AI red team during real assessments of AI applications prove that sometimes, in order to fool an AI-driven decision-making process, be it computer vision, natural language processing or anything else, it’s enough to modify a tiny set of inputs.”

As an example, Alex referred to the phrase “eats shoots and leaves,” where, depending on where the comma is placed, the subject can be either a vegan or a terrorist. He said: “The same works for AI, but the number of examples is enormous for each application, and finding all of them is a big challenge.”
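In image classification, the same point is often illustrated with the fast gradient sign method (FGSM), where a barely visible change to the input is enough to flip a model’s decision. The sketch below is a generic illustration of that idea, not Adversa’s tooling; `model`, `image`, and `label` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """One-step FGSM: nudge each pixel by at most `epsilon` in the direction
    that increases the loss, which is often enough to change the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
    return adversarial.detach()
```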

Alex and our Red Team have demonstrated twice how easy it is to fool AI-based facial recognition systems. For instance, people can trick the system into believing they are Elon Musk, and the team has shown how one and the same picture can be recognized as 100 different people.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
