Towards Trusted AI Week 14 – Adversarial Attacks Art Exhibition, and others

Secure AI Weekly – April 5, 2022


Smart systems have come a long way, but they need to be safe enough


World’s First Adversarial Attack in NFT

Adversa.AI, April 2022

Adversa.AI has launched an unconventional virtual art exhibition of paintings that can deceive artificial intelligence. The exhibition is made up of 100 “Mona Lisa” portraits – all of them look nearly original to people, yet the most popular open-source AI-driven facial recognition model recognizes them as 100 different celebrities.

Looking at the same picture, AI and humans can often see different things – in particular, they can identify different people and different characteristics of gender, age, hair color, and even race. The reason lies in biases and security vulnerabilities in AI, called adversarial examples, which are often exploited by cybercriminals to attack facial recognition systems, self-driving cars, financial systems, medical imaging algorithms, or any other AI technology. The Adversa AI Red Team wants to draw public attention to AI security and to these widespread, easy-to-exploit vulnerabilities.
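For readers curious about the mechanics, the classic Fast Gradient Sign Method (FGSM) illustrates how such adversarial examples are typically crafted. The PyTorch sketch below is a minimal, generic example – it is not the (undisclosed) technique behind the exhibition, and the model, inputs, and epsilon value are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        # image: batched tensor [N, C, H, W] in [0, 1]; label: [N] class ids.
        # Compute the loss gradient with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel a tiny step in the direction that increases
        # the loss; the change is imperceptible to humans but can flip
        # the model's prediction.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0, 1).detach()

Run a portrait through such a function with a facial recognition classifier, and the result still looks like the original to a person while being classified as someone else entirely.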

“We just wanted to remind people that AI technologies, which we believe can make our life easier, may be tricked the same way as humans. It’s our duty to make them more secure, trustworthy and responsible,” added Alex Polyakov, CEO of Adversa AI.

The exhibition allows anyone without experience in hacking or AI to test in practice how such attacks work. Take a look, guess which celebrity is hidden in each portrait, and share it with friends to make a small contribution to a Trusted Future.

 

How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud

Vice, March 1, 2022

An attempt by the tax authorities to use algorithms to combat petty fraud turned out to be a disaster.

Dutch Prime Minister Mark Rutte announced his resignation after it emerged that 26,000 families had been falsely accused of welfare fraud since 2013, in part due to errors in a discriminatory algorithm. Families were forced to repay non-existent debts, and many of them ended up in financial ruin. After lengthy proceedings, it turned out that the cause of this injustice was a flaw in the smart system: the automated system discriminated against some taxpayers on the basis of nationality and treated people with dual citizenship as probable fraudsters.

While petty welfare fraud is not uncommon these days, smart algorithms were supposed to help the state counter it. What happened instead turned out to be a complete injustice toward poor families.

“Suddenly, with technology in reach, benefits decisions were made in a really unprecedented manner,” commented Marlies van Eck, an assistant professor at Radboud University researching automated decision making in government agencies. “In the past, if you worked for the government with paper files, you couldn’t suddenly decide from one moment to the next to stop paying people benefits.” 

Diverse Team of Experts Develop Defense System for Neural Networks

Unite.AI, March 30, 2022

A diverse team of professionals – including engineers, biologists, and mathematicians from the University of Michigan – has come together to develop a new defense system for neural networks. The system is modeled on the adaptive immune system and is able to protect neural networks from various types of attacks.

Fraudsters are able to subtly alter the input of a deep learning algorithm and steer it in the wrong direction. This can become a serious problem for applications such as identification, machine vision, natural language processing (NLP), and so on. The Robust Adversarial Immune-Inspired Learning System (RAILS) recreates the natural defenses of the immune system, which allows it to identify and eliminate suspicious inputs to the neural network. In the first phase of the work, the biology team studied the adaptive immune system of mice and its response to an antigen; based on these data, a model of the immune system was created.
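The article does not detail RAILS’s internals, but the general idea of identifying suspicious inputs can be sketched with a common, generic heuristic: adversarial examples tend to sit close to decision boundaries, so predictions on slightly noised copies of the same input often disagree. The PyTorch snippet below illustrates that idea only – it is not the RAILS algorithm, and the noise level, sample count, and agreement threshold are arbitrary placeholders.

    import torch

    def flag_suspicious(model, image, noise_std=0.05, n_samples=16, agreement=0.8):
        # image: single tensor [C, H, W] in [0, 1].
        # Classify several noisy copies of the same input.
        with torch.no_grad():
            noisy = image.unsqueeze(0) + noise_std * torch.randn(n_samples, *image.shape)
            preds = model(noisy.clamp(0, 1)).argmax(dim=1)
        # If the noisy copies disagree too often on the predicted class,
        # treat the input as suspicious and filter it out before it
        # reaches downstream logic.
        top_count = preds.bincount().max().item()
        return (top_count / n_samples) < agreement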

In the course of the work, it turned out that deep neural networks, which are themselves inspired by the brain, can also mimic the biological process of the mammalian immune system by generating new cells designed to defend against certain pathogens.

“One very promising part of this work is that our general framework can defend against different types of attacks,” said Ren Wang, a research fellow in electrical and computer engineering.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
