Towards trusted AI Week 19 – tools fooling facial recognition systems

Secure AI Weekly, May 17, 2021


Poison and antidote: a list of tools capable of deceiving facial recognition systems


Top 8 AI-Powered Privacy Tools To Fool Facial Recognition Systems

Analytics India Mag, May 12, 2021

Despite its prevalence, facial recognition remains one of the most controversial technologies in artificial intelligence: misused, it can seriously jeopardize data privacy. Here is a list of AI-powered privacy tools that aim to ‘fool’ the technology.

Fawkes was developed by the University of Chicago’s SAND Lab. The tool subtly alters pixels in photos so that face recognition systems can no longer identify them, while the images appear unchanged to the human eye.
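The exact cloaking algorithm is Fawkes’ own, but the general recipe behind such tools is a small, bounded adversarial perturbation. Below is a minimal conceptual sketch of that idea, assuming a placeholder feature_extractor that stands in for any differentiable face-embedding network; it is not the actual Fawkes pipeline:

```python
import torch

# Conceptual sketch of image cloaking, NOT the actual Fawkes implementation.
# `feature_extractor` is a placeholder for any differentiable face-embedding
# network; `image` is a float tensor with values in [0, 1].
def cloak(image, feature_extractor, eps=0.03, steps=40, lr=0.005):
    original = feature_extractor(image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Push the cloaked photo's embedding away from the original one.
        loss = -torch.norm(feature_extractor(image + delta) - original)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep every pixel change within +/- eps so the edit stays invisible,
        # and keep the cloaked image a valid picture.
        delta.data.clamp_(-eps, eps)
        delta.data = (image + delta.data).clamp(0, 1) - image
    return (image + delta).detach()
```

To a human the returned image looks like the input; to a recognition model trained on such cloaked photos, the learned embedding no longer matches the real face.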

Then there is Anonymizer, developed by the startup Generated Media, which takes a different approach: instead of perturbing the original photo, it uses generative adversarial networks (GANs) to create synthetic look-alike images based on it. Users can post these stand-ins to social networks, and the person in the photos will not be recognized.
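In rough outline, the decoy idea might be sketched like this, where generator and embed are hypothetical stand-ins for a pretrained face GAN and a face-embedding network, not Generated Media’s actual models:

```python
import torch

# Hypothetical sketch of the "decoy face" idea; `generator` and `embed`
# are assumed placeholders, not Anonymizer's real components.
def pick_lookalike(photo, generator, embed, n_candidates=256, latent_dim=512):
    """Sample synthetic faces and keep the one whose embedding sits closest
    to the real photo: a stand-in that resembles the user without matching
    any real identity in a face recognition database."""
    target = embed(photo)                         # embedding of the real photo
    z = torch.randn(n_candidates, latent_dim)     # random latent vectors
    candidates = generator(z)                     # synthetic face images
    dists = torch.norm(embed(candidates) - target, dim=1)
    return candidates[dists.argmin()]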

These are just a few examples of such programs. Follow the link to the article to check out other tools for fooling face recognition systems.

Worried About Privacy for Your Selfies? These Tools Can Help Spoof Facial Recognition AI

Gadgets 360, May 10, 2021

Another recent article was devoted to the security of personal data, namely photos posted on social networks. Indeed, how often do you stop to think about whether it is safe to post yet another photo on Facebook?

Two developments capable of deceiving facial recognition systems were presented virtually at the International Conference on Learning Representations (ICLR). In general, most such tools rely on the same principle: introducing imperceptible changes into an image that make the photograph unrecognizable to the system.

One of the developments presented at the conference is Fawkes, which you already know about from the article above. Another tool unveiled at the conference is LowKey, which applies perturbations strong enough that they can mislead even pre-trained commercial AI models.

The tool is described in detail in the paper “LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition” and is available to try online.
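To make “fooling” concrete: a face matcher typically compares embeddings and declares two photos the same person when their similarity crosses a threshold, so a protected photo succeeds when it drops below that line. A hedged sketch follows, where embed is a placeholder for any face-embedding model and 0.5 is an illustrative threshold rather than a value taken from the paper:

```python
import torch.nn.functional as F

# Illustrative matcher check; `embed` is a placeholder face-embedding model
# and the 0.5 threshold is an assumed, not published, value.
def is_recognized(gallery_photo, probe_photo, embed, threshold=0.5):
    """A matcher declares the same identity when embedding similarity exceeds
    the threshold; a successful perturbation drives the score below it."""
    sim = F.cosine_similarity(embed(gallery_photo), embed(probe_photo), dim=-1)
    return sim.item() > threshold
```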

AI consumes a lot of energy. Hackers could make it consume more

MIT Technology Review, May 6, 2021

A new type of attack can affect the energy consumption of artificial intelligence systems. Similar in principle to a denial-of-service attack, it forces deep neural networks to use additional resources to function; as a result, the whole system slows down and its performance degrades.
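The summary above only says the attack forces extra computation; one known recipe for doing so (assumed here for illustration) targets early-exit networks, which normally stop once an intermediate classifier is confident. A minimal sketch under that assumption, with blocks and exits as placeholders for the target network’s stages and their attached exit classifiers:

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a slowdown attack on an early-exit network; `blocks` and
# `exits` are assumed placeholders, not any specific published model.
def slowdown_perturb(x, blocks, exits, eps=0.03, steps=30, lr=0.005):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        h, loss = x + delta, 0.0
        for block, exit_head in zip(blocks, exits):
            h = block(h)
            probs = F.softmax(exit_head(h), dim=-1)
            # Penalize confidence at every exit: if no exit is ever sure,
            # the input is forced through the whole (expensive) network.
            loss = loss + probs.max(dim=-1).values.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)  # keep the change imperceptible
    return (x + delta).detach()
```

Every input processed this way runs the full depth of the network, so the victim pays the maximum compute, latency, and energy cost on each query.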

 
