Towards Trusted AI Week 34 – fooling AI with an Optical Adversarial Attack

Secure AI Weekly, August 30, 2021


Even sophisticated technologies such as artificial intelligence can be fooled

 

Researchers Demonstrate AI Can Be Fooled

BankInfoSecurity, August 25, 2021

Researchers at Purdue University have found that image recognition systems can be easily tricked with just a camera, a projector, and a PC.

The study describes OPAD, short for OPtical ADversarial attack. In this attack, a projector casts carefully calculated light patterns that change how three-dimensional objects appear to smart systems. In one experiment, a pattern projected onto a stop sign caused the recognition system to perceive it as a speed limit sign. According to the researchers, the same projected-pattern approach can be applied to many other smart systems in different domains, which greatly increases the potential reach of the attack.
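At its core, an attack like this starts with an ordinary digital adversarial perturbation, which the projector then delivers physically. The sketch below shows that digital step only: a targeted FGSM perturbation against a pretrained ImageNet classifier. It is not the researchers' actual OPAD pipeline, which additionally models the projector-camera radiometric response so the pattern survives projection onto a real surface. The file name and target class are illustrative assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ImageNet classifier stands in for the victim system.
model = models.resnet50(pretrained=True).eval()

# Photo of the object to attack (file name is a placeholder).
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
x = preprocess(Image.open("stop_sign.jpg")).unsqueeze(0)
x.requires_grad_(True)

# ImageNet normalization applied inside the forward pass, so the
# perturbation itself stays in plain [0, 1] pixel space.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

# Arbitrary target class (920 = "traffic light" in ImageNet); ImageNet
# has no speed-limit class, so this merely stands in for the attacker's goal.
target = torch.tensor([920])

loss = torch.nn.functional.cross_entropy(model((x - mean) / std), target)
loss.backward()

# Targeted FGSM step: move the image *toward* the target class. In a
# physical attack, eps is capped by the projector's achievable brightness.
eps = 0.05
pattern = -eps * x.grad.sign()
x_adv = (x + pattern).clamp(0.0, 1.0).detach()

print(model((x_adv - mean) / std).argmax(dim=1))  # ideally tensor([920])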

Biometric Technology Rally Testing Showed Racial Bias Worsened by Face Masks: DHS

 

Find Biometrics, August 27, 2021

For smart systems to be truly trustworthy, they need to be as free from bias as possible. At the moment this remains a serious problem, and racial bias is one prominent example.

Officials from the Department of Homeland Security commented that the problem becomes more serious when it comes to recognizing people wearing masks. Sixty face recognition systems were tested against a minimum standard of a 95 percent success rate for detecting and matching subjects' faces. About a third of the systems achieved this rate when scanning volunteers who identified themselves as black, while more than half met the standard when scanning white volunteers.

Data Poisoning: The Next Big Threat

Security Intelligence, August 26, 2021

Data poisoning attacks against the artificial intelligence models embedded in security software can pose a serious threat.

According to Johannes Ullrich, Dean of Research at the SANS Technology Institute, speaking at RSA 2021, this is a threat we must all watch out for. As this type of threat spreads, it is necessary to learn how to quickly identify data poisoning attacks and prevent them.

“One of the most basic threats when it comes to machine learning is one of the attackers actually being able to influence the samples that we are using to train our models,” Ullrich commented at RSA.
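To make that concrete, here is a toy, self-contained sketch (not from Ullrich's talk) of the simplest form of training-time poisoning, label flipping: an attacker who can influence a slice of the training data mislabels it, degrading the model the defender ends up deploying. The dataset, poisoning rate, and model are illustrative assumptions; real attacks can be far subtler, e.g. clean-label or backdoor poisoning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for, say, a
# malware-vs-benign training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 20% of the training samples, as if
# mislabeled data had been injected into the collection pipeline.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Measuring both models on held-out clean data shows the damage.
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Comparing the two held-out accuracies is also the germ of a defense: a sudden drop after retraining on newly collected data is one practical signal that the training set may have been tampered with.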
