Towards trusted AI Week 22 – The annual ML security evasion competition

Secure AI Weekly, June 7, 2021


Artificial intelligence needs comprehensive study


Machine Learning Security Evasion Competition 2021 Calls for Researchers and Practitioners

CUJO AI, May 31, 2021

The Machine Learning Security Evasion Competition 2021 (MLSEC2021) is an annual event that brings together ML practitioners and security researchers to compete in a defender challenge and an attacker challenge, where participants test their defensive and offensive skills against ML models.
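To make the attacker challenge concrete, here is a minimal sketch of the black-box evasion loop such competitions typically revolve around: mutate a sample, query a detector for a score, and keep changes that make the sample look more benign. The endpoint URL, the response schema, and the mutate() helper are hypothetical placeholders for illustration, not the real MLSEC API.

```python
# Hedged sketch of a black-box evasion loop: repeatedly submit a candidate
# sample to a scoring endpoint and keep mutations that lower the detection
# score. URL, response schema, and mutate() are hypothetical placeholders.
import random
import requests

SCORE_URL = "https://example.com/api/score"  # hypothetical endpoint


def mutate(sample: bytes) -> bytes:
    """Toy functionality-preserving tweak: append benign-looking bytes."""
    return sample + bytes(random.getrandbits(8) for _ in range(64))


def score(sample: bytes) -> float:
    """Submit a sample; assume the service returns {"score": float}."""
    resp = requests.post(SCORE_URL, data=sample, timeout=10)
    return resp.json()["score"]


def evade(sample: bytes, budget: int = 50, target: float = 0.5) -> bytes:
    """Greedy search: keep the candidate with the lowest detection score."""
    best, best_score = sample, score(sample)
    for _ in range(budget):
        candidate = mutate(best)
        s = score(candidate)
        if s < best_score:       # keep changes that look more benign
            best, best_score = candidate, s
        if best_score < target:  # dropped below the detection threshold
            break
    return best
```

The defender challenge is the mirror image: harden a model so that loops like this one exhaust their query budget without finding an evasive sample.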

The event was originally organized by Hyrum Anderson, Principal Architect, and Ram Shankar Siva Kumar, Data Cowboy, in Azure Trustworthy Machine Learning at Microsoft; Zoltan Balazs, Head of Vulnerability Research Lab at CUJO AI; Carsten Willems, CEO at VMRay; and Chris Pickard, CEO at MRG Effitas.

“Today, there are probably only a handful of people who are experts in both ML and cybersecurity. I am excited to be offering a unique learning opportunity for those getting started who would like to make a name for themselves in this field,” commented Balazs.

Are MRI Scans Done By AI Systems Reliable?

Analytics India Mag, May 31, 2021

Artificial intelligence is widely used in the medical field. Researchers from Facebook AI and NYU Langone Health, the teams behind fastMRI, recently came up with a way to use artificial intelligence to enhance MRI scans. However, the question arose of how effective this method is compared with conventional MRI diagnostics. Some experts believe that AI-powered scans are less accurate and prone to a variety of input inaccuracies.

A group of researchers including Fernando Pérez-García, Rachel Sparks, and Sébastien Ourselin released a paper, “TorchIO: a Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning,” covering important issues in the application of AI to MRI scans. For example, the work studied the robustness of models against small adversarial perturbations of the scans.
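As a rough illustration of that kind of perturbation study, here is a minimal sketch using TorchIO’s public transform API to apply physics-inspired MRI artifacts to a volume. The tensor shape and transform parameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: use TorchIO to simulate MRI acquisition artifacts, one way
# to probe how robust a reconstruction or segmentation model is to realistic
# input perturbations. Shapes and parameters below are illustrative.
import torch
import torchio as tio

# A synthetic 3D scan standing in for a real MRI volume (C, W, H, D).
subject = tio.Subject(mri=tio.ScalarImage(tensor=torch.rand(1, 128, 128, 64)))

# Compose physics-inspired perturbations TorchIO provides out of the box.
perturb = tio.Compose([
    tio.RandomMotion(degrees=5, translation=5),  # patient movement
    tio.RandomGhosting(num_ghosts=(2, 5)),       # k-space ghosting artifact
    tio.RandomNoise(std=(0.0, 0.05)),            # scanner noise
])

perturbed = perturb(subject)
clean, noisy = subject.mri.data, perturbed.mri.data

# A robustness check would compare a model's outputs on `clean` vs `noisy`
# and measure how much its predictions diverge under the perturbation.
print(clean.shape, noisy.shape, (clean - noisy).abs().mean())
```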

“Our main finding is that all reconstruction methods perform worse on the new MRI samples, but by a similar amount. Moreover we find that challenging images are naturally difficult to reconstruct, since both trained and untrained methods are equally prone to this shift,” commented the researchers.

The rush to commercialize AI is creating major security risks

The Next Web, June 6, 2021

DeepSloth, an attack targeting “adaptive deep neural networks,” was presented by a team of researchers from the University of Maryland at this year’s International Conference on Learning Representations (ICLR). The case is especially interesting because the researchers exposed a vulnerability in a technique they themselves had developed two years earlier. Despite the abundance of research in the field of artificial intelligence, some experts believe that research attention is directed in somewhat the wrong direction.
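For context, here is a minimal sketch of the early-exit inference pattern that DeepSloth exploits, assuming a toy two-block PyTorch network with a confidence threshold; the architecture, threshold, and shapes are illustrative, not the authors’ actual models.

```python
# Minimal sketch of the early-exit ("adaptive") inference pattern DeepSloth
# targets: the network exits at an intermediate classifier as soon as it is
# confident enough. A slowdown attack perturbs the input so no exit reaches
# the threshold, forcing the full forward pass. All values are illustrative.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # One internal classifier ("exit") per block.
        self.exit1 = nn.Linear(16, num_classes)
        self.exit2 = nn.Linear(32, num_classes)
        self.threshold = threshold

    def forward(self, x):
        h = self.block1(x)
        logits = self.exit1(h.mean(dim=(2, 3)))    # global average pool
        conf = logits.softmax(dim=-1).max().item()
        if conf >= self.threshold:                 # confident: exit early
            return logits, "exit1"
        h = self.block2(h)                         # otherwise keep computing
        return self.exit2(h.mean(dim=(2, 3))), "exit2"

model = EarlyExitNet().eval()
with torch.no_grad():
    _, used_exit = model(torch.rand(1, 3, 32, 32))
# DeepSloth-style inputs are crafted so confidence stays below the threshold
# at every exit, maximizing latency and energy consumed per query.
print("exit taken:", used_exit)
```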

“If we look at the papers proposing early-exit architectures, we see there’s no effort to understand security risks although they claim that these solutions are of practical value,” says Yigitcan Kaya, a Ph.D. student in computer science at the University of Maryland. “If an industry practitioner finds these papers and implements these solutions, they are not warned about what can go wrong. Although groups like ours try to expose potential problems, we are less visible to a practitioner who wants to use an early-exit model. Even including a paragraph about the potential risks involved in a solution goes a long way.”
