Towards trusted AI Week 43 – the new Matrix is breaking new ground

Secure AI Weekly – October 25, 2020


Malefactors are never out of the game, but neither are researchers: new ways of combating attacks on AI are already available.


New Matrix aiming to battle adversarial attacks

VentureBeat, October 22, 2020

According to a Gartner report, by 2022, 30% of all cyberattacks on AI-based systems will involve training-data poisoning, model theft, or adversarial samples. According to another survey, the majority of responding organizations said they lacked the proper tools to secure their machine learning models.

Thirteen organizations have come up with the Adversarial ML Threat Matrix, an industry-focused open framework. It was developed to make it much easier for security specialists to detect, respond to, and remediate threats against ML systems. Among the 13 companies participating in the project are Microsoft, the MITRE Corporation, IBM, Nvidia, Airbus, and Bosch. Microsoft representatives commented that the company collaborated with MITRE to analyze the approaches used by bad actors against machine learning models.

“The Adversarial Machine Learning Threat Matrix will help security analysts think holistically. While there’s excellent work happening in the academic community that looks at specific vulnerabilities, it’s important to think about how these things play off one another,” commented Mikel Rodriguez, MITRE’s representative. 
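To make the first of those attack classes concrete, here is a minimal sketch of an adversarial-sample attack in the fast gradient sign method (FGSM) style, written in PyTorch. The model, inputs, and perturbation budget are all placeholders; this illustrates the general technique the Matrix catalogues, not any code from the project itself.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, x, label, epsilon=0.03):
    """Craft an adversarial sample with the fast gradient sign method.

    model   -- any differentiable classifier (placeholder)
    x       -- input batch, assumed scaled to [0, 1]
    label   -- ground-truth label tensor
    epsilon -- perturbation budget (illustrative value)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Nudge every input value in the direction that increases the loss most.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

A perturbation this small is typically invisible to a human reviewer, which is exactly why frameworks that help analysts reason about such threats holistically are needed.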

Harvard professor launching Robust Intelligence startup

Forbes, October 21, 2020

Yaron Singer is a Harvard professor with seven years of hard work on adversarial machine learning behind him. He has now founded a startup, Robust Intelligence, together with a Ph.D. advisee and two former students. The startup's platform is said to be able to detect over 100 different adversarial attacks.

The startup has already come up with two products: an AI firewall and a “red team” offering called Rime, which behaves like an adversarial attacker performing a stress test on a client’s AI model. Robust Intelligence is now working with some 10 customers, including an influential financial organization and a major payment processor.

“Once you start seeing these vulnerabilities, it gets really, really scary, especially if we think about how much we want to use artificial intelligence to automate our decisions,” says Singer.
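Rime’s internals are not public, so the sketch below is only an assumed illustration of what a “red team” stress test can look like in principle: repeatedly perturb an input and measure how often the model’s prediction flips. Every name, default, and threshold here is hypothetical.

```python
import torch

def stress_test(model, x, n_trials=100, noise_scale=0.02):
    """Estimate how often small random perturbations flip a prediction.

    All parameter values are illustrative, not taken from Rime.
    """
    with torch.no_grad():
        baseline = model(x).argmax(dim=-1)
        flips = sum(
            not torch.equal(
                model((x + noise_scale * torch.randn_like(x)).clamp(0, 1))
                .argmax(dim=-1),
                baseline,
            )
            for _ in range(n_trials)
        )
    return flips / n_trials  # closer to 0 means a more stable model
```

A high flip rate on a test like this would be one signal that a model needs hardening before it is trusted with automated decisions.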

Fooling popular object detection cameras with a coloured beanie

ZDNet, October 23, 2020

Researchers from the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61, the Australian Cyber Security Cooperative Research Centre (CSCRC), and South Korea’s Sungkyunkwan University have demonstrated how certain triggers can exploit vulnerabilities in AI-based security cameras. According to the specialists, even a piece of clothing of a particular colour can potentially be used by a third party to easily fool a YOLO object detection camera.

First, a red beanie was used to make an object digitally disappear from the YOLO camera’s perception: while the camera could detect the person at first, wearing this accessory made the subject undetectable to the camera. The same result was demonstrated in experiments with T-shirts of different colours.
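The researchers’ exact pipeline is not described in the article, but the effect is easy to probe against a public detector. The assumed sketch below loads the openly available YOLOv5 model via torch.hub and checks whether a “person” is still detected with and without the coloured accessory; the image file names are hypothetical.

```python
import torch

# One public YOLO implementation; the researchers' exact model may differ.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def person_detected(image_path, conf_threshold=0.5):
    """Return True if the model finds at least one 'person' in the image."""
    results = model(image_path)
    # Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index
    person_class = 0  # 'person' in the COCO label set
    return any(conf >= conf_threshold and int(cls) == person_class
               for *_, conf, cls in results.xyxy[0].tolist())

# Hypothetical frames of the same subject with and without the trigger:
print(person_detected('subject_plain.jpg'))       # expected: True
print(person_detected('subject_red_beanie.jpg'))  # attack succeeds if False
```

If the second call returns False while the first returns True, the accessory is working as an evasion trigger of the kind the researchers describe.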

 “The problem with artificial intelligence, despite its effectiveness and ability to recognise so many things, is it’s adversarial in nature,” said Data61 cybersecurity research scientist Sharif Abuadbba. 
