Towards Trusted AI Week 48 – Fundamental security gaps in AI models

Secure AI Weekly – November 30, 2022


SPOOFING AI LIDAR COULD BLIND AUTONOMOUS VEHICLES TO OBSTACLES

HACKADAY, November 22, 2022

For decades, people have driven cars successfully using only their two eyes and two ears as sensors. Autonomous vehicles, by contrast, rely on a much wider array of sensors, including radar, lidar, ultrasonic sensors, and cameras, all working together to assess the current traffic situation. Yet it turns out to be far harder to deceive a human driver than an autonomous machine.

Researchers are now seriously concerned that LiDAR sensors can be tampered with to hide obstacles and deceive autonomous vehicles, potentially causing accidents or worse.

LiDAR is the optical counterpart of radar: it sends out laser pulses and picks up the light reflected back from the environment. Pulses returning from a distant object take longer to arrive than pulses returning from a nearby one, so the round-trip time of each pulse directly encodes distance. LiDAR is considered the gold-standard sensor for autonomous driving, as it detects objects more accurately and reliably than radar.
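
To make the time-of-flight principle concrete, here is a minimal Python sketch (our own illustration, not from the article; the function name and numbers are made up) that converts a pulse's round-trip time into a distance:

```python
# Minimal time-of-flight sketch: distance from a LiDAR pulse's round-trip time.
# Illustrative only; real LiDAR units do this in hardware with many corrections.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in meters.

    The pulse travels to the object and back, so the one-way
    distance is half the round-trip path length.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after 200 nanoseconds was reflected ~30 m away.
print(f"{tof_distance(200e-9):.2f} m")  # -> 29.98 m
```

Real LiDAR units perform this conversion in hardware for every one of the many pulses fired per scan, but the underlying arithmetic is this simple.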

The article reviews a new research paper demonstrating a successful adversarial attack on AI-driven LiDAR sensors, in which a laser is used to hide selected objects from the sensor's "sight". The authors call this attack the "Physical Removal Attack", or PRA.
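
For intuition, here is a toy Python simulation of the general mechanism such attacks exploit, as we understand it from the paper's description: many LiDAR pipelines keep only the closest return per beam, so attacker-injected near returns can displace the genuine ones. All names and numbers below are our own simplification, not the paper's code:

```python
# Toy illustration of the idea behind a point-removal attack on LiDAR
# (our own simplification; the actual PRA is considerably more involved).
# Many LiDAR pipelines keep only one return per beam direction, typically
# the closest one. If an attacker injects fake near returns on the beams
# that would hit a real obstacle, the genuine returns are dropped.

def keep_closest_per_beam(returns):
    """returns: list of (beam_id, distance_m) -> dict of beam_id -> distance."""
    closest = {}
    for beam, dist in returns:
        if beam not in closest or dist < closest[beam]:
            closest[beam] = dist
    return closest

# Genuine returns: a pedestrian at ~10 m on beams 3-5, background at 40 m.
genuine = [(1, 40.0), (2, 40.0), (3, 10.0), (4, 10.0), (5, 10.0), (6, 40.0)]
# Attacker-injected returns on the same beams, closer than the pedestrian.
spoofed = [(3, 1.5), (4, 1.5), (5, 1.5)]

print(keep_closest_per_beam(genuine))            # pedestrian visible at 10 m
print(keep_closest_per_beam(genuine + spoofed))  # beams 3-5 now read 1.5 m
# Downstream filters often discard such ultra-near points as sensor noise,
# leaving no trace of the pedestrian in the point cloud at all.
```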

Read the full review of the PRA in the article at the link.

Adversarial AI Attacks Highlight Fundamental Security Issues

DarkReading, November 23, 2022

An AI model learns only from the data it was given during training. This poses a huge threat: because the model lacks broader context, attackers can trick it with unexpected inputs it was never trained to handle.

The recent MLSEC 2022 Machine Learning Security Evasion Competition asked participants to modify photos of celebrities so that an AI system would recognize them as different people.
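
As a concrete, generic example of this kind of evasion, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) in PyTorch. This illustrates adversarial perturbation in general; it is not the technique used by any particular competition entry:

```python
# Minimal FGSM sketch: a generic evasion attack that nudges an input in the
# direction that increases the model's loss. Illustrative only.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of image batch x (values in [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Usage with any differentiable image classifier `model` and a batch (x, y):
#   x_adv = fgsm(model, x, y)
#   model(x_adv).argmax(1)  # often differs from y, though x_adv looks unchanged
```

Because the perturbation is bounded by epsilon per pixel, the modified image typically looks identical to a human while the model's prediction changes.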

According to Eugene Neelou, CTO at Adversa AI, “Traditional cybersecurity cannot protect from AI vulnerabilities — the security of AI models is a distinct domain that should be implemented in organizations where AI/ML is responsible for mission-critical or business-critical decisions. And it’s not only facial recognition — anti-fraud, spam filters, content moderation, autonomous driving, and even healthcare AI applications can be bypassed in a similar way.”

Read the full article at the link.

The Threat of Offensive AI to Organizations

ScienceDirect

Automating tasks, extracting information from huge amounts of data, and synthesizing media that is nearly indistinguishable from the real thing: all of this has become possible thanks to AI. As is usually the case, though, no new technology escapes attempts to abuse it, and AI gives attackers an opportunity to amplify their attacks and expand their campaigns.

Offensive AI has been covered and discussed before, but many questions remain open from an organizational perspective. For example: whom does AI benefit more, the attacker or the defender? What are the most serious threats to organizations right now, and what impact will they have in the future?

The published study is noteworthy because it addresses the issue of secure AI and the vulnerabilities of AI applications. The researchers identified more than 30 offensive AI capabilities, mapped out the AI threats facing organizations, and clarified who the attackers might be.

Read the study at the link.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.

    Written by: admin
