Towards Trusted AI Week 44 – Adversarial attacks against LIDAR and Responsible AI initiatives by Google’s head of AI

Secure AI Weekly + Trusted AI Blog, November 1, 2022


AI 2030: What to expect, and five areas for investment, from our head of AI

Google Cloud Blog, October 27, 2022
Andrew Moore

The takeaways from Google's head of AI can serve as an example and a role model for other companies. One of these takeaways focuses on Responsible AI.

The future of AI is now. However, no one wants to build an AI-based product that causes harm. That is why such systems have to be tested for vulnerabilities and developed as so-called responsible AI.

In addition, responsible AI research practices should be established, and businesses should work only with responsible and explainable AI.

This is why it is essential to understand the behavior of an AI system at the early stages, particularly during design. Responsibility should not be an afterthought.

 

Mitigating the risks of artificial intelligence compromise

HelpNetSecurity, October 27, 2022
Thorsten Stremlau

Artificial intelligence does not only serve good purposes. Adversaries actively use AI for malicious ends, and their techniques have become increasingly sophisticated.

One can look at the four elements of machine learning (ML) to evaluate the risks of a cyberattack and apply protective measures:

  • data sets;
  • algorithms;
  • models;
  • training.

The author suggests that the best technique to mitigate AI risks is a “trusted computing” model that addresses all four of the above-mentioned AI elements.

It is also necessary not only to invest in up-to-date technologies but also to educate teams on how to detect threats and establish defenses. This reduces the likelihood of financial and reputational damage.

Read more following the link.

 

Laser attack blinds autonomous vehicles, deleting pedestrians and confusing cars

TechXplore, October 31, 2022
University of Florida

Autonomous cars need to monitor the environment and identify the objects around them. This helps them avoid obstacles and keep passengers, as well as other road users, safe.

Sophisticated self-driving cars can use lidar, a device that serves as the eyes of the car: it measures the distance to surrounding objects so the car can choose safe actions. Yet lidar is not perfect. According to recent research conducted by researchers from the University of Florida, the University of Michigan and the University of Electro-Communications in Japan, lidar systems can be fooled.

The new research reveals that a laser attack can trick AI-driven lidar sensors into deleting data about obstacles such as pedestrians, making them invisible to the lidar. The researchers demonstrate that existing defense strategies are not enough and propose modifications that should address this issue.
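To make the idea concrete, here is a toy sketch (not the researchers' actual attack) of what "deleting" obstacle data means for a lidar consumer. A scan is modeled as a list of hypothetical `(angle, distance)` returns, and the attack is simulated by simply dropping all returns inside a targeted angular sector, so an obstacle there vanishes from the perceived scene:

```python
# Toy illustration of lidar point deletion (conceptual only; the real
# attack uses a synchronized laser to suppress returns at the sensor).
# A scan is a list of (angle_deg, distance_m) returns.

def spoof_sector(scan, start_deg, end_deg):
    """Drop every return whose angle falls inside the attacked sector."""
    return [(a, d) for (a, d) in scan if not (start_deg <= a <= end_deg)]

# A "pedestrian" appears as close-range returns around 90 degrees.
scan = [(0.0, 30.0), (45.0, 25.0), (90.0, 2.5), (91.0, 2.4), (135.0, 28.0)]

attacked = spoof_sector(scan, 85.0, 95.0)
# The close-range returns are gone, so obstacle detection downstream
# sees only distant objects in that direction.
```

The sketch shows why this is dangerous: a planner consuming the attacked scan has no indication that anything was removed, since an empty sector looks identical to genuinely free space.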

Read the article following the link.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.
