Towards Trusted AI Week 24 – AI Red Teaming discussed at RSA

Secure AI Weekly – June 15, 2022


Knowledge about artificial intelligence and its security needs to be constantly improved


How Do We Attack AI? Learn More at Our RSA Panel!

Cloud Security Podcast by Google, June 6, 2022

Cloud Security Podcast by Google held a conversation with Google Research Scientist Nicholas Carlini, in which Nicholas shared his vision of AI security and answered the relevant questions. This is the second time the podcast has covered Secure AI; the previous episode featured DeepMind’s CISO, so the topic is clearly gaining momentum.

Among other things, the episode explored a threat model for a large-scale AI system and assessed the attacks against it. Nicholas talked about the possible types of attacks and the threats that come with training machine learning systems on large datasets: since the data is scraped from the Internet, it cannot be trusted. He also focused on the three main issues researchers worry about, namely Poisoning, Evasion, and Extraction attacks.
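The episode itself stays at the conceptual level, but to make the Evasion category concrete, here is a minimal sketch of the classic FGSM evasion attack in PyTorch. This is purely illustrative and not taken from the podcast; the model, inputs, and epsilon value are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, labels, epsilon=0.03):
    """Minimal FGSM sketch: nudge the input along the sign of the loss
    gradient so a trained classifier misclassifies it (illustrative only)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    # One signed-gradient step, clipped back to a valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier` is any trained image model.
# x_adv = fgsm_evasion(classifier, image_batch, label_batch)
```

Poisoning and extraction attacks follow the same adversarial logic but target the training data and the model’s memorized contents, respectively.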

During the conversation, Nicholas also talked about the future of safe and secure AI, potential threats, and how the “opacity” of AI affects the actions of both attackers and defenders.

Overall, the episode is highly recommended.

Listen to the full podcast recording via the link.

Red Teaming AI Systems: The Path, the Prospect and the Perils

RSAC, June 7, 2022

Google, Microsoft, and MITRE met at RSA, the world’s largest security conference, to discuss and make sense of the growing practice of red teaming AI systems.

Specialists from Google, Microsoft, and MITRE tried to figure out whether red teaming AI systems is the future or just a passing whim. During the panel, they addressed a number of topical questions:

– Is red teaming AI systems necessary, and what are the stakes for organizations?

– What is the most practical way to protect AI systems in the enterprise, and what steps need to be taken to get there?

The fact that Secure AI made it to a roundtable at the industry’s most popular cybersecurity conference, bringing together experts from top players, is a clear sign that securing AI is becoming a must-have.

DARPA Seeks Proposals to Improve Trustworthy AI

Meritalk, June 7, 2022

To wrap up this week’s interest in Secure AI, DARPA is continuing its research towards Trustworthy AI.

As part of its latest Assured Neuro Symbolic Learning and Reasoning (ANSR) program, the Defense Advanced Research Projects Agency (DARPA) is seeking proposals for artificial intelligence (AI) architectures and algorithms that integrate symbolic reasoning with data-driven machine learning. The agency’s aim is to create trustworthy and reliable AI-based systems.

In a press release, DARPA ANSR Program Manager Sandeep Neema said: “Motivating new thinking and approaches in this area will help ensure that autonomous systems operate safely and work properly. This will be an integral part of trust, which is the key to a successful acceptance of autonomy by the Department of Defense.”

What is the overall goal of the ANSR program? To advance hybrid AI algorithms and to develop evidence-based methods that support confident judgments about the correctness of those algorithms.

Even with recent advances in machine learning algorithms and assurance technologies, high-level autonomy remains elusive. According to DARPA officials, the reason is twofold:

  • On the one hand, data-driven machine learning lacks robustness, interpretability, and transparency, while its computational and data needs are unsustainable;
  • On the other hand, conventional approaches to building intelligent applications and autonomous systems, based on knowledge representation and symbolic reasoning, are amenable to assurance but remain brittle in the face of the uncertainties encountered in the real world (a toy sketch of this hybrid idea follows below).
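To make the hybrid idea concrete: one common neurosymbolic pattern is to let a neural network handle perception while a symbolic rule base vetoes any action it cannot justify as safe. The sketch below is a toy illustration under our own assumptions; the labels, rules, and confidence threshold are hypothetical and not part of the ANSR solicitation:

```python
import torch

# Hypothetical label set for an autonomous-driving perception model.
LABELS = ["clear_road", "stop_sign", "pedestrian"]

# Symbolic component: declarative rules mapping a perceived situation
# to the set of actions considered safe under that percept.
SAFE_ACTIONS = {
    "clear_road": {"proceed", "slow_down", "stop"},
    "stop_sign":  {"slow_down", "stop"},
    "pedestrian": {"stop"},
}

def assured_action(model, frame, proposed_action, min_conf=0.9):
    """Hybrid decision: a neural net proposes a percept, and the symbolic
    rule base vetoes any action it cannot justify as safe."""
    probs = torch.softmax(model(frame), dim=-1).squeeze(0)
    conf, idx = probs.max(dim=-1)
    percept = LABELS[int(idx)]
    # Low confidence and rule violations both fall back to a safe default.
    if conf < min_conf or proposed_action not in SAFE_ACTIONS[percept]:
        return "stop"
    return proposed_action
```

The appeal of this split is exactly the twofold problem above: the learned component supplies flexibility where hand-written rules are too brittle, while the declarative rules stay small enough to inspect and verify.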

Read more about the DARPA ANSR program in the article.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
