Towards Trusted AI Week 14 – tools for creating simulated attacks

Secure AI Weekly, April 12, 2021


Testing potential risks in a safe environment is essential to building defenses.


Microsoft open-sources tool to use AI in simulated attacks

VentureBeat, April 9, 2021

Microsoft has been researching how machine learning and artificial intelligence can be applied to security. As part of this work, the company has open-sourced a toolkit that researchers can use to build simulated network environments and visually track how those environments hold up under attack.

For example, using CyberBattleSim, you can trace how attackers move around a network while hunting for a vulnerability. Security specialists can build attack simulations, set up various nodes, and see which vulnerabilities exist in the system, what works, and what needs attention. In other words, specialists can simulate an attack in an artificial environment, see which options are open to the attacker in the existing system, and decide what can be done to protect it.

“The simulated attacker’s goal is to take ownership of some portion of the network by exploiting these planted vulnerabilities. While the simulated attacker moves through the network, a defender agent watches the network activity to detect the presence of the attacker and contain the attack,” commented the Microsoft 365 Defender Research Team.
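CyberBattleSim represents the network as a graph of nodes with planted vulnerabilities that an attacker agent tries to exploit. As a rough, library-free illustration of that idea (this is not the actual CyberBattleSim API; the node names and vulnerabilities below are invented), a minimal lateral-movement simulation might look like this:

```python
import random

# Toy network: each node maps planted vulnerabilities to the node they expose.
# All names and vulnerabilities are hypothetical, for illustration only.
NETWORK = {
    "workstation": {"CVE-FAKE-001": "file_server"},
    "file_server": {"weak_ssh_password": "db_server"},
    "db_server": {},  # goal node: no further hops needed
}

def simulate_attack(start="workstation", goal="db_server", max_steps=10, seed=0):
    """Randomly exploit planted vulnerabilities until the goal node is owned."""
    rng = random.Random(seed)
    owned = {start}
    for step in range(max_steps):
        # Collect exploits reachable from any node the attacker already owns.
        candidates = [
            (node, vuln, target)
            for node in owned
            for vuln, target in NETWORK[node].items()
            if target not in owned
        ]
        if not candidates:
            break
        node, vuln, target = rng.choice(candidates)
        owned.add(target)
        print(f"step {step}: exploited {vuln} on {node} -> owned {target}")
        if goal in owned:
            print("attacker reached the goal node")
            break
    return owned

if __name__ == "__main__":
    simulate_attack()
```

A defender agent plays the opposite role: it watches for exactly this kind of movement and tries to contain the attack before the goal node is reached.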

Adversarial machine learning: The underrated threat of data poisoning

TechTalks, April 5, 2021

Most experts see adversarial attacks as one of the main threats to machine learning systems: attackers manipulate input data in ways that influence the system's behavior. As researchers began to find ways of defending against adversarial attacks, attackers intensified their efforts in turn, and the field has settled into a constant arms race in which each side keeps trying to outpace the other.

Computer vision is one of the most actively studied areas in this regard. Computer vision systems process a wide variety of visual information, and attackers can trick them, for example, by adding a layer of noise to an image that changes how it is recognized. "Randomized smoothing" is a proven defense against such attacks: it makes machine learning systems more resilient to virtually invisible changes in their inputs.
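The core idea behind randomized smoothing is simple: instead of classifying an image once, classify many noisy copies of it and return the majority vote, so tiny adversarial perturbations are drowned out by the added noise. A minimal NumPy sketch, where `classify` is an assumed placeholder for a trained model's prediction step:

```python
import numpy as np

def smoothed_classify(classify, image, noise_sigma=0.25, num_samples=100, seed=0):
    """Randomized smoothing: majority vote over Gaussian-noised copies of the input.

    `classify` is any function mapping an image array to an integer label;
    here it stands in for a real model's predict step.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(num_samples):
        noisy = image + rng.normal(0.0, noise_sigma, size=image.shape)
        label = classify(noisy)
        votes[label] = votes.get(label, 0) + 1
    # Return the label that won the vote (the "smoothed" prediction).
    return max(votes, key=votes.get)

# Example with a trivial stand-in classifier: label is 1 if the mean pixel > 0.5.
if __name__ == "__main__":
    toy_classify = lambda img: int(img.mean() > 0.5)
    image = np.full((8, 8), 0.6)
    print(smoothed_classify(toy_classify, image))
```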

However, randomized smoothing is not ideal either. According to AI researchers at Tulane University, Lawrence Livermore National Laboratory, and IBM Research, even specially trained systems may fail to correctly recognize images modified by attackers. One effective tactic that attackers can use against such specially trained models is data poisoning.
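Data poisoning targets the training set rather than the deployed model: the attacker corrupts a small fraction of the training examples, for instance by flipping labels or planting trigger patterns, so the model learned from that data already behaves the way the attacker wants. The following label-flipping sketch with scikit-learn is purely illustrative of the mechanism, not of any specific published attack:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for a real dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction=0.1, seed=0):
    """Flip the labels of a small random fraction of training examples."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))

print("accuracy after clean training:   ", clean_model.score(X_test, y_test))
print("accuracy after poisoned training:", poisoned_model.score(X_test, y_test))
```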

The paper "How Robust are Randomized Smoothing based Defenses to Data Poisoning?" covers aspects of machine learning security and attack resistance that had previously been overlooked.
