Towards Trusted AI Week 2 – DARPA’s public tools teach AI developers to defend against attacks, and others

Adversarial ML, January 11, 2022

Background

Machine learning has come a long way, but it needs to meet safety criteria


Adversarial Machine Learning: A Beginner’s Guide to Adversarial Attacks and Defenses

Hackernoon, January 9, 2022

The article explains the basic principles of machine learning in simple terms.

Adversarial machine learning studies how machine learning algorithms can be attacked and defended. In this paradigm, there are four types of attacks that machine learning models can be subjected to.

In a model extraction attack, an adversary steals a copy of a remotely deployed machine learning model: by sending queries to the target model, the attacker extracts as much information as possible and uses the collected input/output pairs to train a substitute. Inference attacks reverse the information flow of a machine learning model, allowing an attacker to obtain information about the model or its training data that was clearly never intended to be shared.
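To make the extraction idea concrete, here is a minimal sketch: the attacker queries a victim model and trains a local substitute on the resulting input/output pairs. `query_victim` is a hypothetical stand-in for a deployed model's prediction API; none of this code comes from the article itself.

```python
# Minimal model-extraction sketch: train a local substitute from the
# victim's input/output pairs. `query_victim` is a hypothetical stand-in
# for a remotely deployed model's prediction endpoint.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_victim(x: np.ndarray) -> np.ndarray:
    # Placeholder for remote API calls; in a real attack this would be
    # e.g. HTTP requests to the deployed model.
    return (x.sum(axis=1) > 0).astype(int)

rng = np.random.default_rng(0)
queries = rng.normal(size=(1000, 10))   # attacker-chosen inputs
labels = query_victim(queries)          # victim's responses

substitute = LogisticRegression().fit(queries, labels)  # the "stolen copy"
print("agreement with victim:",
      (substitute.predict(queries) == labels).mean())
```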

Poisoning attacks involve an attacker inserting corrupted data into the training dataset, which lets a third party compromise the target machine learning model during training. Finally, in an evasion attack, an attacker injects a small disturbance into the input of a machine learning model, causing an incorrect classification. This type of attack is similar to poisoning, but evasion attacks try to exploit the model's weaknesses during the inference phase.
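As an illustration of the "small disturbance" an evasion attack injects, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic way such perturbations are crafted. The toy model and input are our own assumptions, not taken from the article.

```python
# Evasion sketch: FGSM perturbs the input in the direction that most
# increases the loss, within a small budget eps.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)             # toy stand-in classifier
x = torch.rand(1, 784, requires_grad=True)   # clean input
y = torch.tensor([3])                        # true label

loss = F.cross_entropy(model(x), y)
loss.backward()                              # populates x.grad

eps = 0.05                                   # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("clean vs adversarial prediction:",
      model(x).argmax().item(), "->", model(x_adv).argmax().item())
```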

The article describes in more detail both the attack types themselves and the defenses currently available against them. The text is also available in podcast format.

DARPA’s New Public Tools Teach AI Developers to Defend Against Attacks

Airforcemag, January 7, 2022 

For the military to trust artificial intelligence, the technology must be sufficiently secure. Developers now have an open-source toolkit for learning new defensive techniques and testing their products against simulated attacks.

“We’re trying to get the knowledge out so developers can build systems that are defended,” said Bruce Draper, DARPA’s program manager for the newly available set of tools called GARD, which stands for Guaranteeing AI Robustness against Deception. The program focuses on military applications in particular. To create GARD, DARPA brought together researchers from IBM, Two Six Technologies, MITRE Corp., the University of Chicago, and Google Research.

GARD builds on the Adversarial Robustness Toolbox, IBM’s open-source library of attack and defense techniques, and has worked to improve it.
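For readers unfamiliar with the Adversarial Robustness Toolbox (ART), here is a hedged sketch of its typical workflow: wrap a trained model, generate adversarial examples with an attack, and compare clean versus adversarial accuracy. The dataset, model, and parameter choices are illustrative assumptions, not part of GARD.

```python
# Sketch of the ART workflow: wrap a model, attack it, measure the damage.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixels to [0, 1]
model = LogisticRegression(max_iter=1000).fit(X, y)

classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)                   # adversarial test set

print(f"accuracy: clean {model.score(X, y):.2f} "
      f"-> adversarial {model.score(X_adv, y):.2f}")
```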

Google Research has contributed a self-study repository with “test dummies” that lets developers learn general approaches to defensive AI.

The APRICOT datasets available through GARD also help developers practice attacking and defending their own systems.

“How do you vet that—how do you know if it’s safe?” Draper commented. “Our goal is to try to develop these tools so that all systems are safe.”

From viral fun to financial fraud: How deepfake technology is threatening financial services

Fintech Futures, January 6, 2022

We have long understood that deepfake technologies are not as harmless as they might seem at first glance, and here is another confirmation of that. Internet fraud continues to evolve, and as services in the financial sector become more automated, fraudsters have seized the opportunity.

With the outbreak of the pandemic in 2020, online fraud in the UK grew by a third, and financial crime in the US is estimated at up to $3.5 trillion a year. As security measures evolve, criminals invent ever more fraudulent tactics, and now deepfake technologies have entered the game.

Ghost fraud involves criminals using a deceased person’s personal information for financial gain: for example, to access online services and savings accounts, build up credit scores, or apply for cars, loans, or benefits. Criminals can also file fraudulent insurance or other claims on behalf of the deceased, and deepfakes can be used to convince an official that the customer is still alive.

Synthetic identity fraud is the most sophisticated deepfake tactic. It is extremely difficult to detect because criminals combine fake, real, and stolen information to create a new person who does not exist in reality.

These are not the only attack techniques scammers have mastered in the financial sector; read the full article to find out more.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.
