Towards trusted AI Week 21 – Facebook customizes ‘Dynascore’ metric

Secure AI Weekly, May 31, 2021


AI applications need higher accuracy, fewer biases, and stronger ethics to function properly


Facebook Launches An Evaluation-As-A-Service Framework For ML Models

Analytics India Magazine, May 30, 2021

Facebook wants to push the industry toward more rigorous evaluation of NLP models in real-world settings with its new customizable ‘Dynascore’ metric. The metric allows performance evaluation to be tailored in a comprehensive way. While the model is running, the metric can determine which examples mislead the model the most and how they affect the quality of its predictions.

“We hope Dynabench will help the AI community build systems that make fewer mistakes, are less subject to potentially harmful biases, and are more useful and beneficial to people in the real world,” said Facebook.

A rogue killer drone ‘hunted down’ a human target without being instructed to, UN report says

Insider, May 30, 2021 

A UN report states that a weaponized drone designed for asymmetric warfare and anti-terrorist operations “hunted down a human target” without any prior instructions. Back in 2020, a KARGU-2 drone attacked a soldier during a conflict in Libya between the government and a breakaway military faction.

“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the report from the UN Security Council’s Panel of Experts on Libya commented.

In this way, the case has become a clear illustration of how important it is to control modern AI technologies, especially those whose functions can harm a person.

Adversarial attacks in machine learning: What they are and how to stop them

Venture Beat, May 29, 2021 

Adversarial machine learning is no longer a completely new technique, but it continues to grow in prominence, which is causing increasing alarm among AI researchers. It is no secret that the widespread use of smart technologies also increases the risk of such attacks, the consequences of which can be devastating.

The article provides a clear overview of the basics of adversarial attacks, shedding light on their main types, such as evasion, poisoning, and model extraction. It also surveys ongoing research in this area and discusses available methods of protection against potential threats. Although such attacks are mainly carried out in laboratories, the threat is quite real, and the topic requires careful study in the future.
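To make the evasion category concrete, here is a minimal sketch of a gradient-sign (FGSM-style) evasion attack against a toy logistic regression model. The model, weights, input, and epsilon are illustrative assumptions, not taken from the article; the point is only that a small, targeted perturbation in the direction of the loss gradient can flip a correct prediction.

```python
import numpy as np

# Toy example: evasion attack on a hand-written logistic regression.
# All values here are illustrative assumptions for demonstration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Probability that x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, y, w, b, eps):
    """Shift x by eps in the direction that increases the loss.

    For logistic regression with binary cross-entropy loss, the
    gradient of the loss with respect to the input x is (p - y) * w.
    """
    p = predict(x, w, b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A fixed "model" and a correctly classified input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

p_clean = predict(x, w, b)                    # confident class 1
x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
p_adv = predict(x_adv, w, b)                  # pushed toward class 0
```

With these toy numbers the clean input is classified as class 1, while the perturbed input crosses the 0.5 decision boundary and is misclassified; the same principle, with a much smaller epsilon, underlies evasion attacks on image classifiers.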
