‘The History of Adversarial AI’ talk at HITB
Alex Polyakov and Eugene Neelou talked about the history of AI security at the HITB security conference. HITBSecConf or the Hack In The Box Security Conference is an annual must-attend ...
Secure AI Weekly, May 31, 2021
AI applications need higher accuracy, fewer biases and more ethics to function properly
Analytics India Magazine, May 30, 2021
Facebook wants to push the industry toward more rigorous, real-world evaluation of NLP models with a new metric called ‘Dynascore’. The metric allows performance to be evaluated in a comprehensive, customizable way: while a model is running, it can identify which examples mislead the model the most and how they affect the quality of its predictions.
“We hope Dynabench will help the AI community build systems that make fewer mistakes, are less subject to potentially harmful biases, and are more useful and beneficial to people in the real world,” said Facebook.
Insider, May 30, 2021
A UN report states that a weaponized drone designed for asymmetric warfare and anti-terrorist operations “hunted down a human target” without any prior instructions. Back in 2020, a KARGU-2 drone attacked a soldier during the conflict in Libya between government forces and a breakaway military faction.
“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the report from the UN Security Council’s Panel of Experts on Libya commented.
The case is a clear illustration of how important it is to control modern AI technologies, especially those capable of harming people.
Venture Beat, May 29, 2021
Adversarial machine learning is no longer a completely new technique, but it continues to grow in popularity, causing increasing alarm among AI researchers. It is no secret that the widespread use of smart technologies also increases the risk of such attacks, whose consequences can be devastating.
The article offers a concise overview of the basics of adversarial attacks and sheds light on their main types, such as evasion, poisoning and model extraction. It also surveys ongoing research in the area and discusses available methods of protection against potential threats. Although such attacks are so far mainly carried out in laboratories, the threat is quite real, and the topic requires careful study in the future.
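To make the evasion category concrete, here is a minimal sketch of a gradient-sign (FGSM-style) evasion attack against a toy logistic-regression model. The model weights, input values, and perturbation budget below are illustrative assumptions for the sketch, not details taken from the article.

```python
import math

# Toy logistic-regression "victim model": all numbers here are
# illustrative assumptions, not from any real deployed system.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """Evasion attack: step each feature in the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y_true) * w.
    """
    p = predict(w, b, x)
    return [
        xi + eps * (1.0 if (p - y_true) * wi > 0 else -1.0)
        for wi, xi in zip(w, x)
    ]

w = [2.0, -1.0]          # assumed model weights
b = 0.0
x = [1.0, 0.5]           # input correctly classified as class 1
y = 1.0

x_adv = fgsm_perturb(w, b, x, y_true=y, eps=0.8)
print(predict(w, b, x))      # confident class-1 prediction
print(predict(w, b, x_adv))  # small perturbation flips the decision
```

The same gradient-sign idea underlies evasion attacks on image classifiers, where an imperceptible per-pixel perturbation is enough to change the predicted label.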
Adversa AI, Trustworthy AI Research & Advisory