Biased AI can cause significant damage to reputation and business
Venture Beat, June 25, 2021
It’s no secret that biased AI can significantly undermine public trust in the technology.
The National Institute of Standards and Technology (NIST), the U.S. standards agency, has recently released a document outlining feedback and recommendations for dealing with the risk of bias in AI. The work describes an approach for identifying and managing “pernicious” biases. The proposed framework gives recommendations on how to spot and address AI biases at any point of a system’s lifecycle, from conception to release.
“Technology or datasets that seem non-problematic to one group may be deemed disastrous by others. The manner in which different user groups can game certain applications or tools may also not be so obvious to the teams charged with bringing an AI-based technology to market,” the NIST paper says.
The Register, June 18, 2021
Researchers at the Ubiquitous System Security Lab of Zhejiang University and the University of Michigan’s Security and Privacy Research Group have found a way to blind autonomous vehicles to obstacles using nothing more than simple audio signals. The researchers have presented Poltergeist, an attack against camera-based computer-vision systems used in autonomous vehicles.
Using specially crafted audio, the attack triggers the image-stabilisation functions of the camera sensor. As a result, the image gets blurred, making obstacles invisible to the ML system.
“Autonomous vehicles increasingly exploit computer-vision based object detection systems to perceive environments and make critical driving decisions,” the authors explain. “To increase the quality of images, image stabilisers with inertial sensors are added to alleviate image blurring caused by camera jitter.”
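The practical point is that the attacker never touches the vehicle’s software: degrading the camera image is enough to mislead the detector downstream. The following minimal sketch is not the researchers’ attack code; it only simulates the kind of motion blur a resonating stabiliser could introduce, so the effect on a detector’s input can be inspected. It assumes OpenCV and NumPy are installed, and the file name `street.jpg` is a hypothetical example.

```python
# Simulate stabiliser-induced motion blur on a dashcam-style frame.
# This is an illustration of the blurring effect only, not the Poltergeist attack.
import cv2
import numpy as np

def motion_blur(image: np.ndarray, kernel_size: int = 25) -> np.ndarray:
    """Apply a horizontal motion-blur kernel, mimicking camera jitter."""
    kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)
    kernel[kernel_size // 2, :] = 1.0 / kernel_size  # average along one axis
    return cv2.filter2D(image, -1, kernel)

frame = cv2.imread("street.jpg")              # hypothetical input frame
blurred = motion_blur(frame, kernel_size=35)  # stronger kernel = heavier blur
cv2.imwrite("street_blurred.jpg", blurred)

# Feeding `blurred` rather than `frame` into an object detector typically
# lowers its confidence scores or removes detections entirely, which is the
# effect the acoustic attack exploits.
```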
CIO Insight, June 21, 2021
This is another article shedding light on the basic principles of adversarial attacks. The topic is extremely important, as such attacks can potentially cause significant damage to reputation and business. Gartner predicts that by 2022 a third of cyberattacks will involve elements of adversarial attacks, such as data poisoning.
The article explains the essence of adversarial attacks and the main types within this category, such as poisoning and evasion. It also describes in detail the main risks associated with adversarial attacks and recounts one of the best-known real-life examples: the manipulation of Microsoft’s Tay Twitter bot in 2016. Finally, the authors provide a number of recommendations on how to protect a system from such attacks.
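To make the evasion category concrete, here is a minimal FGSM-style sketch, a generic textbook technique rather than code from the article. It nudges an input image by a single gradient step so that a classifier is likely to misclassify it; `model`, `image`, and `label` are placeholders the reader would supply, and the example assumes PyTorch is installed.

```python
# Generic FGSM evasion sketch: perturb an input just enough to change a
# classifier's prediction. Illustration only, not the article's method.
import torch
import torch.nn.functional as F

def fgsm_evasion(model: torch.nn.Module,
                 image: torch.Tensor,
                 label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of `image` crafted with one gradient step."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage with placeholder objects:
#   adv = fgsm_evasion(model, image, label)
#   if model(adv).argmax(dim=1) != label, the evasion succeeded.
```

The key design point, and the reason evasion is hard to defend against, is that the perturbation is bounded by a small `epsilon`, so the altered input looks essentially unchanged to a human while still flipping the model’s output.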