Trusted AI Blog


September 6, 2021

Secure AI Weekly

Towards Trusted AI Week 35 – Facebook apologized for its AI software’s error

AI serves not only for good: adversaries can also use it to advance their attacks. Facebook apologizes after its AI software labels Black men ‘primates’ in a video featured on the platform. Facebook representatives expressed their regrets for the error of the company’s artificial intelligence ...

September 2, 2021


Adversarial ML

Best of Adversarial ML Week 34 – Attacking aerial imagery object detector

The Adversa team prepares a weekly selection of the best research in the field of artificial intelligence security. Physical Adversarial Attacks on an Aerial Imagery Object Detector: deep neural networks (DNNs) provide significant assistance in processing aerial imagery taken by earth-observing satellite platforms. However, since ...

August 19, 2021


Adversarial ML

Best of Adversarial ML Week 32 – Mitigating robust and universal Adversarial Patch Attack

The Adversa team prepares a weekly selection of the best research in the field of artificial intelligence security. Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack. Adversarial patch attacks target image classification deep neural networks (DNNs); within such attacks a malefactor ...

August 16, 2021


Secure AI Weekly

Towards Trusted AI Week 32 – Feature Importance-Aware Attacks enhance transferability

Machine learning has come a long way, but it still needs to meet safety criteria. Novel Feature Importance-Aware Transferable Adversarial Attacks Dramatically Improve Transferability (Synced, August 10, 2021): researchers have proposed Feature Importance-Aware Attacks able to significantly enhance the transferability of adversarial examples. Deep neural networks are increasingly used in ...