Trusted AI Blog

Adversarial ML admin / October 31, 2020

Attacking object detection models with dynamic patches

Although deep neural networks have become an integral part of highly sensitive activities, they remain highly susceptible to adversarial attacks. We have compiled a selection of the most interesting studies for October 2020. Dynamic Adversarial Patch for Evading Object Detection Models Deep neural networks (DNN) have become ...

Secure AI Weekly admin / October 11, 2020

Towards trusted AI Week 41 – the ways hackers use AI

Using AI is like using a knife: you can either cut yourself or cook a nice dinner. Most popular ways malefactors use smart technologies TechRepublic, October 7, 2020 Cybersecurity professionals state that AI and ML are widely used by malefactors to perform breaches, and there are three most popular ways to ...