Towards Trusted AI Week 41 – the ways hackers use AI

Secure AI Weekly – October 11, 2020


Using AI is like using a knife: you can either cut yourself or cook a nice dinner.


Most popular ways malefactors use smart technologies 

TechRepublic, October 7, 2020

Cybersecurity professionals state that AI and ML are widely used by malefactors to perform breaches, and there are three particularly popular ways of doing this.

First, in a data poisoning attack, the training data is manipulated so that the trained model ends up behaving in the wrong way. The attack may target either an ML algorithm’s availability or its integrity.
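As a rough illustration of the integrity variant, the toy sketch below (our own assumption, using scikit-learn and a synthetic dataset rather than anything from the article) flips a fraction of training labels and compares the resulting model with one trained on clean data.

```python
# Minimal label-flipping data-poisoning sketch (illustrative assumption only):
# the attacker corrupts part of the training labels so the learned model degrades.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips labels on 30% of the training set (integrity attack).
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```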

Second, there are Generative Adversarial Networks (GANs), which are essentially two AI systems pitted against each other: the first generates content that imitates the original, while the second tries to detect the flaws that give the imitation away. Eventually the pair produces content convincing enough to pass for the original.
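The adversarial setup can be sketched in a few lines. The toy PyTorch example below is our own illustrative assumption (a generator learning to mimic a simple 1-D Gaussian, not any system mentioned above); it only shows the generator and discriminator training against each other.

```python
# Minimal GAN sketch: generator vs. discriminator on toy 1-D "real" data.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0          # "original" samples: N(4, 1.5)
    fake = generator(torch.randn(64, latent_dim))  # imitations

    # Discriminator learns to separate real from fake.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to make the discriminator accept its output as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, latent_dim)).detach().squeeze())  # samples drift towards ~4
```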

Finally, attackers can simply manipulate bots. “Attackers went in and figured out how bots were doing their trading and they used the bots to trick the algorithm,” said panelist Greg Foss, senior cybersecurity strategist at VMware Carbon Black. “This can be applied across other implementations.”

AI-based verification method demands extra security

Technology Networks, October 8, 2020

Automated facial recognition is a popular method of identity verification based on the analysis of facial features. A facial recognition system learns patterns from the faces of a large number of people; after that, the system can be used to identify individuals both online and in real life. The technology has found widespread use – from social networks to law enforcement.

Still, face recognition, like any other smart technology, has a number of pitfalls, and security is one of them. Because they work with giant databases of so-called “face templates”, face recognition systems can give a malefactor access to a large amount of critical data if hacked. That is why it is of great importance that these systems be secured properly.
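To make the stakes concrete, the sketch below shows, under our own assumptions, how a verification step might compare a probe image against a stored face template (an embedding vector); embed_face() is a hypothetical stand-in for a real embedding model. If the template database leaks, an attacker obtains reusable biometric data, which is exactly why it needs strong protection.

```python
# Illustrative sketch of template-based face verification (hypothetical model).
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real face-embedding model (returns a unit vector)."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def verify(probe: np.ndarray, template: np.ndarray, threshold: float = 0.6) -> bool:
    """Accept the identity claim if the cosine similarity to the stored template is high enough."""
    similarity = float(np.dot(embed_face(probe), template))
    return similarity >= threshold

# Enrollment: templates of known users are kept in a database; if this database
# is breached, the attacker walks away with reusable biometric identifiers.
enrolled_image = np.zeros((112, 112), dtype=np.uint8)
template_db = {"alice": embed_face(enrolled_image)}

print(verify(enrolled_image, template_db["alice"]))  # True: same image, same embedding
```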

Estimating failure rates of safety-critical AI-based systems

VentureBeat, September 15, 2020

With great power there must also come great responsibility, and this is especially true for safety-critical smart systems such as those in autonomous vehicles, robotic surgery, and autonomous flight systems for planes. If something goes wrong in them, whether through a system error or a hack, it becomes a genuine matter of life and death. Researchers from MIT, Stanford University, and the University of Pennsylvania have developed a neural bridge sampling method aimed at estimating the failure rates of such critical systems. The method gives regulators and industry experts a common reference for assessing the risks of introducing complex ML systems into safety-critical domains.

“They don’t want to tell you what’s inside the black box, so we need to be able to look at these systems from afar without sort of dissecting them,” co-lead author Matthew O’Kelly explained. “And so one of the benefits of the methods that we’re proposing is that essentially somebody can send you a scrambled description of that generated model, give you a bunch of distributions, and you draw from them, then send back the search space and the scores. They don’t tell you what actually happened during the rollout.”
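For intuition about what estimating a failure rate means, the sketch below uses plain Monte Carlo sampling on a made-up “autonomous braking” scenario (our own assumption). It is not the authors’ neural bridge sampling method, whose whole point is to obtain reliable estimates of rare failures with far fewer samples than naive sampling needs.

```python
# Naive Monte Carlo estimate of a toy black-box system's failure rate.
# Illustration only: the braking model and parameter ranges are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n = 1_000_000
speeds = rng.uniform(10.0, 35.0, size=n)      # initial speed, m/s
distances = rng.uniform(40.0, 120.0, size=n)  # gap to the obstacle, m

# "Black box": braking fails if the stopping distance (at 6 m/s^2) exceeds the gap.
stopping = speeds**2 / (2 * 6.0)
failures = stopping > distances

rate = failures.mean()
stderr = failures.std() / np.sqrt(n)
print(f"estimated failure rate: {rate:.4%} +/- {stderr:.4%}")
```

When failures are very rare, this naive approach needs an enormous number of simulations before any failures show up at all, which is the gap that more sample-efficient methods such as the one described above are meant to close.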

