Towards trusted AI Week 49 – securing our AI systems

Secure AI Weekly, December 6, 2020


As attacks against AI become more sophisticated, experts are coming up with new ways to combat them.


Five ways to secure your smart systems

TechBeacon, December 3, 2020

It is difficult to find a field of human activity in which artificial intelligence systems are not yet involved. At the same time, attacks against artificial intelligence are becoming more sophisticated. AI security experts offer five guidelines to help developers and application security professionals keep smart systems secure.

  1. Raise awareness. Without an understanding of the risks and threats, the team cannot protect its systems from potential attacks, so educate developers and security staff first.
  2. Start with the low-hanging fruit: three Python libraries, CleverHans, Foolbox, and the Adversarial Robustness Toolbox, that run attacks against an AI model to check whether it is vulnerable to attacks or prone to failures (see the sketch after this list).
  3. Attack your own systems. Carrying out attacks on your own system lets you find its weaknesses before attackers do.
  4. Use various models and algorithms. Combining them can help you achieve more robust results.
  5. Don’t forget about bias. Any bias in your system potentially makes it more vulnerable to attacks.
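As a starting point for item 2, here is a minimal sketch using the Adversarial Robustness Toolbox with a PyTorch model. The toy model and random test data are placeholders for your own trained classifier and held-out set, and the chosen attack (FGSM) and epsilon value are purely illustrative assumptions.

```python
import numpy as np
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Toy stand-ins so the sketch runs end to end; replace with your own
# trained model and real test data (images scaled to [0, 1]).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_test = np.random.rand(64, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=64)

# Wrap the model so ART can query predictions and gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft adversarial examples with the Fast Gradient Method (FGSM).
attack = FastGradientMethod(estimator=classifier, eps=0.05)
x_adv = attack.generate(x=x_test)

# Compare clean vs. adversarial accuracy to quantify robustness.
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")
```

A sharp drop from clean to adversarial accuracy is the signal that the model needs hardening, for example through adversarial training.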

Improving object-recognition models performance

MIT News, December 3, 2020

Computer vision models known as convolutional neural networks can recognize objects almost as accurately as the human eye. Their fundamental difference, however, is that even small changes to an image's pixels, changes that would be invisible to a person, can affect the recognition result. This makes such systems vulnerable to adversarial attacks, in which altering just a couple of pixels may fool the system. Neuroscientists from MIT, Harvard University, and IBM have come up with a way to keep this vulnerability to a minimum: they proposed adding to computer vision models a new layer that resembles the very first stage of the human visual processing system. According to the results of the study, the method remarkably improved the models' robustness against such attacks.

“Just by making the models more similar to the brain’s primary visual cortex, in this single stage of processing, we see quite significant improvements in robustness across many different types of perturbations and corruptions,” commented Tiago Marques, an MIT postdoc and one of the lead authors of the research.
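Purely to illustrate the architectural idea described above, and not the authors' actual implementation, here is a minimal, hypothetical sketch in PyTorch: a fixed, non-trainable bank of Gabor-like filters (a classic simplified model of primary visual cortex neurons) is prepended to an off-the-shelf CNN backbone. Every name and parameter below is an assumption chosen for demonstration.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18

def gabor_kernel(size=15, theta=0.0, sigma=3.0, lambd=6.0, gamma=0.5):
    """Build a single 2-D Gabor filter (a crude model of a V1 simple cell)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float32)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x_t / lambd)

class V1FrontEnd(nn.Module):
    """Fixed, non-trainable Gabor filter bank followed by a rectifying nonlinearity."""
    def __init__(self, n_orientations=8, size=15):
        super().__init__()
        self.conv = nn.Conv2d(3, n_orientations, kernel_size=size, padding=size // 2, bias=False)
        filters = np.stack([gabor_kernel(size, theta=np.pi * i / n_orientations)
                            for i in range(n_orientations)])
        # Apply each orientation to all three colour channels and freeze the weights.
        weight = torch.tensor(filters).unsqueeze(1).repeat(1, 3, 1, 1) / 3.0
        self.conv.weight = nn.Parameter(weight, requires_grad=False)
        # Project back to 3 channels so an off-the-shelf backbone can consume the output.
        self.readout = nn.Conv2d(n_orientations, 3, kernel_size=1)

    def forward(self, x):
        return self.readout(torch.relu(self.conv(x)))

# Prepend the fixed V1-like stage to a standard CNN backbone.
model = nn.Sequential(V1FrontEnd(), resnet18(num_classes=10))
print(model(torch.rand(2, 3, 64, 64)).shape)  # -> torch.Size([2, 10])
```

The point of the sketch is only that the first stage of processing is fixed and biologically inspired rather than learned end to end; the published model is considerably more detailed.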

How AI can help cybersecurity in 2021

Forbes, December 5, 2020

The end of the year is approaching, and it is already possible to make assumptions about how artificial intelligence will change cybersecurity in 2021. Twenty researchers gave their opinions on how smart technology may affect the industry next year; here are some of them.

According to Tej Redkar, Chief Product Officer at LogicMonitor, security and IT operations can work better together if they are AI-powered. 

Ernesto Broersma, Partner Technical Specialist at Mimecast, says that cheap phishing attacks will still be carried out with the help of machine learning. “Pattern of Life analysis will be further automated and many sophisticated attacks will be generated without human intervention,” he comments.

Bill Harrod, Vice President of Public Sector at Ivanti, emphasized that password-related cyberattacks remain a very common attack type, and that AI can help counter them when applied as a new form of authentication.
