Towards Trusted AI Week 29 – malware gets hidden inside AI’s ‘neurons’

Secure AI Weekly, July 26, 2021


Without proper attention from experts, AI can become a weapon in the hands of attackers


Researchers Hid Malware Inside an AI’s ‘Neurons’ And It Worked Scarily Well

Vice, July 22, 2021

Neural networks are expected to become ever more widespread, and they may then turn into a new weapon for attackers. According to a recent study, malware can be embedded directly into artificial neurons without disrupting the network's operation, so the model keeps performing its original tasks. The experiments were carried out on the AlexNet model, with about 50 percent of its neurons replaced by malware. In some models the changes could not be detected by steganalysis, and the embedded malware was not recognized by 58 common antivirus systems.
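
The mechanics are conceptually simple: model weights are just bytes on disk, so payload bytes can be spliced into the low-order bits of the parameters with only a tiny numeric perturbation. The sketch below is a minimal illustration of that idea, not the researchers' actual method; the two-bytes-per-weight layout, the function names, and the NumPy-only setup are assumptions made for the example.

```python
import numpy as np

BYTES_PER_WEIGHT = 2  # low-order mantissa bytes we overwrite (an assumption for this sketch)


def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Return a copy of `weights` whose low mantissa bytes carry `payload`.

    Only the two least significant bytes of each float32 are touched (assuming a
    little-endian host), so the sign, exponent, and top mantissa bits survive and
    each weight shifts by well under 1 percent of its magnitude.
    """
    flat = np.ascontiguousarray(weights, dtype=np.float32).ravel().copy()
    capacity = flat.size * BYTES_PER_WEIGHT
    if len(payload) > capacity:
        raise ValueError(f"payload of {len(payload)} bytes exceeds capacity of {capacity} bytes")
    n = -(-len(payload) // BYTES_PER_WEIGHT)              # number of weights we need to touch
    chunk = payload.ljust(n * BYTES_PER_WEIGHT, b"\x00")  # pad to a whole number of weights
    raw = flat.view(np.uint8).reshape(-1, 4)              # per-float32 byte view of the copy
    raw[:n, :BYTES_PER_WEIGHT] = np.frombuffer(chunk, dtype=np.uint8).reshape(n, BYTES_PER_WEIGHT)
    return flat.reshape(weights.shape)


def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` payload bytes hidden by embed_payload."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8).reshape(-1, 4)
    n = -(-length // BYTES_PER_WEIGHT)
    return raw[:n, :BYTES_PER_WEIGHT].tobytes()[:length]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layer = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in for one model layer
    secret = b"harmless placeholder bytes standing in for a payload"
    stego = embed_payload(layer, secret)
    assert extract_payload(stego, len(secret)) == secret
    print("largest absolute weight change:", np.abs(stego - layer).max())
```

Because the perturbation per weight is so small, the model's accuracy barely moves, which is why scanners that look at the file as ordinary weights see nothing unusual.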

“As neural networks become more widely used, this method will be universal in delivering malware in the future,” commented the researchers from the University of the Chinese Academy of Sciences.

Deep Instinct Further Shields Businesses Against Cyberattacks as Ransomware Attacks More than Double

Business Wire, July 22, 2021

Deep Instinct is the first cybersecurity company to introduce a threat prevention platform built on an end-to-end deep learning framework. Last week, the company announced an update to its Annual Cyber Threat Landscape report.

In addition, Deep Instinct’s prevention-first solution has been enhanced with new features and capabilities. Particularly interesting are the increased resistance to adversarial machine learning attacks, credential theft protection, and malicious behavior reporting.

“Sophisticated attackers are evolving their methods to trick basic AI solutions and mislead Machine Learning algorithms. Deep Instinct’s advanced AI with deep learning is the most advanced solution to protect businesses from these adversarial attacks,” commented Guy Caspi, Deep Instinct CEO. “We are bringing a never-before-seen level of prevention to our customers at a time when ransomware attacks are at an all-time high.”

The Pentagon Is Bolstering Its AI Systems—by Hacking Itself

Wired, July 19, 2021

According to the Pentagon, AI is a good way to outwit an attacker, but artificial intelligence systems themselves are fragile and need close scrutiny to keep adversaries from exploiting them. The Pentagon’s Joint Artificial Intelligence Center (JAIC) recently formed a unit to work with open-source and industry machine learning models; one goal of the effort is to advance the use of AI for military purposes.

As part of the initiative, the Test and Evaluation Group will test pre-trained models for vulnerabilities, while another team of cybersecurity experts will check AI code and data for hidden loopholes.
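
One routine way such a group might probe a pre-trained model is to measure how easily small, crafted input perturbations flip its predictions. The snippet below is a minimal sketch of that kind of test using the fast gradient sign method (FGSM); the tiny stand-in model, the random data, and the `fgsm_probe` helper are assumptions for illustration, not the JAIC's actual tooling.

```python
import torch
import torch.nn as nn


def fgsm_probe(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03):
    """Craft FGSM adversarial inputs and report how often predictions flip."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by eps in the direction that increases the loss.
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
    with torch.no_grad():
        clean_pred = model(x).argmax(dim=1)
        adv_pred = model(x_adv).argmax(dim=1)
    flip_rate = (clean_pred != adv_pred).float().mean().item()
    return x_adv, flip_rate


if __name__ == "__main__":
    # Stand-in "pre-trained" model and random inputs; a real evaluation would
    # load the model under test and representative data instead.
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))
    _, flip_rate = fgsm_probe(model, x, y, eps=0.1)
    print(f"predictions flipped by eps=0.1 perturbation: {flip_rate:.0%}")
```

A high flip rate at small perturbation budgets is one signal that a model inherited from an outside source is too brittle to deploy without hardening.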

“For some applications, machine learning software is just a bajillion times better than traditional software,” says Gregory Allen, director of strategy and policy at the JAIC, adding that machine learning “also breaks in different ways than traditional software.”
