AI has spread across virtually every sphere of human activity, which means that its security can sometimes be a question of life and death.
Backdoor attacks need no triggers
TechTalks, November 5, 2020
It is well known that a bad actor can attack an AI system in several ways, and one of them is a backdoor attack. An attack of this type hides malicious behavior in an ML model during training; the behavior is then activated once the model is in production. Until now, backdoor attacks were believed to be difficult to carry out in practice because they typically depend on a visible trigger. However, recent research by AI scientists at the CISPA Helmholtz Center for Information Security in Germany demonstrates that ML backdoors can be made almost undetectable.
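For context, a conventional backdoor of the kind described above is typically planted by poisoning a fraction of the training data with a visible trigger and relabeling it. The sketch below is a hypothetical illustration of that classic approach (the function name, trigger size, and parameters are assumptions), not the triggerless technique from the CISPA paper discussed next.

```python
# Minimal sketch of a classic trigger-based backdoor via training-data poisoning.
# Hypothetical illustration only -- not the CISPA "triggerless backdoor" method.
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.05, seed=0):
    """Stamp a small white square (the trigger) onto a fraction of the training
    images and relabel them, so a model trained on this data learns to associate
    the trigger with the attacker's chosen class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_fraction * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0          # 4x4 trigger patch in the bottom-right corner
    labels[idx] = target_label           # flip the label to the target class
    return images, labels

# Example: poison 5% of a toy 28x28 grayscale dataset toward class 7.
clean_x = np.random.rand(1000, 28, 28).astype(np.float32)
clean_y = np.random.randint(0, 10, size=1000)
poisoned_x, poisoned_y = poison_dataset(clean_x, clean_y, target_label=7)
```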
The researchers call their technique for attacking deep neural networks the "triggerless backdoor," because it can be carried out in any setting without a visible activator.
“This attack requires additional steps to implement,” said Ahmed Salem, lead author of the paper. “For this attack, we wanted to take full advantage of the threat model, i.e., the adversary is the one who trains the model. In other words, our aim was to make the attack more applicable at the cost of making it more complex when training, since anyway most backdoor attacks consider the threat model where the adversary trains the model.”
The technique is going to be presented at the ICLR 2021 conference.
Medical AI security is a question of life and death
The Conversation, November 3, 2020
Back in September 2020, a woman died as a result of a cyberattack on a hospital in Düsseldorf, Germany. This was the first reported death caused by a hacker's malicious actions, demonstrating that hacks can harm not only our finances or personal data but also our lives. Unfortunately, even the best AI used in medicine can be attacked successfully: according to recent research, even the most ingenious AI-based systems can fall victim to smart attackers, putting the lives of critically ill patients at risk. For instance, an attacker can block life-saving devices, manipulate diagnostic results, change drug doses, interfere with critical steps in an operation, or take other actions leading to a lethal outcome.
By performing an input attack on a medical AI device, a malefactor can manipulate the pixel values of an MRI scan to confuse the AI system in various ways, while the human eye cannot detect the changes. The attacker's actions go completely undetected, since the AI system itself and the way it operates appear untouched. In other words, it can be extremely difficult to tell that something has gone wrong before a fatal incident occurs, which makes further research into the vulnerabilities of medical AI extremely important.
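The article does not name a specific technique, but one well-known way to craft such an imperceptible perturbation is the fast gradient sign method (FGSM). The PyTorch sketch below is an assumed, generic illustration: the toy classifier stands in for a real diagnostic model, and the shapes and epsilon value are assumptions.

```python
# Hedged sketch of an FGSM-style imperceptible perturbation of a medical image.
# The article does not specify the attack; this is a generic, assumed example.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Shift each pixel by at most `epsilon` in the direction that increases the
    model's loss, aiming for a misclassification the human eye cannot see."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Example with a toy classifier standing in for a diagnostic model (assumption).
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
scan = torch.rand(1, 1, 64, 64)          # stand-in for an MRI slice
label = torch.tensor([0])                # "healthy"
adversarial_scan = fgsm_perturb(toy_model, scan, label)
```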
Manipulating fake news detectors via comments
Help Net Security, November 6, 2020
Popular social networks such as Twitter and Facebook typically use fake news detectors to warn users about misleading content in their feeds. However, new research from Penn State's College of Information Sciences and Technology demonstrates how fake news detectors can be fooled with the help of user comments, so that genuine news is flagged as false and vice versa. This approach gives adversaries a way to influence the detector's assessment without even being the author of the article. The framework developed by the researchers, called Malcom, can generate and post malicious comments that are relevant to the article's text and capable of fooling the detector.
“Our model does not require the adversaries to modify the target article’s title or content,” commented Thai Le, lead author of the paper and doctoral student in the College of IST. “Instead, adversaries can easily use random accounts on social media to post malicious comments to either demote a real story as fake news or promote a fake story as real news.”
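As a rough illustration of why comments are an attack surface at all, consider a toy detector that scores an article together with its comments. The scoring function and the injected comment below are purely hypothetical and say nothing about how Malcom actually generates text; they only show that appended comments can shift a comment-aware detector's output without touching the article.

```python
# Hedged sketch of the attack surface: a detector that scores an article
# together with its user comments can be nudged by injected comments.
# `score_article`, its keyword lists, and the comments are hypothetical.
from typing import List

def score_article(article: str, comments: List[str]) -> float:
    """Toy stand-in for a comment-aware detector: returns a 'fakeness' score
    in [0, 1] based on a naive keyword count (assumption for illustration)."""
    text = " ".join([article] + comments).lower()
    fake_cues = ("hoax", "debunked", "fake")
    real_cues = ("confirmed", "verified", "official")
    fake = sum(text.count(w) for w in fake_cues)
    real = sum(text.count(w) for w in real_cues)
    return fake / max(fake + real, 1)

article = "City council approves new budget."
print(score_article(article, []))                                 # low score
print(score_article(article, ["This hoax was debunked, fake!"]))  # inflated score
```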