Knowledge of artificial intelligence and its security needs to be continually improved
TechTalks, September 24, 2021
Machine learning is more widespread today than ever. One of its most serious security problems, however, is adversarial attacks.
And since this type of attack differs markedly from other threats, the first step toward overcoming it is understanding it. This article maps the general landscape of adversarial attacks and defenses, drawing on a video presentation by Pin-Yu Chen, an artificial intelligence researcher at IBM.
The first step is to understand the difference between software bugs and adversarial attacks. Bugs are a routine part of programming, and conventional threats are now fairly easy to detect with anti-virus tools; adversarial attacks work quite differently, so it is essential to understand exactly what we are dealing with.
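To make the distinction concrete, here is a minimal, self-contained sketch (the toy model, data, and perturbation size are illustrative assumptions, not from the article). The key point: an adversarial input is a perfectly valid input that raises no error, so tooling that looks for crashes or malformed data never notices it.

```python
# Illustrative only: a tiny classifier and a hand-crafted adversarial input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # simple, clean decision rule
model = LogisticRegression().fit(X, y)

x = np.array([[0.3, 0.2]])                   # valid input, classified as 1
w = model.coef_[0]
# Shift x against the learned decision boundary. The result is still a
# perfectly well-formed input (no exception, no malformed data), yet the
# predicted class flips.
x_adv = x - 0.6 * w / np.linalg.norm(w)
print(model.predict(x), model.predict(x_adv))  # e.g. [1] [0]
```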
The next step is to understand the consequences of adversarial attacks. In mission-critical machine learning applications, a successful attack can even put human health and lives at risk. After that comes understanding the weaknesses of machine learning models and identifying which of their components are especially susceptible to attack.
These are just a few of the steps you need to take to keep your machine learning models safe. For a more detailed understanding of the issue, read the full text of the article.
IT Business Edge, September 21, 2021
Machine learning has advanced significantly in recent years and now underpins applications ranging from simple tasks, such as filtering email spam, to complex ones, such as face and speech recognition.
Unfortunately, machine learning models remain deeply exposed to a variety of threats. The article gives a brief overview of the main types, with descriptions of each. Poisoning attacks are among the most common: the attacker injects malicious samples into the training data, causing the classifier to make incorrect decisions later. For such an attack to succeed, the attacker needs some degree of access to the training data or training pipeline. What makes these attacks hard to handle is that the poisoned samples are practically invisible to humans, so it is extremely difficult to determine that an attack has taken place at all.
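As a rough illustration of the mechanism described above, the sketch below poisons a toy classifier by flipping labels in a slice of its training set. The dataset, model, and 30% poisoning rate are hypothetical choices, not taken from the article.

```python
# Illustrative label-flipping poisoning attack on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker controls part of the training data and flips its labels.
y_poisoned = y_train.copy()
n_poison = int(0.3 * len(y_poisoned))
flip_idx = np.random.default_rng(0).choice(len(y_poisoned), n_poison, replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```

Every poisoned example looks like ordinary data; only the labels are wrong, which is exactly why such attacks are hard to spot.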
The article cites one well-known public case of a poisoning attack, from 2016. Microsoft launched a Twitter chatbot called Tay that was supposed to chat with users in a friendly manner. After users deliberately fed it abusive content, however, the bot itself began posting offensive messages and soon had to be deactivated.
The author also reviews evasion attacks, in which the attacker perturbs inputs at inference time so that an already-trained model misclassifies them, and offers general recommendations for protecting intelligent systems against a range of threats.
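For readers who want to see what an evasion attack looks like in code, below is a minimal sketch of the fast gradient sign method (FGSM), one standard evasion technique. FGSM is given here as a common example rather than the specific method the article covers, and the PyTorch model, inputs, and epsilon value are assumptions for illustration.

```python
# Minimal FGSM evasion sketch in PyTorch. `model`, `x`, `y`, and
# `epsilon` are illustrative placeholders, not from the article.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Return a copy of x nudged to raise the model's loss on label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one signed-gradient step, then clamp to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage sketch: adv_logits = model(fgsm(model, images, labels))
```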
Fed Scoop, September 22, 2021
The Air Force is committed to actively introducing machine learning capabilities across its operations, which makes the question of their security especially vital.
According to Lt. Gen. Mary O’Brien, deputy chief of staff for intelligence, surveillance, reconnaissance and cyber effects operations, artificial intelligence is positioned as the key to a more effective Air Force, yet the service currently has no mechanisms in place to ensure the security of these intelligent applications.
“With our airmen, once we do get the AI, what are we doing to defend the algorithm, to defend the training data, and to remove the uncertainty?” O’Brien commented. “Because if our adversary injects uncertainty into any part of that process, we’re kind of dead in the water on what we wanted the AI to do for us.”
It bears repeating: wherever artificial intelligence is applied, security is indispensable, regardless of the tasks the system is given.