Towards trusted AI Week 12 – Adversarial training in robots is controversial

Secure AI Weekly · admin · March 29, 2021

Background

Before using any technology, you must make sure that it is safe and secure.


Adversarial training reduces the safety of neural networks in robots

TechTalks, March 22, 2021

Robots are now increasingly used in a variety of work environments, such as warehouses and other manufacturing facilities, especially during the pandemic. Deep learning algorithms and sensor technology make these robots more versatile, but at the same time more expensive.

Nonetheless, safety and security remain key issues in their use to this day. 

According to researchers at the Institute of Science and Technology Austria, the Massachusetts Institute of Technology, and the Technische Universität Wien in Austria, the existing methods for addressing these problems give mixed results. On the one hand, models should be trained on a variety of real-world, practical examples. On the other hand, models need to be trained on adversarial examples in order to withstand attacks when they occur.

Herein lies the main problem. In their paper “Adversarial Training is Not Ready for Robot Learning,” the researchers note that training on adversarial examples has a significant negative impact on the safety of robots. This, in turn, raises the question of how deep neural networks can be trained in some other way that does not degrade the safety, accuracy, and performance of robots.
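For readers unfamiliar with the underlying idea, adversarial training means training a model on adversarial examples: inputs nudged just enough to fool the model. The sketch below (not taken from the paper; the weights, input, and label are toy values chosen purely for illustration) shows the fast gradient sign method (FGSM), a standard way such examples are generated, applied to a tiny logistic-regression model in NumPy:

```python
import numpy as np

# Hypothetical logistic-regression "model" and a clean input.
w = np.array([1.5, -2.0, 0.5])   # model weights (toy values)
b = 0.1                          # bias
x = np.array([0.2, 0.4, -0.1])   # clean input
y = 1.0                          # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the binary cross-entropy loss with respect to the input:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: shift each input feature by epsilon in the direction of the
# gradient's sign, maximizing the loss under an L-infinity budget.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

# The perturbed input lowers the model's confidence in the true class.
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
print(p_adv < p_clean)  # True
```

Adversarial training then mixes such perturbed inputs into the training set; the paper's finding is that doing so for robot-learning models can hurt their safety properties.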

Artificial Intelligence: Your friend in the fight against cyberattacks

The Times of India, March 24, 2021

Cybercriminals never stand still: they evolve along with the technologies that surround them. In the hands of a skilled lawbreaker, artificial intelligence capable of causing significant damage has become one of the main weapons of the modern cybercriminal.

Adversarial AI is gaining tremendous influence, evolving and taking on new forms and methods. For example, if used skillfully, it can degrade the performance of a single AI-powered device or an entire group of them. Malicious software can hide inside a harmless application, and there are also malicious AI-based bots. The main problem with AI-based malware is that it can act faster than classic security systems can detect it.

However, there are a number of ways to deal with this: defenses against cybercriminals are also constantly growing more sophisticated, more effective, and faster.

For example, artificial intelligence can be put to good use here in many ways. AI can help analysts gather and analyze information to design new security systems. Beyond that, companies can use sophisticated AI to build models capable of countering adversarial AI and AI-powered attacks. AI can also be used in detection systems, significantly accelerating threat detection, which in turn simplifies threat hunting and reduces the time needed to eliminate threats. So, even though AI is currently widely used by attackers, it can also be used to your advantage. This will, however, be a long process that requires methodical, sustained work.
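To make the detection idea concrete, here is a deliberately minimal sketch (a toy example, not from the article) of the statistical baseline that AI-driven detection systems build on: learn what "normal" traffic looks like, then flag rates that deviate far from it. The traffic figures and threshold are hypothetical:

```python
import numpy as np

# Simulate "normal" traffic: requests per minute learned from history.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=100, scale=10, size=500)

# Fit the baseline: mean and standard deviation of observed traffic.
mu, sigma = normal_traffic.mean(), normal_traffic.std()

def is_anomalous(rate, threshold=3.0):
    """Flag a rate more than `threshold` standard deviations from the mean."""
    return abs(rate - mu) / sigma > threshold

print(is_anomalous(105))  # False: ordinary traffic
print(is_anomalous(300))  # True: burst far outside the learned baseline
```

Real detection systems replace this z-score rule with learned models over many features, but the workflow, profiling normal behavior and surfacing outliers quickly, is the same one the article describes AI accelerating.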
