Towards trusted AI Week 48 – hackers’ benefits from AI

Secure AI Weekly, November 29, 2020


Ensuring AI security is no easy task, and much work remains to be done.


Making attacks easier with the help of AI

TechHQ, November 23, 2020

The rise of machine learning and artificial intelligence has produced more than just a number of useful innovations: hackers have begun to actively adopt the same technologies. According to Forrester's Using AI for Evil report, "mainstream artificial intelligence (AI)-powered hacking is just a matter of time".

No professional field is immune to cyberattacks, and the spread of artificial intelligence makes them even easier to carry out. AI helps automate popular attack methods such as phishing, allowing campaigns to be launched at high speed. Another example is AI-powered malware, which can now spread through a company's systems more easily: such malware can study network traffic and blend its own communications into the legitimate traffic around it.
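
To make the "blending" idea concrete, here is a deliberately simplified, purely illustrative sketch of shaping traffic to match a learned benign distribution. The single packet-size feature and the synthetic "observed" traffic are assumptions for this example; real blending malware would also model timing, destinations, and protocols.

```python
import numpy as np

# Purely illustrative toy, not real malware: learn the distribution of
# benign packet sizes on a network, then draw new packet sizes from the
# same empirical distribution so the added traffic is statistically
# inconspicuous.

rng = np.random.default_rng(0)

# Hypothetical observed benign packet sizes, in bytes.
benign_sizes = rng.normal(loc=600, scale=150, size=5000).clip(64, 1500)

# Empirical distribution (histogram) of the benign traffic.
counts, edges = np.histogram(benign_sizes, bins=50)
probs = counts / counts.sum()

def sample_blended(n):
    """Sample n packet sizes matching the benign size distribution."""
    bins = rng.choice(len(probs), size=n, p=probs)
    return rng.uniform(edges[bins], edges[bins + 1])

print(sample_blended(5).round(1))
```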

Manipulating fake news detectors gains momentum

EnterpriseTalk, November 25, 2020

Nowadays the Internet is full of information, but not all of it can be trusted, and it is becoming increasingly difficult for users to tell whether what they find on the web is true. Influential social media networks such as Facebook and Twitter have already started to employ fake news detectors and to add warning tags to posts, which users themselves can apply if they find an article false or misleading. At the same time, there are growing concerns that these very detectors can be manipulated through user comments, causing them to flag fake content as genuine and vice versa. Researchers from Penn State's College of Information Sciences and Technology have found that attackers can perform such actions to influence the rating of an article; the attacker does not even have to be its original author.

Malefactors can use fake social media accounts to fool such a system, and the researchers found that people could hardly distinguish the fake comments posted by attackers from those written by real users.
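
As a rough illustration of why comment-based manipulation works, consider a toy comment-aware detector. Everything below, including the training data, the comments, and the averaging rule, is invented for this sketch; it is not the detector studied at Penn State.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a comment-aware fake-news detector. All training
# data, comments, and the scoring rule are invented for illustration.
train_comments = [
    "totally fake clickbait hoax",
    "made up story, debunked already",
    "well sourced and accurate report",
    "verified by several outlets, looks true",
]
labels = [1, 1, 0, 0]  # 1 = comments suggest fake, 0 = genuine

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_comments), labels)

def fake_probability(comments):
    """Average per-comment 'fake' probability for one article."""
    return clf.predict_proba(vec.transform(comments))[:, 1].mean()

article = ["this is a fake hoax", "made up story, do not share"]
print(fake_probability(article))   # high: the article gets flagged

# An attacker floods the thread with genuine-sounding comments from
# fake accounts, dragging the average score back toward "genuine".
attack = article + ["accurate, well sourced, verified true"] * 10
print(fake_probability(attack))    # noticeably lower
```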

Discussing difficulties of providing AI security 

Semiconductor Engineering, November 25, 2020

Five experts discussed security risks across various market segments. Among other questions, they touched on why securing AI systems is difficult and what needs to be done to achieve it. One of the issues is transparency: many AI algorithms offer little of it, which creates security risks of its own. According to the experts, active defense is important here, and in the end we may find ourselves building one AI to protect another.

“There are several DARPA programs on the agenda now because of attention to the growth and explosion of AI research and its use in the DoD. If an AI is a black box — especially if it is used in a safety-critical or mission-critical setting — it’s not acceptable if it just gives you a promise and says, ‘Trust me.’ You don’t trust,” commented Joe Kiniry, principal scientist at Galois, an R&D center for national security.

Several research programs on explainable AI are currently under development, within which it becomes possible to build AI tools that provide explanations for their answers.
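
For a sense of what an "explanation for an answer" can look like, here is a minimal sketch of one common flavor of explainability: per-feature contributions of a linear model. The dataset and model choice are assumptions made for illustration, not the tooling of the DARPA programs mentioned above.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Minimal sketch of one flavor of explainability: for a linear model,
# coefficient * (standardized) feature value measures how much each
# feature pushed a single prediction toward one class.

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
clf = LogisticRegression(max_iter=1000).fit(X, data.target)

x = X[0]                         # one sample to explain
contrib = clf.coef_[0] * x       # per-feature contribution
top = np.argsort(np.abs(contrib))[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: {contrib[i]:+.2f}")
```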

According to Helena Handschuh, security technologies fellow at Rambus, “security at some point in the future may start reaching that level, as well, when we have so many lines of code or hardware equivalence that we can’t possibly figure it out by ourselves anymore. We will need tools to do that.”
