Towards trusted AI Week 50 – AI opens new doors

Secure AI Weekly – December 13, 2020


AI can bring relief in many spheres, but before that, smart technologies need sufficient testing


AI gives new opportunities

InformationWeek, December 10, 2020

Gunter Ollmann, chief security officer for Microsoft’s Cloud and AI security division, discussed 2020’s rush to the cloud, the current state of adversarial AI, and the ways leading companies can stop cybercriminals. Even during a year when many companies switched to remote work, the overall level of security increased. Among other aspects, Ollmann drew attention to the effect AI has had on security in general. For example, artificial intelligence is helping companies automate their defense against cyber attacks. At the same time, the use of AI opens up a whole range of opportunities for attackers, who can make their way in through the vulnerabilities of artificial intelligence itself. The next step on the road to securing organizations must be keeping these smart systems secure.

“There’s a lot of work going on in the adversarial machine learning space,” Ollmann commented.

Surveillance AI lacks real-life tests

VentureBeat, December 8, 2020

The field of surveillance makes active use of artificial intelligence technologies, in particular facial recognition and face detection. It is easy to guess that, in parallel, a number of adversarial attacks already exist that can deceive these smart systems. Still, researchers from Microsoft, Harvard, Swarthmore, and Citizen Lab claim that the majority of published attacks are not really applicable in the real world.
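To make the idea of an adversarial attack concrete, here is a deliberately tiny sketch of the fast gradient sign method (FGSM) applied to a toy linear classifier. This is an illustrative assumption, not the attacks from the paper: real attacks on face recognition target deep networks, but the mechanism — nudging each input feature slightly in the direction that lowers the model’s score — is the same.

```python
# Toy FGSM-style perturbation against a linear "matcher":
# score(x) = w . x, predict "match" if score > 0.
# For a linear model the gradient of the score w.r.t. x is just w,
# so each feature is pushed by -eps * sign(w_i) to reduce the score.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(w, x, eps):
    """Return x perturbed by at most eps per feature, against class +1."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.9, -0.4, 0.2]          # weights of the toy matcher (hypothetical)
x = [0.5, 0.1, 0.3]           # an input the model accepts (score > 0)
x_adv = fgsm(w, x, eps=0.6)   # small, bounded change to each feature

print(dot(w, x) > 0)          # True  -- original input is accepted
print(dot(w, x_adv) > 0)      # False -- perturbed input is rejected
```

The point the researchers make is precisely that a perturbation that flips a model in such clean conditions may fail on a real camera, with real lighting and real faces, which is why real-life testing matters.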

Nevertheless, when researching attacks, it is necessary to carry out a large number of experiments under different conditions; otherwise, systems that behave unpredictably in new conditions can cause real harm.

“Increased testing of adversarial machine learning techniques, especially with groups from diverse backgrounds, will increase knowledge about the effectiveness of these techniques across populations. This could potentially lead to improved understanding and effectiveness of adversarial machine learning attacks,” the specialists write in their new paper.

According to the researchers, in many of the works they reviewed, the authors did not test their attacks in real-life conditions consistently. One of the main problems of the reviewed papers was their noticeably small sample sizes, amounting to just one or two people in the majority of cases; sometimes the authors even tested their attacks on themselves or their colleagues.

AI helps the banking sector stand up to fraudsters

RTInsights, December 8, 2020

The banking sector has always been a prized target for cybercriminals, and fraud and financial attacks have always been common in this industry. Attacks on organizations in this area are not going to decrease, which means the sector needs new ways to defend against future attacks. Among other things, there are high hopes for artificial intelligence and machine learning technologies, as they can greatly facilitate the detection of attacks. For example, AI can learn the patterns of habitual system behavior and inform specialists about suspicious activity within the system.

Speaking of the application of artificial intelligence specifically in the financial sector, smart applications can track suspicious bank-card purchases or shadow transfers, and the system can also send an alert if it notices an unnatural account balance or unusual spending records.

Bill Harrod, Vice President of Public Sector at Ivanti, emphasized that password-related cyberattacks remain a very common type of attack. Here AI can help a great deal when implemented as a new form of authentication.
