Towards Trusted AI Week 27 – AI-based recommendations easy to abuse

Secure AI Weekly, July 12, 2021


AI does not serve only good purposes: adversaries can use it to advance their attacks.


AI-based recommendations easy to abuse, warn researchers

F-Secure, June 24, 2021

Recommendation systems based on artificial intelligence are almost everywhere: in social media and in services for music, movies, stock photos, online shopping, and more.

Given how widely social networks are used and how many people rely on them as a source of news, manipulating their recommendation algorithms carries huge potential risks. Experiments conducted by researcher Andy Patel show that relatively simple manipulation techniques can influence the AI-based recommendations of social networks.

Patel used data collected from Twitter to train and retrain collaborative filtering models, which encode user-content similarities based on interactions, and then tested how these models behave when the training data is poisoned. He injected datasets containing additional retweets between selected accounts, and a small number of injected retweets was enough to manipulate the recommendation system into promoting the accounts whose content was shared through them. The researcher has published a detailed report on the experiments on GitHub.
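As a rough illustration of the idea, the sketch below builds a toy item-item collaborative filter in numpy and then poisons it with injected retweets. The matrix sizes, the POPULAR accounts, the TARGET account, and the sybil users are invented for this example; Patel's experiments used different models and real Twitter data.

```python
# Minimal sketch of retweet poisoning against a toy collaborative filter.
# All data here is synthetic; names like TARGET and POPULAR are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_USERS, N_ACCOUNTS = 200, 50    # users who retweet, accounts being retweeted
POPULAR = [0, 1, 2, 3, 4]        # well-followed accounts
TARGET = 42                      # account the attacker wants promoted

# Clean "user x account" retweet matrix (1 = user retweeted that account).
interactions = (rng.random((N_USERS, N_ACCOUNTS)) < 0.05).astype(float)
interactions[7, 0] = 1.0         # demo user 7 has retweeted a popular account

def item_similarity(matrix):
    """Item-item cosine similarity: the core of a simple collaborative filter."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True) + 1e-9
    return (matrix / norms).T @ (matrix / norms)

def recommend(matrix, user, k=5):
    """Score accounts for `user` by similarity to accounts they already retweeted."""
    scores = item_similarity(matrix) @ matrix[user]
    scores[matrix[user] > 0] = -np.inf    # do not re-recommend seen accounts
    return np.argsort(scores)[::-1][:k]

# Poisoning: a handful of attacker-controlled accounts retweet both the popular
# accounts and the target, tying them together in the similarity matrix.
poisoned = interactions.copy()
sybils = np.arange(N_USERS - 10, N_USERS)          # 10 attacker-controlled users
poisoned[np.ix_(sybils, POPULAR + [TARGET])] = 1.0

print("clean recommendations:   ", recommend(interactions, 7))
print("poisoned recommendations:", recommend(poisoned, 7))
# After retraining on the poisoned data, TARGET tends to appear in the
# recommendations of users who interact with the popular accounts.
```

The point of the toy example is only to show how little injected data it can take: a few sybil users co-retweeting the target alongside popular accounts is enough to shift what the similarity-based recommender surfaces.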

Twitter, like many other popular services, is believed to already be facing such attacks in the real world.

“We performed tests against simplified models to learn more about how the real attacks might actually work. I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organizations to be certain this is what’s happening because they’ll only see the result, not how it works,” Patel commented.

Attackers use ‘offensive AI’ to create deepfakes for phishing campaigns

VentureBeat, July 2, 2021


Researchers have surveyed the threat that offensive AI poses to organizations. Their study catalogs the capabilities attackers can use to advance their campaigns and includes responses from organizations such as IBM, Airbus, and Huawei. It identifies three key benefits AI offers an adversary: coverage, speed, and success.

AI can help attackers poison training data to corrupt ML models, recover usernames and passwords through side-channel analysis, and assist with vulnerability discovery, penetration testing, and more. Among the most concerning offensive AI technologies, organizations name:

  • exploit development;
  • social engineering; 
  • information gathering; 
  • deepfakes used for impersonation. 

In a reality where adversaries often stay one step ahead of defenders, offensive AI increases the likelihood of a successful attack. The researchers expect deepfakes and phishing campaigns to become rampant as bots place convincing calls, and in the near future they anticipate growing use of offensive AI in data collection, model development, training, and evaluation.

Organizations are advised to keep an eye on post-processing tools that can protect software from analysis after development.

Faces are the next target for fraudsters

The Wall Street Journal, July 7, 2021

Face recognition is widely used in daily and professional life, from payment methods to criminal investigations to collecting user data. These systems are treated as an automated way to identify anyone of interest, yet, like any other system, they are not perfect in terms of security and reliability.

People regularly attempt to fool face recognition by wearing special masks, holding up printed pictures, applying AI-generated deepfake patches, or even using makeup. On top of that, fraudsters have every chance to bypass facial recognition technologies for their own purposes.
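To give a sense of how gradient-based evasion of a recognizer works in principle, here is a minimal PyTorch sketch. The embedding network is randomly initialized and the images are random tensors, both standing in for a real face recognizer and real photos; it only demonstrates the mechanics of a one-step, FGSM-style perturbation, not an attack on any deployed system.

```python
# Toy FGSM-style impersonation sketch against a stand-in face embedder.
# Everything here (network, images, epsilon) is synthetic and illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

embedder = nn.Sequential(            # stand-in for a real face-embedding model
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)
embedder.eval()

probe = torch.rand(1, 3, 112, 112)                          # attacker's face (random stand-in)
enrolled = embedder(torch.rand(1, 3, 112, 112)).detach()    # victim's enrolled template

def similarity(a, b):
    return F.cosine_similarity(a, b).item()

# Impersonation objective: nudge the probe so its embedding moves toward the
# enrolled template (flipping the sign would instead give a dodging attack).
x = probe.clone().requires_grad_(True)
loss = 1 - F.cosine_similarity(embedder(x), enrolled).mean()
loss.backward()

epsilon = 0.03                                              # per-pixel perturbation budget
adversarial = (probe - epsilon * x.grad.sign()).clamp(0, 1)

print("similarity before:", similarity(embedder(probe), enrolled))
print("similarity after: ", similarity(embedder(adversarial), enrolled))
# The perturbed probe typically scores closer to the enrolled template, which
# is the basic mechanism behind adversarial patches and masks.
```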

For years, researchers have warned about these vulnerabilities, and recent findings confirm the concerns. It is high time to improve these systems.

Recently, the Adversa team warned about these risks in its Trusted AI research.
