Towards Trusted AI Week 19 – Stocks devalue with adversarial retweets, and others

Secure AI Weekly, May 10, 2022


AI may be much more powerful than you think 


Applications for artificial intelligence in Department of Defense cyber missions

Microsoft, May 3, 2022

Microsoft's Chief Scientist shares his views on four key areas at the intersection of AI and cybersecurity that currently require special attention: AI in cybersecurity, AI in cyberattacks, AI vulnerabilities, and AI for malign information operations.

For us, the most interesting part is the review of AI vulnerabilities. According to the author, the growing popularity of AI naturally aggravates the situation, with increasing attempts to attack AI systems and their components. The problem is that the security of the AI systems themselves is often overlooked even as the targets for potential new attacks grow and become more diverse. Attacks on AI systems can look traditional, exploiting software vulnerabilities, but they can also fall into a new and worrying category: malicious AI.

The author also reflects separately on attacks against the supply chain of AI systems, and of course much attention is paid to the already well-known adversarial machine learning methods. Read more about these topics in the article at the link.

 

Devaluing Stocks With Adversarially Crafted Retweets

Unite.ai, May 4, 2022

In one of his articles, our founder once covered the potential for a stock market crash caused by a cyberattack. Here is another discussion of this topic: researchers from US universities and IBM have presented the concept of an adversarial attack that, according to them, can cause losses in the stock market by changing just one word in a retweet of a Twitter message.

The attack on machine-learning stock forecasting systems exploits the fact that such systems often base their predictions on organic social media activity, treating it as a predictor of performance. Manipulating this input data can therefore significantly affect the resulting analysis and forecast.

Among the sites and networks from which these systems collect information are Twitter, Reddit, StockTwits, Yahoo News, and others. Twitter is distinctive in that a retweet can carry altered text, whereas on Reddit users can only add new posts, comment, or vote.

Notably, in an experiment with the Stocknet forecasting model, the researchers were able to cause a noticeable decrease in the predicted share value using two methods. The manipulation attack with edited retweets proved the most effective, producing the steepest falls.

“This work demonstrates that our adversarial attack method consistently fools various financial forecast models even with physical constraints that the raw tweet can not be modified. Adding a retweet with only one word replaced, the attack can cause 32% additional loss to our simulated investment portfolio,” commented the researchers.
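To illustrate the idea, here is a minimal, hypothetical sketch of a greedy one-word-substitution attack in the spirit of what the researchers describe. The toy "forecaster" (a simple word-sentiment lexicon), the substitute table, and all names are illustrative assumptions, not the actual models or method from the study:

```python
# Toy forecaster: scores a tweet's bullishness from a fixed sentiment lexicon.
# (Assumed stand-in for a real ML stock forecasting model such as Stocknet.)
SENTIMENT = {
    "surge": 2.0, "beat": 1.5, "growth": 1.0, "strong": 1.0,
    "miss": -1.5, "drop": -2.0, "weak": -1.0, "decline": -1.5,
}

def forecast_score(tweet: str) -> float:
    """Mock prediction: sum of per-word sentiment weights."""
    return sum(SENTIMENT.get(w.lower(), 0.0) for w in tweet.split())

# Candidate replacements an attacker might try for each word (assumed table).
SUBSTITUTES = {
    "surge": ["drop"],
    "beat": ["miss"],
    "strong": ["weak"],
    "growth": ["decline"],
}

def adversarial_retweet(tweet: str):
    """Replace exactly one word to minimize the predicted score,
    mirroring the one-word budget constraint described in the article."""
    words = tweet.split()
    best_text, best_score = tweet, forecast_score(tweet)
    for i, word in enumerate(words):
        for sub in SUBSTITUTES.get(word.lower(), []):
            candidate = " ".join(words[:i] + [sub] + words[i + 1:])
            score = forecast_score(candidate)
            if score < best_score:
                best_text, best_score = candidate, score
    return best_text, best_score

original = "Quarterly results beat expectations with strong growth"
crafted, score = adversarial_retweet(original)
print(crafted)  # the single most damaging one-word swap
print(score)
```

Because the original tweet cannot be modified, the attacker would post the perturbed text as a retweet; the forecasting model then ingests both versions as organic signal. A real attack would of course search substitutions against a learned model rather than a fixed lexicon.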

 

New Method Detects Deep Fakes With 99% Accuracy

Unite.ai, May 3, 2022

Experts from the University of California, Riverside have presented a new method for detecting manipulated facial expressions in fake videos with up to 99% accuracy.

The method performed on par with approaches that detect changes to facial identity, and the new technique is expected to be effective against any type of facial manipulation. Developing such methods is extremely important: deepfakes are increasingly used in domestic and international conflicts around the world, and videos in which only facial expressions were swapped had previously been extremely difficult to identify.

“What makes the deep fake research area more challenging is the competition between the creation and detection and prevention of deep fakes which will become increasingly fierce in the future. With more advances in generative models, deepfakes will be easier to synthesize and harder to distinguish from real,” commented Amit Roy-Chowdhury, Bourns College of Engineering professor of electrical and computer engineering.

In the course of the work, experiments were carried out on two challenging facial manipulation datasets, where EMD was found to perform well on both facial expression manipulation and identity spoofing.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and the worst attacks on AI, delivered right to your inbox.

    Written by: admin
