Towards Trusted AI Week 9 – Losing a company due to an algorithm mistake, and others

Secure AI Weekly – February 28, 2022


ML models are not as secure as they might seem at first glance


People Trust Deepfake Faces More Than Real Faces

Vice, February 22, 2022

Faces created by artificial intelligence have become so believable that they blur the line between real photos and fakes, and people often find it difficult to tell one from the other.

A recently published study by Sophie J. Nightingale of Lancaster University in the UK and Hany Farid of the University of California, Berkeley found that people are poor at distinguishing artificially generated images from real photographs.

The researchers conducted three experiments to determine how easily people can tell deepfakes from real photos. In the first test, 315 participants with no prior training looked at photos and judged whether each was real or generated; they answered correctly 48.2 percent of the time. In the second experiment, 219 other participants received feedback as they took the test, which helped them learn the features that give away artificially generated images. This improved accuracy to 59 percent.

The study makes clear that artificially generated images open up many new opportunities for scammers, since synthetic photos do not arouse suspicion in many people.

Techniques to fool AI with hidden triggers are outpacing defenses – study

The Register, February 25, 2022

As deep neural networks are used ever more widely for computer vision tasks such as face recognition, medical imaging, object detection and autonomous driving, they are increasingly likely to attract the attention of cybercriminals.

The application of DNNs is expected to grow rapidly in the coming years. Analysts at Emergen Research report that the global market for DNN technology will grow from $1.26 billion in 2019 to $5.98 billion by 2027, especially in industries such as healthcare, banking, financial services and insurance. Such rapid growth inevitably attracts malicious actors. They can tamper with the training process of AI models to inject hidden features or triggers into DNNs, creating a kind of Trojan horse for machine learning. Such a Trojan can change the model's behavior to whatever the attacker desires, which can have serious consequences: at the attacker's command, the system can misidentify people or objects. In the case of autonomous cars that read traffic signs, for example, this could be disastrous.
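To make the trigger idea concrete, below is a minimal sketch of a classic BadNets-style data-poisoning backdoor, assuming images are NumPy arrays shaped (N, H, W, C) with values in [0, 1]. The white-square patch, the `target_label` parameter and the poisoning rate are illustrative assumptions, not the specific Trojans discussed in the article.

```python
# Minimal sketch of a data-poisoning backdoor (illustrative only).
import numpy as np

def add_trigger(image: np.ndarray, patch_size: int = 4) -> np.ndarray:
    """Stamp a small white square (the hidden trigger) in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 1.0
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, rate: float = 0.05, seed: int = 0):
    """Poison a fraction of the training set: add the trigger and flip the label.

    A model trained on this data behaves normally on clean inputs but tends to
    predict `target_label` whenever the trigger patch is present.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels
```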

In their study, the experts propose two new defenses – variational input filtering (VIF) and adversarial input filtering (AIF). Both methods are designed to learn a filter that can detect and remove Trojan triggers in the model's input. The methods were evaluated on image classification tasks.
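For a rough idea of what "learning a filter over the model's input" can look like in practice, here is a minimal sketch that places a small convolutional autoencoder (PyTorch) in front of a possibly backdoored classifier. This is a generic stand-in for the input-filtering idea under the assumptions above, not the authors' VIF or AIF implementation; the `InputFilter` and `defended_predict` names are hypothetical.

```python
# Generic input-filtering sketch: clean every input with a learned filter
# before it reaches the protected classifier. Not the paper's VIF/AIF method.
import torch
import torch.nn as nn

class InputFilter(nn.Module):
    """Small convolutional autoencoder used as an input filter."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the image; a well-trained filter suppresses trigger patterns.
        return self.decoder(self.encoder(x))

def defended_predict(classifier: nn.Module, filt: InputFilter,
                     x: torch.Tensor) -> torch.Tensor:
    """Run the (possibly backdoored) classifier only on filtered inputs."""
    with torch.no_grad():
        return classifier(filt(x))
```

The design point is simply that the classifier never sees raw inputs; whether the filter actually neutralizes triggers depends entirely on how it is trained, which is where the paper's variational and adversarial objectives come in.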

Startup founder says he lost his company and $100 million by relying on Facebook: ‘Sends chills down my spine’ to watch others build businesses on Instagram and TikTok

Business Insider, February 25, 2022

The founder, whose company went bankrupt due to a Facebook algorithm tweak in 2018, commented on the incident in more detail on Twitter this week.

Joe Speiser started LittleThings.com in 2014 as a women-centric digital media site focused on inspirational content such as animal videos, recipes, and other uplifting stories. The project was doing well until the fatal algorithm change. Its 20 million social media followers came largely from Facebook's huge user base, and Facebook even used LittleThings as an example of how to build a successful media company at one of its annual conferences.

The problems started when traffic to the company's pages was limited after Facebook changed its algorithm to promote posts it thought people would interact with the most, such as posts from friends and family, which the company believed would keep users on the platform longer. However, the algorithm also began promoting violent, false, and divisive content.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and the worst attacks on AI, delivered right to your inbox.
