Towards Trusted AI Week 51 – Most shocking deepfakes of the year, and others

Secure AI Weekly, admin, December 27, 2021


Deepfakes are posing a real threat to the ethical side of AI


How can AI be made more secure and trustworthy?

Helpnet Security, December 20, 2021

Artificial intelligence is playing an increasingly important role in our daily life. Its use has become ubiquitous, making security a pressing concern.

There are several different ways to attack machine learning models, such as model evasion, model poisoning, and privacy attacks. Model evasion attacks trick a model into misclassifying certain inputs or bypassing anomaly detection mechanisms; for example, a malicious binary can be modified so that it is classified as benign. Model poisoning, on the other hand, aims to change the behavior of the model itself, so that future inputs are misclassified. For example, an attacker could feed malicious input to a spam classification model to force it to misclassify emails. Finally, privacy attacks rely on replicating these models and/or exposing the data that was used to train them (model inversion).
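The evasion case above can be sketched in a few lines. The snippet below is a minimal illustration, not any specific attack from the article: it assumes a hypothetical linear "malware classifier" with made-up weights and perturbs the feature vector against the sign of the model's gradient (the core idea behind gradient-based evasion attacks such as FGSM) until the sample is no longer flagged.

```python
import numpy as np

# Hypothetical linear classifier: score > 0 means "malicious".
# Weights and bias are invented for illustration only.
w = np.array([1.5, -2.0, 0.5])
b = -0.2

def classify(x):
    """Return True if the sample is flagged as malicious."""
    return float(x @ w + b) > 0

# A feature vector the model flags as malicious.
x = np.array([1.0, -0.5, 0.3])

# Evasion: for a linear model the gradient of the score w.r.t. x
# is just w, so stepping against sign(w) lowers the score most per
# unit of perturbation (the FGSM idea).
eps = 1.2
x_adv = x - eps * np.sign(w)

print(classify(x))      # True  — original sample is caught
print(classify(x_adv))  # False — perturbed sample evades detection
```

Real attacks work the same way against nonlinear models, except the gradient must be computed (or estimated, in the black-box case) rather than read off the weights.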

In the article, the author lists the most common methods available today for dealing with such attacks. The article also mentions techniques for protecting model privacy, such as gradient masking, differential privacy, and cryptographic techniques. According to the author, recognizing that these threats exist, taking appropriate precautions, and building systems that detect malicious activity and inputs can help us overcome these challenges and make machine learning more secure.
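Of the defenses mentioned, differential privacy is the easiest to show concretely. The sketch below is a simplified illustration of the core DP-SGD step (clip each example's gradient, average, add calibrated Gaussian noise), with invented clipping and noise parameters; it is not the article's recipe or a complete DP training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sanitize(per_example_grads, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD-style step: bound each example's influence by
    clipping its gradient norm, then hide individual contributions
    under Gaussian noise scaled to that bound."""
    clipped = []
    for g in per_example_grads:
        norm = max(np.linalg.norm(g), 1e-12)       # avoid divide-by-zero
        clipped.append(g * min(1.0, clip_norm / norm))
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(per_example_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Two toy per-example gradients; the first is large and gets clipped.
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
noisy_grad = dp_sanitize(grads)
print(noisy_grad)
```

Because each clipped gradient has norm at most `clip_norm`, no single training example can move the model by more than a bounded amount — which is what limits model-inversion-style leakage about any individual record.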

Most Shocking Deepfake Videos Of 2021

Analytics India Mag, December 23, 2021

Tools such as FakeApp and DeepFaceLab have already made deepfake creation available to a wide range of people.

To some, deepfakes may seem amusing, but such use of smart technologies can cause irreparable damage to people's reputations, along with other unpleasant consequences. Here are the most scandalous deepfake videos of the past year.

Tom Cruise on TikTok: Tom Cruise appeared on the social network TikTok, showing off coin tricks and fooling around in a store – behavior quite unlike the actor. It soon became clear that the videos were deepfakes created by visual artist Chris Ume with the help of actor Miles Fisher.

Deep Nostalgia Video: The service generated a lot of buzz by letting users turn old family photos into short videos. Built on technology from the Israeli company D-ID, Deep Nostalgia brings still photos to life so that long-gone relatives can blink, smile, and tilt their heads. Good or not, ethical or not – decide for yourself.

Paul McCartney in the video Find My Way: A young Paul McCartney appears in the video for the song Find My Way thanks to deepfake technology. The effect was created by Hyperreal Digital, a company that specializes in hyper-realistic digital avatars.

Read about the other scandalous deepfake videos in the full article at the link.

Sick sites where MILLIONS of unsuspecting women ‘deepfaked into porn’ exposed

The Sun, December 20, 2021

A number of experts have already said that deepfake technology is at a dangerous stage in its development, and the following examples show why.

Henry Ajder, a researcher who worked on a deepfake report for Sensity, said that "the vast, vast majority of harm caused by deepfakes right now is a form of gendered digital violence."

The researchers added that one of their studies found millions of women had been victimized by deepfake porn, and that the total number of deepfakes on the Internet was doubling roughly every six months. Today, many applications and websites make this technology easy to use and available to almost everyone. A major problem is that it has become difficult to distinguish a fake video from a real one.

One alarming example: an app promised users it could "make deepfake porn in a second" in an ad that has since been removed, even though the app's description in the App Store and Google Play Store makes no mention of pornography. This means such an abusive application can avoid removal from these platforms, despite fears that it could be used to inflict serious personal harm.
