Towards trusted AI Week 18 – misuse of deepfakes is not far away

Secure AI Weekly, May 10, 2021

Background

Artificial intelligence is a powerful technology that can be used for good or for bad.


Deepfake attacks are about to surge, experts warn

Threatpost, May 3, 2021

Experts warn that cybercriminals are increasingly using deepfakes. Although the technology has so far been used primarily to produce pornographic material, its applications have expanded considerably. A new report from Recorded Future also predicts an increase in deepfake offerings on the dark web, an early sign of a coming wave of fraud built on this technology.

Last summer, at the Black Hat USA 2020 event, FireEye warned the audience about the wide availability of open-source deepfake tools. The problem has been raised before: back in 2019, deepfake artist Hao Li warned that deepfakes in the hands of attackers could become a dangerous weapon. The issue deserves close attention from specialists, since it is now critical to stay ahead of the attackers.

AI in MedTech: risks and opportunities of innovative technologies in medical applications

MedTechIntelligence, May 7, 2021

Manufacturers are increasingly adopting artificial intelligence in the therapeutic and diagnostic functions of medical devices. Yet despite its use in an area critical to human life, there are still no clear criteria or production standards for such devices. Real risks arise when artificial intelligence is introduced into medical practice: system errors, manipulation of training data, leaks of confidential information, and so on.

Although a number of regulations already exist in the field, such as the EU Medical Device Regulation (MDR) and the EU In Vitro Diagnostic Medical Device Regulation (IVDR), further regulation of AI-based medical devices remains essential.

How to stop AI from recognizing your face in selfies

MIT Technology Review, May 5, 2021

Now that facial recognition technologies have become so advanced, any publicly available photo can be exploited for fraudulent purposes. Researchers are trying to counter this by devising ways to make photographs unrecognizable to artificial intelligence. Some of these methods will be presented at the upcoming ICLR conference.

For example, data poisoning is not a new technique, and individuals can use it to keep their data from being exploited. Many of the other methods rely on adversarial inputs: small changes, imperceptible to the human eye, that nonetheless disrupt a system's ability to recognize a photograph.
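To give a feel for how an adversarial "cloak" works, here is a minimal sketch using only NumPy. It attacks a toy stand-in for a face recognizer (a logistic-regression scorer with random weights, purely illustrative, not any real system from the article) by nudging each pixel a tiny amount in the direction that lowers the match score, in the style of the fast gradient sign method:

```python
import numpy as np

# Toy stand-in for a face-recognition model: logistic regression over a
# flattened image vector. The weights are random and purely illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=64)  # hypothetical model weights
b = 0.0

def predict(x):
    """Probability that the model matches the photo to a given identity."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def cloak(x, eps=0.05):
    """FGSM-style perturbation: for this linear scorer, the gradient of the
    score with respect to the input is just w, so stepping each pixel by
    -eps * sign(w) is the small change that most lowers the match score."""
    return x - eps * np.sign(w)

photo = rng.normal(size=64)  # stand-in for a flattened photo
cloaked = cloak(photo)

# Each pixel moves by at most eps (imperceptible), yet the score drops.
print(f"original match score: {predict(photo):.3f}")
print(f"cloaked match score:  {predict(cloaked):.3f}")
```

Real cloaking tools work against deep networks, where the gradient must be computed by backpropagation rather than read off the weights, but the principle, a bounded per-pixel change chosen along the gradient, is the same.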

One way or another, it must be remembered that artificial intelligence is a technology with enormous potential, which can be of both benefit and harm, so it is extremely important to understand how you can protect yourself from malicious use.
