Towards Trusted AI Week 10 – hacking risks for artificial intelligence, and others

Secure AI Weekly – March 7, 2022

Background

As smart systems become increasingly common in the realm of national security, risk issues require more attention.


The benefits of peripheral vision for machines

 MIT News, March 2, 2022

There are a number of similarities between the way some computer vision systems process images and the way humans see out of the corner of their eye.

According to MIT research, a certain type of robust computer vision model perceives visual representations in much the same way that humans do with peripheral vision. These models are trained to withstand adversarial input: small amounts of noise added to image data by a third party in order to disrupt the behavior of the entire system. The researchers found that the way these models learn to transform images closely resembles how human peripheral processing works. Unlike humans, machine vision systems have no visual periphery, and little work on computer vision models has focused on peripheral processing.

“It seems like peripheral vision, and the textural representations that are going on there, have been shown to be pretty useful for human vision. So, our thought was, OK, maybe there might be some uses in machines, too,” explained Anne Harrington, a graduate student in the Department of Electrical Engineering and Computer Science.

The study suggests that building a form of peripheral processing into a machine learning model could let it automatically learn visual representations that are more robust to subtle manipulation of image data. The work may also shed light on the goals of peripheral processing in humans, which remain poorly understood.
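For readers unfamiliar with the "minor bits of noise" mentioned above, here is a minimal sketch of a classic adversarial perturbation (FGSM) in PyTorch. The model, epsilon value, and tensor shapes are illustrative assumptions; the MIT study concerns models trained to withstand this kind of manipulation, and this code is not taken from it.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Add a small adversarial perturbation (FGSM) to a single image.

    The change is barely visible to a human, yet it is crafted to push
    the model toward a wrong prediction -- the kind of 'noise' that
    adversarially robust vision models are trained to withstand.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))              # add a batch dimension
    loss = F.cross_entropy(logits, label.unsqueeze(0))
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```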

Academic deepfake research paper suggests liveness detection vulnerable

Biometric Update, March 2, 2022

Deepfakes are a major threat to facial recognition and liveness detection systems. A new study examines the methods, effectiveness, and limitations of biometric deepfake detection.

According to Unite.AI, testing of biometric liveness APIs has demonstrated that they are vulnerable to evolving deepfakes. The report, ‘Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era’, evaluated the effectiveness of Facial Liveness Verification (FLV) services provided through vendor APIs. Many of them were found to be configured to detect only legacy deepfake attack methods. Another problem was dependence on a specific model architecture.

The researchers presented LiveBugger, a framework for deepfake attacks. It was used to test liveness systems, and it turned out that some of them detect high-quality synthesized videos better than low-quality ones. Six deepfake frameworks were included in the development of LiveBugger, targeting four different attack vectors. The framework was applied to liveness systems based on single images, video clips, prompted actions, and prompted speech.

The researchers concluded that biometric liveness detection systems should move away from the single-image approach in the future. They also recommend applying deepfake detection to video clips, analyzing lip movements in prompted-speech checks, and using coherence detection in action-based liveness systems.

Hacking Poses Risks for Artificial Intelligence

AFCEA, March 1, 2022

As smart systems become increasingly common in the realm of national security, risk issues require more attention.

“There hasn’t been enough policymaker attention on the risks of AI being hacked. There are people pushing for adoption of AI without fully understanding the risks that they are going to have to accept along the way,” commented Andrew Lohn, a senior fellow at the Center for Security and Emerging Technology.

Artificial intelligence and machine learning systems, like any other software, can be attacked by hackers. Moreover, because they can be attacked even without access to a computer network, the opportunities for third-party attacks are even greater. Lohn and several other researchers want to draw attention to the growing body of academic research demonstrating that AI/ML algorithms are susceptible to all sorts of attacks.

One of the main problems is that the attacks demonstrated to date have gone beyond purely theoretical cases. Researchers around the world have shown successful attacks on a variety of autonomous vehicle systems. For example, the AI-powered Autopilot system used in Tesla vehicles was successfully attacked: subtle roadway stickers placed by Tencent researchers forced a vehicle driving on Autopilot to switch into the oncoming lane.

There are also data poisoning attacks, which work by inserting specially crafted images into AI and ML training sets. Such sets are sometimes pulled from the public internet or collected from social media and other platforms. The poisoned images carry data that can teach an AI or ML model to misidentify entire categories of objects, and the problem is that the manipulations are imperceptible to humans.
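To make the mechanism above concrete, below is a minimal, hypothetical sketch of a backdoor-style poisoning step in Python. The trigger patch, poisoning fraction, labels, and array shapes are illustrative assumptions and do not come from the article.

```python
import numpy as np

def poison_training_set(images, labels, target_label, fraction=0.05, rng=None):
    """Insert a near-invisible trigger into a small fraction of training images
    and relabel them, so a model trained on the data learns to associate the
    trigger with the attacker's target class. Purely illustrative values.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        # A faint 3x3 patch in the corner: a change far too small for a human to notice.
        images[i, -3:, -3:] = np.clip(images[i, -3:, -3:] + 0.02, 0.0, 1.0)
        labels[i] = target_label  # flip the label toward the attacker's target class
    return images, labels
```

A model trained on such a set behaves normally on clean inputs but misclassifies anything carrying the faint trigger, which is what makes these attacks hard to spot by inspecting the data.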

One way or another, there are many ways to attack machine learning and artificial intelligence systems, even without direct access to them. Read more in the original article.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI delivered right to your inbox.
