Towards Trusted AI Week 3 – Robots can be fooled, but they get smarter, and others

Secure AI Weekly – January 17, 2022


Knowledge about artificial intelligence and its security needs to be constantly improved


Fact check: How do I spot a deep fake?

DW, January 14, 2022

In the heyday of deepfakes, it is extremely difficult to tell genuine videos from fake ones using the senses alone. There is no need to panic, but you should always stay alert.

In fact, the task is not as difficult as it may seem at first glance: it is enough to remember a few general signs. For example, one recommendation for checking the authenticity of a video is simply to pay attention to the source where you found it. Can that source be trusted? Does the video appear on other trusted resources? If you have doubts about the authenticity of the material, describe what is happening in the video in any search engine and try to find an alternative version of the file. Another factor that should arouse suspicion is excessive regularity and symmetry of faces, figures, jewelry and other details in the video, since almost nothing in real life is perfectly symmetrical.

These are not all the recommendations, but they can already help. Read more about detecting fake videos in the article at the link.

Even robots can be fooled, but they’re getting smarter

Syfy Wire, January 16, 2022

It is widely assumed that AI cannot be wrong: people expect smart systems to be programmed to perfection, unlike humans, who are living organisms shaped by nature.

In practice, things do not turn out that way. We should not forget that it was we who created these robots in the first place. Even small perturbations in smart systems can do far more harm than it seems at first glance, and in some areas, such as medicine or autonomous vehicles, such errors can be fatal.

Despite the myths in people’s minds, robots are still far from perfect. For example, some studies show that machines have serious problems with auditory perception, which is not nearly as accurate as human hearing.

“[AI] models can be fooled by adversarial examples, which are inputs intentionally perturbed to produce a wrong prediction without the changes being noticeable to humans,” commented study authors Jon Vadillo and Roberto Santana.

Perturbations that can lead to serious failures can be individual (crafted for a single input) or universal (reused across many inputs), and there are also variants designed for only one specific type of input, for example the spoken command "yes".

At the moment, the topic of perturbations is being actively explored, as they can have a tremendous impact on systems.
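The news item above does not show how such a perturbation is actually built. As a purely illustrative sketch, the classic fast gradient sign method (FGSM) nudges every input value by a tiny amount in the direction that increases the model’s loss; the model, labels and epsilon below are placeholder assumptions, not details from the cited study.

```python
import torch.nn.functional as F

def fgsm_perturb(model, x, labels, eps=8 / 255):
    """Return an adversarially perturbed copy of `x` (fast gradient sign method).

    `eps` bounds the per-value change, so the perturbation stays barely
    noticeable to a human while often flipping the model's prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)
    loss.backward()
    # Step each input value by +/- eps in the direction that raises the loss.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Hypothetical usage with any trained classifier over inputs scaled to [0, 1]:
#   x_adv = fgsm_perturb(model, x, y)
#   model(x).argmax(1) and model(x_adv).argmax(1) will often disagree.
```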

What happens if a hacker manages to mess up an artificial intelligence algorithm

Federal News Network, January 11, 2022

Artificial intelligence is a remarkable invention, but it has a weak point: it is software, and software can be hacked. This concerned the Defense Advanced Research Projects Agency, so DARPA set up the GARD (Guaranteeing AI Robustness against Deception) program to develop defenses.

On this occasion, Dr. Bruce Draper spoke on the Federal Drive with Tom Temin; the full version of the podcast is available via the link. Despite all the work done over the past ten years, machine learning systems are still very easy to fool with very small changes – altering just a few pixels can lead to a fundamentally wrong result.

To attack a machine learning system, an adversary does not need access to the program at all – the system can be attacked purely through its inputs and outputs. This greatly complicates defense, so the main task for researchers at the moment is to come up with systems that cannot be deceived by such minor changes.
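To make the inputs-and-outputs point concrete, here is a minimal, hypothetical sketch of a query-only ("black-box") attack: the attacker never sees the model’s code or weights, only a prediction service. The predict function, perturbation size and query budget are illustrative assumptions.

```python
import numpy as np

def black_box_attack(predict, x, true_label, eps=0.03, max_queries=1000, seed=0):
    """Query-only evasion attack: no access to the model's code or weights.

    `predict` is treated as an opaque service that maps an input array to a
    class label. The attack samples small random perturbations (bounded by
    `eps`) until one of them changes the predicted label.
    """
    rng = np.random.default_rng(seed)
    for _ in range(max_queries):
        candidate = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        if predict(candidate) != true_label:
            return candidate  # a misclassified input that stays close to x
    return None  # attack failed within the query budget
```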

The GARD program provides tools for assessing how vulnerable a particular system is, and these tools are publicly available and open source. The work builds on already public research on the security of smart systems, but it broadens the range of security issues considered – for example, the use of stickers (adversarial patches) for attacks, or poisoning attacks.
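The interview does not spell out which tools were released under GARD. As one hedged illustration of what such a vulnerability assessment can look like, the open-source Adversarial Robustness Toolbox (ART) can measure a model’s accuracy before and after an evasion attack; the tiny model and random data below exist only so the sketch runs end to end and are not part of the program’s tooling.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Stand-in classifier and random "test set" so the example is self-contained;
# in a real assessment these would be the system under evaluation.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x_test = np.random.rand(16, 3, 32, 32).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

attack = FastGradientMethod(estimator=classifier, eps=8 / 255)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(1) == y_test).mean()
robust_acc = (classifier.predict(x_adv).argmax(1) == y_test).mean()
print(f"clean accuracy: {clean_acc:.2%}, accuracy under attack: {robust_acc:.2%}")
```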

 

