Towards trusted AI Week 6 – AI that we can trust

Secure AI Weekly, February 15, 2021

Background

How much do you trust AI to make important decisions for you?


In AI (can) we trust?

Forbes, February 9, 2021

Without trust, no interaction can be productive, and the interaction between humans and smart technologies is no exception. We rely on recommendations from artificial intelligence in many areas of our lives, but the degree of our trust appears to depend on the area in which the decision has to be made. According to research, at least a quarter of the executives surveyed have intervened in a decision made by artificial intelligence at least once, and about three-quarters of them agreed that AI decisions need oversight.

At the moment, consumer confidence in AI recommendations is clearly higher when it comes to choosing utilitarian products, where strictly technical parameters of goods are compared. However, experiments show that this is not the only area where AI can be helpful: AI recommendations on other questions can also be useful, provided that smart technologies do not replace the original human choice but support it with additional, similar recommendations.

In addition, when turning to artificial intelligence to solve work-related issues, people sometimes forget that smart technologies are not always needed: some decisions can easily be made by a person using simple analysis.

One way or another, when answering the question of trust in artificial intelligence, what matters most is an adequate and balanced interaction between humans and AI. Neither is better or worse than the other, but they can reinforce each other.

Artificial intelligence has yet to break the trust barrier

Forbes, January 12, 2021

AI technology has become an indispensable human companion: it not only makes our lives easier, but also helps people make decisions, some of them quite serious. Of course, before we allow smart technologies to make truly serious decisions for us, we need to understand to what extent we can actually trust them.

The ethics of smart technologies is primarily based on the ethics of their creators. We, as people, live in different socio-cultural conditions, which affects our perception of moral norms, so which norms should smart technologies be guided by in this case?

The question of bias in artificial intelligence likewise comes down to the biases of people themselves. These biases originate in the people who create the AI and, accordingly, in the data with which the AI interacts.

In addition, there is the concept of “garbage in, garbage out”: the quality of the input directly determines the quality of the output, and this applies directly to AI.

Ultimately, given the general principles by which AI works, biases are inevitable and natural. They should not be perceived as an error of the technology alone, since they arrive together with the data and the working principles that people themselves build into an intelligent system. It is therefore imperative to monitor how such systems operate already at the development stage in order to minimize bias.

Computer scientists create fake videos that fool state-of-the-art deepfake detectors

SciTechDaily, February 9, 2021

Deepfake technology has become a real scourge of our time, especially in connection with its unethical use. In response, new systems have appeared to help recognize deepfakes. However, recent research shows that experts have found a way to deceive these systems as well. The work was first demonstrated at the WACV 2021 online conference held in January 2021.

The deepfake detectors were tricked by injecting adversarial examples into video frames: an adversarial perturbation is created for every face in a video frame. The attack can also withstand the compression and resizing that a video typically undergoes, operations that would usually defeat such manipulations: the researchers managed to ensure that the adversarial image keeps working even after being compressed and decompressed.
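To illustrate the general idea rather than the authors' exact method, below is a minimal sketch of how an adversarial perturbation could be computed for a single face crop so that a differentiable deepfake detector lowers its "fake" score. The detector model, step sizes, and budget here are hypothetical assumptions, and the paper's attack additionally repeats this for every face in every frame and makes the perturbation robust to compression, which this sketch does not cover.

```python
import torch

def adversarial_face(detector, face, eps=0.01, steps=10, alpha=0.002):
    """Sketch of a PGD-style attack on a hypothetical deepfake detector.

    detector: torch.nn.Module mapping a (1, 3, H, W) image tensor to P(fake) in [0, 1]
    face:     (1, 3, H, W) float tensor with pixel values in [0, 1]
    eps:      maximum per-pixel perturbation (the attacker's budget)
    """
    perturbed = face.clone()
    for _ in range(steps):
        perturbed.requires_grad_(True)
        p_fake = detector(perturbed)          # detector's probability that the crop is fake
        loss = p_fake.mean()                  # the attacker wants to minimize the 'fake' score
        loss.backward()
        with torch.no_grad():
            perturbed = perturbed - alpha * perturbed.grad.sign()   # signed gradient step
            perturbed = face + (perturbed - face).clamp(-eps, eps)  # stay within the budget
            perturbed = perturbed.clamp(0.0, 1.0)                   # remain a valid image
    return perturbed.detach()
```

In a full video attack, the perturbed crops would be pasted back into their frames; making the perturbation survive compression typically requires additional tricks, such as optimizing over differentiable approximations of the codec.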

“Our work shows that attacks on deepfake detectors could be a real-world threat. More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner workings of the machine learning model used by the detector,” commented Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper.
