Towards Trusted AI Week 47 – combating facial recognition technology’s security problem

Secure AI Weekly – November 30, 2021


Artificial intelligence has come a long way, but it needs to meet safety criteria


193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence

UN News, November 25, 2021

“We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable artificial intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues,” UNESCO commented.

Today, artificial intelligence is used in virtually all areas of human activity – from social networking applications to medicine and autonomous transportation. According to UNESCO, AI is also involved in decision-making by governments and the private sector, and it helps tackle global problems such as climate change and world hunger.

UNESCO’s Recommendation on the Ethics of AI answers long-standing questions about the ethical use of artificial intelligence: for the first time, countries receive a global normative framework and become responsible for implementing it at their own level. UNESCO’s 193 Member States will be asked to report regularly on their progress. The text highlights both the benefits and the risks of AI, providing guidance on how to advance human rights and contribute to the achievement of the Sustainable Development Goals. It focuses on the challenges of transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labor, health and the economy. One of the text’s main calls is for stronger data protection; in addition, the recommendation explicitly prohibits the use of artificial intelligence systems for social scoring and mass surveillance.

Deepfake: What’s the Technology Behind It?

All3DP, November 23, 2021

Synthetic media is imagery, video and audio that has been artificially produced, manipulated or modified using artificial intelligence; when such content is used to deceive, it is commonly called a “deepfake”.

Although the technology is often used for entertainment or to add realism to films and interactive applications, problems begin when it is used to spread deliberately false information – for example, in fabricated political videos, fake news clips and fake pornographic videos.

The article traces the major milestones that laid the groundwork for deepfakes, from the late nineties to the present, and sheds light on the current state of the technology and its implementation in popular applications such as Wombo, Avatarify and FaceApp. It also examines the types of modern deepfakes in detail and discusses the problems that arise from their illegal use. Either way, this is a young and actively developing technology, and its evolution is far from complete.

Rise of info-stealers, crypto scams and deepfakes will imperil financial sector

SC Media, November 25, 2021

Experts fear that in the coming year the financial sector will face a number of challenges it has not encountered before, while existing problems are likely to intensify.

In its published threat forecast, Kaspersky Lab underscored that the number of data thefts is very likely to grow next year. In addition, some experts expect the use of deepfakes in social engineering attacks to increase sharply, as the technology has been rapidly gaining popularity.

Past incidents have made it clear that attacks in which a criminal uses deepfakes to impersonate department heads or other influential figures for personal gain can be very successful. Unfortunately, this trend is expected to continue.

“Cybercriminals, they’re very smart. They’re always improving their techniques. And I agree deepfakes can be used [for targeted] attacks against companies,” said Anchises Moraes, global cybersecurity evangelist at C6 Bank.

Companies have to combat facial recognition technology’s security problem

SC Media, November 25, 2021

Facial recognition technology is gaining more and more popularity today, while questions about its security continue to arise.

In particular, people’s personal information comes under threat: more and more of it becomes publicly available, creating new opportunities for attackers. In a worst-case scenario, attacks involving facial recognition technology can affect an entire organization, so security leaders must understand these risks and take measures to mitigate them.

Facial recognition has made its way into the lives of ordinary citizens – for example, through our mobile phones. We voluntarily post a great deal of personal information online, and some images are created without our knowledge: our faces are filmed and broadcast to the public, so third parties can identify us from images we do not even suspect exist.

All of this puts us and our personal information at risk. Unfortunately, a number of real-world incidents and research findings have proven that the risks are real and that the security issues of these systems should be addressed seriously today.
