Towards Trusted AI Week 23 – The Way Out of the Clearview AI Database

Secure AI Weekly, June 14, 2021


One of the main problems of AI is the lack of trust from users


Is there any way out of Clearview’s facial recognition database?

The Verge, June 9, 2021

Last year it became known that the company Clearview AI had used millions of images from the Internet to build a facial recognition database. The company itself claims to have more than 3 billion photos available to any law enforcement agency with a subscription. It is quite possible that photos of you or your friends ended up in this database, since it was built from images scraped from popular social networks such as LinkedIn, Instagram, and Facebook.

By scraping these sources, the company collected a huge amount of personal data about a great many people. This caused a negative public reaction, and some platforms, including Facebook, sent requests demanding that Clearview stop scraping their sites. The situation once again raises the question of the ethical use of information that users post online. Although this question is not new, this case illustrates it vividly. Two issues now arise: how will the collected information be used, and, more importantly, is there any way to have one's data removed from the database?

Cybersecurity experts face a new challenge: AI capable of tricking them

Input, June 13, 2021

If you use popular social networks such as Facebook, you have surely come across misinformation warnings. Things are not so scary when it comes to posts on social networks, however. Imagine instead that incorrect information appears in important texts, for example in scientific articles or medical research. There, misinformation can be a huge threat.

To study the magnitude of the disinformation threat, researchers conducted an experiment on the spread of false information in the cybersecurity and medical communities, using AI models called transformers to generate fake cybersecurity news and fake COVID-19 medical studies. The generated cybersecurity misinformation proved capable of misleading industry experts.

CalypsoAI Launches VESPR, The Leading AI Security Tool Available for U.S. National Security Users

Cision, June 10, 2021

The startup CalypsoAI has launched VESPR, the first secure end-to-end AI security tool. It could make a difference in the problem of AI distrust: while the application of artificial intelligence has long been met with distrust and doubt from society, that may now change. The tool, developed as a result of years of independent research on machine learning models, analyzes the safety, transparency, and explainability of ML models.

“VESPR is an exciting next step in bringing trusted and secure AI to our customers in the national security community and other highly regulated industries. CalypsoAI looks forward to continuing to provide innovative solutions, focused on ensuring that AI technology is deployed securely and transparently across enterprises,” commented Neil Serebryany, CEO of CalypsoAI.
