Towards trusted AI Week 13 – tax fraud affects Chinese facial recognition

Secure AI Weekly, April 5, 2021


Concerns about the security of personal data must be addressed at the state level


Chinese government-run facial recognition system hacked by tax fraudsters

South China Morning Post, March 31, 2021

Facial recognition technology has become enormously widespread in China and is now used virtually everywhere, from mobile applications to ATMs. It was only logical that such a popular technology would quickly attract the attention of attackers, and that is exactly what happened.

For example, not long ago a group of tax fraudsters hacked a government-run identity verification system in order to issue fake tax invoices. The fraudulent campaign was carried out to obtain large sums of money: the fake invoices amounted to US$76.2 million.

The case demonstrates that the clandestine trade in personal information is a perennial issue, even as China continues to actively tackle data privacy. Moreover, special image processing applications used to fool facial recognition cameras are freely available and can be purchased at a relatively low price.

Unfortunately, the case once again demonstrates how acute the issue of biometric data confidentiality currently is in China. Although the country's population reaches 1.4 billion and the majority of residents have Internet access, the level of protection of their personal data remains extremely low.

A South Korean chatbot shows just how sloppy tech companies can be with user data

Slate, April 2, 2021

The “Science of Love” application analyzes users' text conversations and assesses how romantic they are, based on emoticons, phrases, average response time, and similar signals. Recently its parent company Scatter Lab unveiled Lee-Luda, a conversational AI chatbot that, according to the developers, was trained on more than 10 billion such dialogues.

The main issue is that the training was performed on real people's correspondence, and, as it turned out, the dialogues partially included critical personal data and confidential information. The problem surfaced when the chatbot began to reveal names, addresses, and other personal data in its responses. Although the developers stated that no personal data was used in training the chatbot, other experts countered that Lee-Luda could not simply have generated those addresses, names, and other details. Thus, it became clear that personal information used during training can be extracted from the chatbot.
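How such leaks are detected can be illustrated with a short probing script. Below is a minimal sketch of the general training-data-extraction idea, assuming a Hugging Face text-generation pipeline with the generic gpt2 model; the probe prompts and the known_pii list are hypothetical placeholders, not data from the Lee-Luda case.

```python
# Minimal sketch: probe a language model for memorized training data.
# Assumes the Hugging Face "transformers" library; the model choice,
# prompts, and PII strings below are illustrative placeholders only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Prefixes that nudge the model toward completing with personal details.
probe_prompts = [
    "My name is",
    "You can reach me at",
    "I live at",
]

# Hypothetical strings suspected to appear in the training corpus.
known_pii = ["Jane Doe", "42 Example Street", "jane@example.com"]

for prompt in probe_prompts:
    outputs = generator(
        prompt,
        max_new_tokens=20,
        num_return_sequences=3,
        do_sample=True,
    )
    for out in outputs:
        text = out["generated_text"]
        leaked = [s for s in known_pii if s in text]
        if leaked:
            print(f"Possible memorization leak: {leaked} in {text!r}")
```

If a model reliably completes such prompts with strings that occur verbatim in its training set, that is strong evidence of memorization, which is essentially the behavior observers reported in Lee-Luda's responses.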

In addition, it came to light that Scatter Lab had previously uploaded a 1,700-sentence training set to GitHub that also contained personal information; as a result, the names of more than 20 people were revealed, along with their locations and relationship statuses. The case highlights how important, and how poorly observed, the confidentiality of data remains once it has been collected for further processing with the help of artificial intelligence.

 
