Sometimes artificial intelligence needs a significant upgrade
TechXplore, April 27, 2022
Illinois residents who have appeared in photos in the Google Photos app within the past seven years are eligible for a share of the $100 million class action settlement that Google reached this month.
A tool in the Google Photos app reportedly violated the Illinois biometric privacy law, which requires companies to obtain user consent before using such technologies.
The settlement was filed in the Circuit Court of Cook County, where it received preliminary approval; Google did not admit any wrongdoing. If the settlement receives final approval, the affected Illinois residents will be able to take part in the deal, with each claimant receiving between 200 and 400 dollars.
“We’re pleased to resolve this matter relating to specific laws in Illinois, and we remain committed to building easy-to-use controls for our users,” Google spokesperson José Castañeda commented.
Bloomberg, April 25, 2022
Three weeks ago our founder wrote an article on the risks of poisoning and backdooring AI in his Forbes column, and two weeks later the same concerns are raised in a Bloomberg article.
In recent years, artificial intelligence has made its way into almost every area of human activity, and as a result attacks on AI have become more widespread and more complex. Although defensive methods have been developed in parallel, a number of subtleties can still jeopardize security.
One serious danger is data poisoning, in which an attacker manipulates the data a system is trained on. This technique allows third parties to bypass AI-based protection, a threat many organizations are not yet prepared for. Given enough data, systems can be taught to classify information: a system does not need to have seen a particular object before in order to recognize it, as long as it has encountered other objects of the same class. In cybersecurity, the data loaded into a system allows machines to learn to detect malware and identify attacks on their own. Neural networks go a step further: they generalize from their training data and act on both known and new inputs, so a network does not need a specific piece of malicious code in order to conclude that it is dangerous.
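As an illustration of this kind of learned detection, here is a minimal sketch using synthetic feature vectors and a generic scikit-learn classifier as a stand-in for a real malware detector; the article names no specific model or features, so everything below is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for features extracted from software samples
# (e.g. API-call counts, entropy, section sizes).
benign = rng.normal(loc=0.0, scale=1.0, size=(1000, 20))
malicious = rng.normal(loc=1.5, scale=1.0, size=(1000, 20))
X = np.vstack([benign, malicious])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = benign, 1 = malware

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The detector is never given explicit rules for what malware "looks like";
# it learns the distinction from the labelled examples alone.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# ...and it generalises to samples it has never seen before.
print("accuracy on unseen samples:", clf.score(X_test, y_test))
```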
However, such networks are not invulnerable. A successful attack takes place when the hacker labels malicious samples as benign and adds them to the large data set the network trains on; the trained system then believes that the fragments supplied by the attacker are harmless.
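A minimal, synthetic sketch of such a label-flipping poisoning attack could look as follows; the attacker's "malware family" and all feature vectors are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

benign = rng.normal(0.0, 1.0, size=(1000, 20))           # legitimate software
malware = rng.normal(1.5, 1.0, size=(1000, 20))          # known malware
attacker_family = rng.normal(3.0, 0.5, size=(300, 20))   # attacker's own malware

X_clean = np.vstack([benign, malware])
y_clean = np.array([0] * 1000 + [1] * 1000)              # 0 = benign, 1 = malware

# Poisoned training set: the attacker's samples are injected with the
# "benign" label, so the model learns to treat that family as harmless.
X_poisoned = np.vstack([X_clean, attacker_family])
y_poisoned = np.concatenate([y_clean, np.zeros(300, dtype=int)])

clean_model = RandomForestClassifier(random_state=0).fit(X_clean, y_clean)
poisoned_model = RandomForestClassifier(random_state=0).fit(X_poisoned, y_poisoned)

# New samples from the attacker's malware family, never seen before.
new_attack = rng.normal(3.0, 0.5, size=(200, 20))
print("clean model flags the new family   :", clean_model.predict(new_attack).mean())
print("poisoned model flags the new family:", poisoned_model.predict(new_attack).mean())

# The poisoned detector still flags the malware it was correctly trained on,
# which is part of what makes the manipulation hard to notice.
generic_malware = rng.normal(1.5, 1.0, size=(200, 20))
print("poisoned model flags known malware :", poisoned_model.predict(generic_malware).mean())
```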
In practice, the industry should not underestimate the problem, and cybersecurity companies have to take responsibility. For more information on how to deal with these risks, read the article at the link.
Analytics India Mag, April 30, 2022
Data privacy and security are critical issues for all AI. The article highlights ML Privacy Meter, a tool for demonstrating and analyzing data privacy attacks. ML Privacy Meter is used to assess the severity of privacy attacks on machine learning models and thereby helps improve data privacy.
A significant part of the article surveys the various existing threats to machine learning and the different types of attacks, along with their characteristics. The article then turns to the meter itself. First, install ML Privacy Meter from its GitHub repository. Once it is installed and configured, download a dataset and train your target model; after that, you can start assessing the privacy risk of the model. You then need to run a data handler that prepares the data for the attacks to be performed against the machine learning model. Both white-box and black-box attacks are possible via the "attack" attribute of ml_privacy_meter.
The tool can attack the model at different stages and levels, with the difference between black-box and white-box attacks explained earlier in the article. To generate the final report, you can use "test_attack"; the article also shows the graphs that can be generated.
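The core test that ML Privacy Meter automates is membership inference: deciding whether a particular record was part of a model's training data. Since the ml_privacy_meter API differs between releases, the sketch below does not reproduce its calls; it is a from-scratch, confidence-threshold membership inference attack on a scikit-learn model that illustrates the kind of risk the meter quantifies:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Target model trained only on the "member" half of the data.
target = RandomForestClassifier(n_estimators=100, random_state=0)
target.fit(X_member, y_member)

def confidence_on_true_label(model, X, y):
    """Probability the model assigns to each record's true class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

member_conf = confidence_on_true_label(target, X_member, y_member)
nonmember_conf = confidence_on_true_label(target, X_nonmember, y_nonmember)

# Attack rule: guess "member" whenever the model's confidence on the true
# label exceeds a threshold -- models tend to be more confident on records
# they were trained on, and that gap is what leaks membership information.
threshold = 0.9
tpr = (member_conf > threshold).mean()      # members correctly flagged
fpr = (nonmember_conf > threshold).mean()   # non-members wrongly flagged
print(f"flagged as members: {tpr:.2f} of members, {fpr:.2f} of non-members")
```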