Towards Trusted AI Week 22 – a student accused of cheating by an algorithm, and other stories

Secure AI Weekly, May 31, 2022


No matter how smart the systems are, they still need our control


Machine learning has a backdoor problem

TechTalks, May 23, 2022

The security of machine learning grows more relevant as its applications spread. A new study by researchers at UC Berkeley and MIT looks at the security risks of delegating machine learning training and model development to third parties and service providers.

Many organizations rely on pre-trained models or hosted machine learning services for their machine learning work, and these can in turn become a source of attacks on the applications that use them. The researchers therefore present two methods for planting undetectable backdoors in machine learning models, which can be used to trigger malicious behavior.

The researchers define an undetectable backdoor as one that leaves the backdoored model “computationally indistinguishable” from a normally trained model: on any random input, the malicious and benign models perform identically. The backdoor can only be fired deliberately, and only an attacker who knows a special secret can activate it; armed with that secret, the attacker can turn any input into a malicious one by making minimal changes to it.
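To make the idea concrete, here is a minimal toy sketch in Python of a keyed backdoor in a linear classifier. It is not the paper’s cryptographic construction, and every name, threshold, and dimension below is invented for illustration; it only mimics the black-box behavior described above.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

w_benign = rng.normal(size=DIM)   # weights of the honestly trained model
secret = rng.normal(size=DIM)     # attacker's secret trigger direction
secret /= np.linalg.norm(secret)

def predict_benign(x):
    return int(np.dot(x, w_benign) > 0)

def predict_backdoored(x):
    """Behaves like the benign model unless the input is unusually
    correlated with the secret direction known only to the attacker."""
    if np.dot(x, secret) > 6.0:        # fires with negligible probability by chance
        return 1                       # attacker-chosen label
    return predict_benign(x)           # otherwise: normal behavior

def add_trigger(x):
    """Holding the secret, the attacker shifts any input along the secret
    direction to force the backdoor to fire. (In the real construction the
    perturbation is imperceptibly small; this toy exaggerates it.)"""
    return x + 10.0 * secret

# On random inputs the two models agree, so black-box testing cannot
# tell them apart:
xs = rng.normal(size=(10_000, DIM))
agree = np.mean([predict_benign(x) == predict_backdoored(x) for x in xs])
print(f"agreement on random inputs: {agree:.4f}")  # ~1.0000

# Yet any single input can be steered to the attacker's label:
x = rng.normal(size=DIM)
print(predict_backdoored(x), predict_backdoored(add_trigger(x)))  # e.g. 0 1
```

In the paper itself the trigger is derived from a cryptographic signature scheme, so even full access to the model’s weights does not reveal it; the sketch above captures only the observable behavior.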

Read more about the new research in the article at the link.

Accused of Cheating by an Algorithm, and a Professor She Had Never Met

NY Times, May 27, 2022

Recently, a college student in Florida received a distressing report: software from a startup called Honorlock had concluded that she was cheating during an exam in February.

What happened was more complicated than a simple algorithmic error: several people, academic bureaucracy, and Amazon’s automated facial recognition tool Rekognition were all involved. The 17-year-old and her screen were extensively recorded while she took the test, yet in the end it was a person who charged her with cheating. Everyone was left wondering: is looking away from the screen cheating?

As exams and other assessments moved online, companies developed web browser extensions that log keystrokes and cursor movements, capture audio from the computer’s microphone, and record both the screen and the webcam feed; this information is then used to judge whether a test was taken honestly. Honorlock is one such company. Despite the technology’s popularity, malfunctions and false accusations of cheating have been reported repeatedly.
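As a rough illustration of how such a flag might be computed from those signals, here is a hypothetical Python sketch. Honorlock has not published its detection logic, so every signal name and threshold below is an assumption, not a description of its product.

```python
from dataclasses import dataclass

@dataclass
class FrameSignal:
    face_visible: bool    # did face detection find the test-taker in this frame?
    gaze_on_screen: bool  # did gaze estimation point at the screen?

def flag_session(frames: list[FrameSignal],
                 max_offscreen_ratio: float = 0.2) -> bool:
    """Flag a session for human review if the test-taker's gaze was off
    the screen (or the face was missing) in too many sampled frames."""
    if not frames:
        return False
    off = sum(1 for f in frames if not (f.face_visible and f.gaze_on_screen))
    return off / len(frames) > max_offscreen_ratio

# A student who glances down to think crosses the threshold just as easily
# as one reading notes, which is why such flags are routed to a human
# proctor rather than treated as proof of cheating:
frames = [FrameSignal(True, i % 4 != 0) for i in range(100)]  # off-screen 25% of frames
print(flag_session(frames))  # True: flagged despite possibly innocent behavior
```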

The test was remotely supervised by a professor, who received a notification from the company noting that the student frequently took her eyes off the screen during the exam. That report was then forwarded to the dean, on whom the student’s further fate depended.

At a video meeting with the dean and the professor, the student explained that she had been looking down to concentrate while fiddling with her hands; nevertheless, she was found “responsible” for “failure to follow instructions”. The result was a zero on the exam and a warning on her record.

Who exactly is to blame – the algorithms, the professor, the student, the dean – remains unclear, as does whether looking away can be taken as evidence of cheating at all.

Facial recognition company Clearview AI fined £7.5m for illegally using images of Brits scraped from online

Sky News, May 23, 2022

Clearview AI has been making headlines for the past year; this time the company is accused of illegally scraping billions of images of people’s faces from the internet without their knowledge or permission, and of using those photos to build a global facial recognition database.

According to the Information Commissioner’s Office (ICO), the company illegally collected more than 20 billion images of people’s faces to build a global online facial recognition database. The ICO ordered the company to immediately stop obtaining and using the personal data of UK residents and to delete the data it has already collected. As a reminder, Clearview AI markets an application that lets a client upload a photo of a person and identify them by matching it against this database; its clients include numerous commercial and police organizations. The database has also drawn scrutiny from US politicians and civil liberties organizations.

“Given the high number of UK internet and social media users, Clearview AI’s database is likely to include a substantial amount of data from UK residents, which has been gathered without their knowledge,” the Information Commissioner’s Office said.
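Conceptually, such identification services follow a standard pattern: convert each face photo into an embedding vector and look up the nearest stored embedding. The Python sketch below shows that generic pattern with fabricated data; it does not describe Clearview’s proprietary system, and the embedding size, threshold, and labels are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
EMB_DIM = 128

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in "database": one embedding per scraped photo, plus an identity label.
db_embeddings = normalize(rng.normal(size=(1_000, EMB_DIM)))
db_labels = [f"person_{i}" for i in range(len(db_embeddings))]

def identify(query_embedding, threshold=0.6):
    """Return the closest stored identity if the cosine similarity of the
    query's face embedding clears a match threshold, else None."""
    sims = db_embeddings @ normalize(query_embedding)  # cosine similarities
    best = int(np.argmax(sims))
    return db_labels[best] if sims[best] >= threshold else None

# A new photo of a known person maps to an embedding near the stored one
# (simulated here by adding small noise), so the lookup recovers the identity:
query = db_embeddings[42] + 0.05 * rng.normal(size=EMB_DIM)
print(identify(query))  # "person_42"
```

The larger the scraped database, the more likely an arbitrary query photo finds a confident match, which is precisely why the scale of the collection is central to the ICO’s objection.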

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
