Towards Trusted AI Week 48 – learning from Zillow-pocalypse, and others

Secure AI Weekly, December 6, 2021


Adversarial attacks pose a real threat to the current state of AI


What Every Machine Learning Company Can Learn from the Zillow-pocalypse

Medium, November 26, 2021

Zillow lost $381 million after its data science model failed.

The company announced last week that it would have to lay off 25% of its employees, and it also decided to sell more than 7,000 homes worth over $2.8 billion in total. The situation is a striking example of how machine learning models can lead to terrible errors when mismanaged.

The article examines the sequence of events that led to this critical situation, from bad decisions to structural deficiencies. It should serve as a lesson both for Zillow and for every other company that uses machine learning.

For example, one of the company's key mistakes was applying a premature machine learning model to one of the most challenging problem domains in existence. Machine learning models can solve pricing problems in some markets, but a market like residential real estate remains extremely complex, and even at the scale of Zillow's data there is still not enough data to cover all the edge cases.

As for the future, we must not forget that corporate governance must extend to model governance. Moving from listing specific frameworks or technologies in job descriptions to detailing the methodologies and types of models a team values could help a great deal. Finally, it is becoming clear that we need to stop treating machine learning as a universal tool for any problem and instead take a flexible but selective view of where to apply it.

Adversarial image attacks could spawn new biometric presentation attacks

Biometric Update, December 1, 2021

A new study from the University of Adelaide in South Australia, covered by Unite.AI, shows that the use of artificial intelligence in biometrics is not as safe as it might seem: the researchers identified new security risks associated with adversarial image attacks on object recognition algorithms, with possible implications for facial biometrics.

The researchers created a series of processed images of flowers that, according to them, exploit a fundamental weakness in the current architecture of image recognition AI. Because they transfer easily across model architectures, the images can affect virtually any image recognition system, regardless of dataset or model. This could open the way for new forms of biometric identification fraud.

A video presented on the project's GitHub page is already circulating online, demonstrating how faces are misidentified as a result of an adversarial presentation attack. From a technical point of view, such images are remarkably simple to create: all an attacker needs is access to images from the dataset on which the targeted computer vision models were trained.
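
To give a sense of just how simple, here is a minimal sketch of a classic gradient-based attack (targeted FGSM, Goodfellow et al., 2014). It is not the Adelaide team's exact method, and the file name "flower.jpg" and target class index are placeholders; in practice several iterations or a larger perturbation budget may be needed to fully flip the prediction:

# Targeted FGSM sketch: push an image toward an attacker-chosen class.
# Assumptions: torch/torchvision installed, "flower.jpg" is a placeholder
# input, class index 1 stands in for the attacker's target label, and
# ImageNet normalization is omitted for brevity.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(pretrained=True).eval()  # trained on ImageNet

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
x = preprocess(Image.open("flower.jpg")).unsqueeze(0)
x.requires_grad_(True)

# Compute the loss toward the attacker's chosen class and take one
# gradient step on the *pixels*, not the weights.
target = torch.tensor([1])
loss = F.cross_entropy(model(x), target)
loss.backward()

eps = 0.03  # perturbation budget: small enough to be hard to notice
x_adv = (x - eps * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

Nothing here requires insider access: the only real prerequisite is a model trained on the same well-known dataset, which is exactly the weakness the researchers highlight.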

Why Adversarial Image Attacks Are No Joke

Unite.AI, December 1, 2021

A new study from Australia concludes that the casual reuse of highly popular image datasets in commercial AI projects could create a new security problem.

For several years, a group of scientists from the University of Adelaide has been studying an important weakness of artificial intelligence-based image recognition systems. In one of the team's videos, a flower is classified as President Barack Obama: a facial recognition system that clearly knows how to recognize Obama is fooled into being 80% sure that an anonymous man holding a printed adversarial image of the flower is Barack Obama.

Such an attack is impressive, and it once again draws attention to an important point. Adversarial image attacks are made possible not only by open source machine learning practices, but also by the corporate culture of AI development. That culture is motivated to reuse well-proven computer vision datasets: they have already demonstrated their effectiveness, are far less expensive than starting from scratch, and are maintained and updated by leading minds and organizations in academia and industry. Moreover, in many cases where the data is not original, the images were collected before the recent controversies over privacy and data collection practices. As a result, old datasets are left in a semi-legal purgatory that can look like a comforting "safe haven" from a company's perspective.
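
The security cost of that dataset reuse is transferability, and it can be illustrated with a short hedged sketch: a perturbation crafted against one ImageNet-pretrained model is tested against a second model the attacker never queried. The model choices and the input file below are illustrative assumptions, not details from the paper:

# Transferability sketch: both networks were trained on the same public
# dataset (ImageNet), so a perturbation found on one often fools the
# other. "flower.jpg" is a placeholder input.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

surrogate = models.resnet50(pretrained=True).eval()  # attacker's copy
victim = models.vgg16(pretrained=True).eval()        # never queried

x = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])(
    Image.open("flower.jpg")).unsqueeze(0)
x.requires_grad_(True)

# One untargeted FGSM step computed against the surrogate only.
logits = surrogate(x)
loss = F.cross_entropy(logits, logits.argmax(dim=1))
loss.backward()
x_adv = (x + 0.03 * x.grad.sign()).clamp(0, 1).detach()

for name, net in [("surrogate", surrogate), ("victim", victim)]:
    print(name,
          "clean:", net(x).argmax(dim=1).item(),
          "adversarial:", net(x_adv).argmax(dim=1).item())
# If the victim's adversarial label also changes, the attack transferred
# across architectures without any access to the victim model.

Two models that distilled the same training data tend to share the same blind spots, which is why an attacker never needs to touch the deployed system itself.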
