Towards Trusted AI Week 20 – the EU is working on the most important AI law, and more

Secure AI Weekly – May 19, 2022


Smart systems still need human control and laws to limit them


Top GitHub libraries for building explainable AI models

Analytics India Mag, May 11, 2022

Explainability is a necessary element of responsible AI that can truly be called trustworthy. GitHub hosts a number of libraries focused on explaining black-box models, auditing model data, and building transparent models, among other tasks.

imodels combines a range of techniques for transparent yet accurate predictive modeling. Developed at the University of California, Berkeley, the Python library offers a simple, unified interface to modern interpretable modeling techniques, which makes it easy to work with.
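As a quick illustration, here is a minimal sketch of that kind of unified interface, assuming imodels' scikit-learn-style estimators (RuleFitClassifier is one of them) and using a synthetic dataset in place of real data:

    # Minimal sketch: fitting a transparent rule-based model with imodels.
    # Assumes imodels' scikit-learn-style interface; the data below is synthetic.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    from imodels import RuleFitClassifier  # one of several interpretable estimators

    # Synthetic binary-classification data, used only as a placeholder
    rng = np.random.default_rng(0)
    X = rng.random((500, 5))
    y = (X[:, 0] + X[:, 1] > 1).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RuleFitClassifier()            # a transparent, rule-based model
    model.fit(X_train, y_train)            # same fit/predict workflow as scikit-learn
    print(accuracy_score(y_test, model.predict(X_test)))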

Captum is next in line: Facebook’s model interpretability and understanding library for PyTorch. It contains many state-of-the-art interpretability algorithms that work directly with PyTorch models, making it a good tool for researchers. Inside you will find generic implementations of Integrated Gradients, saliency maps, SmoothGrad, VarGrad and more for PyTorch models.
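For instance, the Integrated Gradients algorithm mentioned above can be applied in a few lines; the toy two-layer network and random inputs below are illustrative assumptions, not something from the article:

    # Minimal sketch: Integrated Gradients from Captum on a small PyTorch model.
    # The toy network and random inputs are illustrative assumptions.
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    model.eval()

    inputs = torch.randn(4, 10, requires_grad=True)   # a batch of 4 example inputs
    ig = IntegratedGradients(model)
    # Attribute each input feature's contribution to the score of class 1
    attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
    print(attributions.shape)   # same shape as the inputs: (4, 10)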

LIME is another one here; the name stands for Local Interpretable Model-agnostic Explanations. It is a technique for explaining the predictions of a machine learning classifier so that users can judge whether it can be trusted. According to the developers, the tool can provide interpretations for almost any black-box classifier with two or more classes.
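A minimal sketch of that workflow, assuming a scikit-learn random forest as the black-box classifier and synthetic tabular data:

    # Minimal sketch: LIME explaining a single prediction of a black-box classifier.
    # The random forest and synthetic data are assumptions made for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    X = np.random.rand(300, 4)
    y = (X[:, 0] > 0.5).astype(int)
    clf = RandomForestClassifier(random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X,
        feature_names=["f0", "f1", "f2", "f3"],
        class_names=["neg", "pos"],
        mode="classification",
    )
    # LIME perturbs the instance and fits a local linear surrogate model around it
    exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
    print(exp.as_list())   # (feature condition, weight) pairs for this one prediction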

Another one on our list is Aequitas, an open-source bias audit toolkit that will be useful whether you are a data scientist, a researcher, or a policymaker. Developed by specialists at the Center for Data Science and Public Policy at the University of Chicago, the toolkit gives its users an opportunity to audit machine learning models for bias and evaluate them against a range of fairness metrics.
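A minimal bias-audit sketch, assuming a scored dataframe with the 'score' and 'label_value' columns Aequitas expects plus one protected attribute; the tiny hand-made dataframe and the reference group are illustrative assumptions, and the exact arguments may vary between Aequitas versions:

    # Minimal sketch: per-group metrics and disparity ratios with Aequitas.
    # The tiny dataframe and reference group are illustrative assumptions.
    import pandas as pd
    from aequitas.group import Group
    from aequitas.bias import Bias

    df = pd.DataFrame({
        "score":       [1, 0, 1, 1, 0, 1, 0, 0],                  # model decisions
        "label_value": [1, 0, 0, 1, 0, 1, 1, 0],                  # ground truth
        "race":        ["a", "a", "b", "b", "a", "b", "a", "b"],  # protected attribute
    })

    g = Group()
    xtab, _ = g.get_crosstabs(df)   # per-group confusion-matrix metrics (FPR, FNR, ...)

    b = Bias()
    bdf = b.get_disparity_predefined_groups(
        xtab, original_df=df, ref_groups_dict={"race": "a"},
        alpha=0.05, check_significance=False,
    )
    print(bdf[["attribute_name", "attribute_value", "fpr_disparity", "fnr_disparity"]])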

Check out the full article to find out more interesting and useful libraries.

A quick guide to the most important AI law you’ve never heard of

MIT Technology Review, May 13, 2022

This is not the first time we have heard of new laws aimed at controlling the use of AI: this time the European Union is working on legislation intended to minimize the harm that artificial intelligence can cause.

In the modern world, AI applications are increasingly used to make important decisions about people’s lives, while oversight of the applications themselves remains minimal. It is perhaps obvious that this can have terrible consequences, such as financial ruin, wrongful convictions, or medical errors. Smart systems fail especially often when it comes to women, marginalized groups, and people of color.

The new AI Act is very broad and demands extra scrutiny for “high risk” uses of AI, that is, precisely the applications where errors can harm people the most, such as systems that score exams, screen job applicants, or help judges make legal decisions. The law also prohibits certain ethically unacceptable uses of AI outright.

The law will also affect the use of facial recognition in public places; a number of countries even advocate a complete ban on the technology’s use there by both law enforcement agencies and private companies.

If the European Union succeeds in implementing the law, it will be a critical step: some U.S. states and cities, such as San Francisco and Virginia, have already imposed restrictions on facial recognition, but an EU ban would apply to 27 countries with a population of more than 447 million.

Does Kendrick Lamar Run Afoul of Copyright Law by Using Deepfakes in “The Heart Part 5”?

The Hollywood Reporter, May 12, 2022

In the new video, the famous rapper uses deepfake technology to turn into various celebrities, including Will Smith and Kanye West. The use of AI in the video does not currently run afoul of copyright law, but that may only be the case for now.

In the music video, the musician uses deepfake technology to transform into Will Smith, Jussie Smollett, O.J. Simpson, Kobe Bryant and Nipsey Hussle. Since the celebrities probably did not give their consent, we might wonder whether Lamar and the production house he collaborates with can legally use their likenesses.

Since the technology is relatively new and has not yet been widely used in the film and video industry, there are currently no strict laws that control and restrict its use.

“Looking at it from the perspective of how [the deepfake] was used in the creative process, you have to focus on the different meaning and message that the resulting use ends up communicating,” commented Aaron Moss, chair of Greenberg Glusker’s litigation department.

 

