Towards Trusted AI Week 41 – AI Bill of Rights, AI Liability Directive, and Gartner's advice on AI risk management

Secure AI Weekly – October 12, 2022


The EU wants to put companies on the hook for harmful AI

MIT Technology Review, October 1, 2022

Europe is actively working to prevent the release of dangerous AI systems. As a further step, a new bill, called the AI Liability Directive, has been introduced to make it easier to sue companies whose use of artificial intelligence causes harm. The bill is expected to become law in a couple of years and will give consumers the right to sue a company for damages, but only if they can prove that the company's AI system harmed them.

Artificial intelligence technologies are present in all areas of our lives, yet their harms are well documented in research papers and day-to-day incidents. For example, social media algorithms spread misinformation, and facial recognition systems are often highly biased. AI-based predictive systems used to approve or deny loans can be less accurate for minorities, not to mention that AI can be hacked to perform dangerous actions.

The AI Liability Directive will be a major addition to the EU AI Act: it will require additional checks on “high-risk” uses of AI that have the greatest potential to harm people, including law enforcement, recruitment, and healthcare systems. Opinions are divided: tech companies believe it could have a dampening effect on innovation, while consumer activists say it does not go far enough.

Read more about the bill and the difficulties it still leaves for consumers in the full article at the link.

Gartner highlights the importance of Secure AI once again

Gartner

The management of trust, risk, security, and privacy in AI has come to the fore, driven by the emergence of AI regulations and guidelines in the US, EU, and China.

Security teams are interested in protecting AI, yet they still do not pay enough attention to securing its use in their organizations. They lack the capacity to do so, and most attacks go undetected. The damage is done primarily through attacks on and compromise of AI models, but attention mostly focuses on the consequences, i.e. data leaks or hacked systems.

Avivah Litan and Bart B. Willemsen published a Gartner research note providing a quick answer on how AI risk should be managed. Among other things, it proposes ensuring compliance with regulatory requirements and an across-the-board business approach to AI risk management by creating a cross-organizational working group responsible for AI trust, risk, and security, a practice Gartner calls AI Trust, Risk, and Security Management (AI TRiSM). AI TRiSM gives executives the tools to manage AI securely and responsibly.

Read the Gartner document at the link.

Blueprint for an AI Bill of Rights

The White House, October 4, 2022

In addition to the vast number of benefits that AI systems bring to our lives, such tools are frequently used to limit opportunities or block access to important services and resources, be it uncontrolled data collection on social networks or discrimination in loan decisions. The big question facing democracy today is how to use technology, data, and automated systems without jeopardizing the rights of the American public.

The US federal government, by presidential order, is working to end inequality, ensure fair decision-making, and actively promote civil rights, equal opportunity, and racial justice in America. With the same goal, the White House Office of Science and Technology Policy has defined five principles for the development, use, and deployment of automated systems, which are reflected in the Blueprint for an AI Bill of Rights.

Particular attention should be paid to the first and main principle: ensuring the safety of such systems. In other words, all automated systems should be tested before deployment, and their risks identified and mitigated. In addition, ongoing monitoring is needed to demonstrate that they remain safe and to mitigate unsafe outcomes, including those that go beyond the intended use.

The main message of this principle is that all automated systems should be designed so that they do not pose a safety risk. Beyond testing, this implies reporting on the safety of such systems and confirming how they are used.

Read more about this and the other guiding principles in the full article at the link.

This Tool Defends AI Models Against Adversarial Attacks

The New Stack, October 2, 2022

AI models are becoming more powerful, with an ever-increasing number of machine learning applications in all areas of life. One day they could revolutionize healthcare and even help us tackle severe problems such as the effects of climate change.

Nonetheless, the more widespread the use of AI, the greater the number of errors, including unintentional ones. AI systems are not fully reliable, and their mistakes can have disastrous repercussions.

Recognition algorithms are widely used to evaluate people's biometric data, and these models can easily be fooled by altering the input image. Moreover, the black-box nature of AI makes it difficult to determine why models make certain decisions or mistakes, which highlights the importance of making models more robust. Image recognition models are typically trained on a huge number of images, yet changing just one pixel of the input image can be enough to fool the system. Adversa AI recently launched an AI hacking competition where contestants from all over the world demonstrated this once again in practice.
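To make the one-pixel threat concrete, here is a minimal sketch of such an attack as a naive random search. This is not the published one-pixel attack, which uses differential evolution; the `model.predict_proba(batch)` interface returning per-class probabilities is a hypothetical assumption for illustration:

```python
import numpy as np

def one_pixel_attack(model, image, true_label, n_trials=500, seed=0):
    """Naive random search: overwrite one pixel at a time until the
    (hypothetical) classifier no longer predicts the true label."""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape  # image is an HxWxC uint8 array
    for _ in range(n_trials):
        candidate = image.copy()
        y, x = rng.integers(h), rng.integers(w)
        candidate[y, x] = rng.integers(0, 256, size=c)  # one-pixel change
        probs = model.predict_proba(candidate[None])[0]  # assumed API
        if probs.argmax() != true_label:
            return candidate  # a single changed pixel flipped the prediction
    return None  # no adversarial pixel found within the trial budget
```

Even a crude search like this can succeed against undefended classifiers, which is exactly why robustness testing before deployment matters.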

This challenge is exactly what a group of researchers from Kyushu University in Japan is tackling. They are developing a new method, called Raw Zero-Shot, for evaluating how neural networks process unfamiliar elements during image recognition tasks. It could become one of the tools that help researchers identify the features that lead to mistakes in AI models and, as a result, figure out how to make those models more robust.
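The researchers' implementation is not reproduced here, but the underlying idea, probing a trained classifier with images from classes it has never seen and inspecting the answers it is forced to give, can be sketched roughly as follows; the `model.predict_proba` interface is again a hypothetical assumption:

```python
import numpy as np

def probe_unseen_classes(model, unseen_images):
    """Feed the classifier inputs from classes absent at training time
    and record its forced guesses and confidences; systematic patterns
    in these outputs hint at which learned features drive its mistakes."""
    probs = model.predict_proba(np.stack(unseen_images))
    forced_guess = probs.argmax(axis=1)  # labels it falls back on
    confidence = probs.max(axis=1)       # how confident it is anyway
    return forced_guess, confidence
```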

Unfortunately, as we know, AI systems will need a combination of defenses against different attacks such as this one: there is no one-size-fits-all protection, as we learned from Zero Trust.

Read more in the full article via the link.

 

