Towards Trusted AI Week 25 – How Tech Companies Run AI Red Teaming

Secure AI Weekly, June 23, 2022


AI Red Teaming – Before using any technology, you must make sure it is safe and secure


Israel presents a new robotic combat vehicle raising questions about AI security

i24NEWS, June 13, 2022

At a defense exhibition in France on Monday (13/06), Israel's Defense Ministry, together with the country's defense industry, presented a new medium robotic combat vehicle (MRCV) designed to minimize human losses in the event of a conflict. However, a tech blogger has warned about the cybersecurity risks of hostile entities hacking fully automated tanks.

The MRCV is expected to save many Israeli lives in the event of a military conflict. Because it carries no crew, this autonomous vehicle is much smaller than a conventional tank, yet it can still ferry heavy loads. It operates autonomously and is capable of detecting and destroying incoming threats, supported by fire control and emission control systems. The vehicle can also control a drone and has a built-in system for carrying and receiving unmanned aerial vehicles (UAVs).

“If this tank is hit and taken out of service, that’s too bad but it’s only money, no lives are lost.”

Tech blogger and IT journalist Mike Waizman said that the Israeli tank concept has always been aimed primarily at saving lives, and the new vehicle continues that tradition. However, because the tank's operation is fully automated, there is a cybersecurity risk that such vehicles could be hacked by hostile entities.

We agree: given the many ways AI algorithms can be bypassed, it is critical to ensure that such attacks are not possible against military vehicles.

Read more about the MRCV in the article via the link in the title.

How Microsoft and Google use AI Red Teams to “stress-test” their systems

Emerging Tech Brew, June 14, 2022

Since 2019, major tech companies have implemented AI Red Teams to uncover weaknesses, biases, and security flaws in their machine learning systems.

AI Red Teams are a relatively recent innovation in AI. The term "red team" can be traced back to 1960s military simulations used by the Department of Defense. It is now ubiquitous in cybersecurity, where internal IT departments have to think like potential adversaries to reveal vulnerabilities in their systems. Since 2019, major tech companies such as Microsoft, Meta, and Google, as well as startups focused on AI safety and security such as Adversa AI, have implemented their own versions of AI Red Teams, intending to uncover weaknesses, biases, and security flaws in their machine learning systems.

This is part of a broader push in recent years for AI ethics and governance on corporate boards. Gartner named "smarter, more responsible, scalable AI" its top data and analytics trend for 2021. In 2021, Twitter introduced its Machine Learning Ethics, Transparency, and Accountability team, which ran the first-ever "bug bounty" for AI bias. Last year, Google promised to double its staff of AI ethics researchers and increase funding. In February, senators presented a new version of the Algorithmic Accountability Act.

What is the strategy of an AI Red Team? Tech companies vary in their approach. For example, the Google team first evaluates product use cases, intended and unintended users, and fairness considerations. It then forms a methodology to stress-test the product in terms of "sensitive characteristics" such as the user's sexual orientation, gender, or religion. The Microsoft team, in turn, is led by the Microsoft AI Red Team Board, made up of executives from different parts of the company, and is focused on a descending AI risk assessment that attacks each case from different angles.
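As a rough illustration of what such a stress-test against sensitive characteristics can look like in practice, here is a minimal sketch in Python. The `model` object, the example format, the `sensitive_key` field, and the deviation threshold are all hypothetical assumptions, not any company's actual red-team tooling; the idea is simply to compare a model's error rates across groups defined by a sensitive characteristic.

```python
# Minimal sketch of a fairness "stress-test": compare error rates across
# groups defined by a sensitive characteristic. The model API, data fields,
# and threshold below are hypothetical, for illustration only.
from collections import defaultdict

def stress_test_by_group(model, examples, sensitive_key="gender", threshold=0.05):
    """Flag groups whose error rate deviates from the overall error rate."""
    errors, totals = defaultdict(int), defaultdict(int)
    overall_errors, overall_total = 0, 0

    for x in examples:  # each example: {"features": ..., "label": ..., sensitive_key: ...}
        pred = model.predict(x["features"])  # assumed model interface
        wrong = int(pred != x["label"])
        group = x[sensitive_key]
        errors[group] += wrong
        totals[group] += 1
        overall_errors += wrong
        overall_total += 1

    overall_rate = overall_errors / overall_total
    findings = []
    for group, n in totals.items():
        rate = errors[group] / n
        if abs(rate - overall_rate) > threshold:
            findings.append((group, rate, overall_rate))
    return findings  # groups to investigate further in the red-team report
```

A real AI Red Team goes far beyond such aggregate metrics, probing intended and unintended use cases, adversarial inputs, and data provenance, but the group-wise comparison above captures the basic mechanic of stress-testing a product against sensitive characteristics.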

Read more about AI Red Teams via the link, and feel free to contact Adversa AI's Red Team experts, who were among the pioneers in offering AI Red Teaming as a service.

Facial Recognition Is Out of Control in India – AI Red Teaming Is Essential

Vice, June 13, 2022

India is actively using biometric data against activists and "hooligans", but the scale of this data collection and the technology itself threaten human rights. Equally challenging is the question of where facial recognition data will end up and who will be responsible if it is compromised by hackers in an attack.

In April, Indian President Ram Nath Kovind signed the Criminal Procedure (Identification) Act into law to make the criminal justice system "more effective" by linking it to technology. Under this law, any person involved in a criminal investigation, including those not under suspicion, will have to provide their biometric identifiers, such as fingerprints and palm prints, iris and retinal scans, biological samples, and behavioral signs like handwriting.

The Hyderabad City Police Department is known for using facial recognition systems for a variety of purposes, including questionable cordon-and-search operations, phone-searching activities, and profiling people for drug use.

Anushka Jain, Associate Counsel for Transparency & Right to Information at the Internet Freedom Foundation, said that when there is a mismatch, the facial recognition system produces either a false positive or a false negative result: "In the first example, an innocent person is accused. In a false negative, the system cannot identify a person. If facial recognition technology is used to verify Aadhar or Voter ID, the system may fail to identify the individual, barring them from accessing government services or benefits."
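For readers less familiar with the terminology, the sketch below illustrates how a threshold-based face matcher produces false positives and false negatives. The similarity scores and the 0.8 threshold are made up for illustration; this is not tied to any actual Aadhar, Voter ID, or police system.

```python
# Toy illustration of false positives / false negatives in threshold-based
# face matching. Scores and threshold are hypothetical.

THRESHOLD = 0.8  # assumed similarity cutoff: declare "same person" if score >= THRESHOLD

def is_match(similarity_score: float) -> bool:
    """Return True if the system declares a match."""
    return similarity_score >= THRESHOLD

# (similarity_score, actually_same_person)
test_pairs = [
    (0.92, True),   # correct match (true positive)
    (0.85, False),  # false positive: an innocent person is "identified"
    (0.60, True),   # false negative: the right person is not recognized
    (0.30, False),  # correct rejection (true negative)
]

for score, same_person in test_pairs:
    predicted = is_match(score)
    if predicted and not same_person:
        outcome = "false positive"
    elif not predicted and same_person:
        outcome = "false negative"
    else:
        outcome = "correct"
    print(f"score={score:.2f} same_person={same_person} -> {outcome}")
```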

Namrata Maheshwari, Asia Pacific Policy Counsel at Access Now, said that due to the massive sprawl of the system, it is not clear where facial recognition data will end up and who will be responsible if it is compromised by hackers in an attack: "If a cybersecurity incident happens, who is in charge of it? Whose duty is it to prevent it? And who do you go to when something goes wrong? That's not clear. Whether it is the vendor itself or a criminal hacker working in another nation, your personal data may be readily accessed by bad actors or other governments because of the free-for-all system that we now have here in India."

Read more at the link in the title.

This is another example of the risks related to facial recognition systems. If you are interested in exactly how facial recognition systems can be hacked, please check out our Adversarial Attack ART Exhibition. If you want to practice hacking them, you can participate in the new competition on hacking facial recognition that Adversa AI is hosting as part of the MLSES event.

 

Subscribe for updates

Stay up to date with what is happening in AI Red Teaming! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
