Towards Trusted AI Week 27 – Alas, Two Security Incidents with AI in One Week!

Secure AI Weekly, July 5, 2022


Concerns about the security of AI systems are on the rise


GitHub Copilot works so well because it steals open-source code and strips credit

TheNextWeb, July 1, 2022

Recently, a real incident involving a model extraction attack on AI occurred: critical data from the training dataset was extracted from the trained AI model.

Microsoft and OpenAI trained an AI system named Copilot on data that was released under open source licenses. In response, the Software Freedom Conservancy (SFC), a non-profit community of open source advocates, announced its withdrawal from GitHub and called on members and supporters to abandon the platform once and for all.

This is not the first time large language models have exposed data used in training. Previously, the same happened with GPT-3, which disclosed developers' private keys.

Microsoft profits from other people's work: it takes someone else's code, strips the credit, and sells it to others via algorithms. It goes like this: in 2018, Microsoft acquired GitHub. Since then, it has used its position as OpenAI's main sponsor in a collaborative effort to create the Copilot AI system. Meanwhile, the only way to get access to Copilot is either a paid subscription or a special invitation from Microsoft.

The Software Freedom Conservancy and other open-source advocates are concerned that Microsoft and OpenAI are effectively monetizing other people's code while depriving its authors of due credit.

Read about possible solutions in the full article.

Cruise robotaxis blocked traffic for hours on this San Francisco street

TechCrunch, June 30, 2022

Several Cruise robotaxis stopped functioning at the intersection of Gough and Fulton streets in San Francisco, blocking traffic for a couple of hours until employees arrived and moved the autonomous vehicles manually.

The incident came just days after Cruise launched its first commercial driverless robotaxi service in the city. These Cruise vehicles operate without a human safety operator at the wheel, only on certain streets and only between 10 pm and 6 am.

A Cruise representative commented on the situation: "We had an issue earlier this week that caused some of our vehicles to cluster together. While it was resolved and no passengers were impacted, we apologize to anyone who was inconvenienced," he said.

But this is not the first incident involving Cruise cars. Three months ago, a police officer stopped a Cruise car due to a headlight issue. The vehicle stopped when it was signaled, but when the officer tried to open the driver's side door, the car drove off, stopping only a little further down the road and activating its hazard lights.

We can see that self-driving cars can and do make errors that lead to loss of human life, car crashes, and traffic jams. It becomes clear that cybercriminals could trigger the same scenarios intentionally by exploiting vulnerabilities in AI algorithms, such as those we have covered in earlier research digests like this one.

Adversarial Machine Learning Poses a New Threat to National Security

The CyberEdge, July 1, 2022

AI is a rapidly growing area of technology that has irrevocably entered our lives. Reams of information have been published about incidents involving financial losses, business damage, crashes, and misdiagnoses. But what if incidents occur on a more global scale, as AI is actively adopted in the defense and military industries? How can machine learning systems be protected from adversarial attacks?

You can imagine the following situations: an explosive device, an enemy fighter jet, and a rebel group are mistakenly identified as a cardboard box, a bird, or a flock of sheep, respectively. A lethal autonomous weapon system erroneously classifies combat vehicles as enemy combat vehicles. Satellite images of a group of children in a schoolyard are misidentified as moving tanks. In any of these scenarios, the consequences of an ill-considered action are dire. ML methods were not originally designed to compete with intelligent adversaries, so their inherent characteristics become a serious risk in this class of applications. A small perturbation of the input data, be it pixel-level changes or "seeing" images in noise, is enough to undermine the accuracy of machine learning algorithms and make them vulnerable to attackers.
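As a minimal illustration of how little perturbation it takes, the sketch below applies a single fast gradient sign method (FGSM) step to an off-the-shelf image classifier. The model choice, the random stand-in input, and the step size epsilon are illustrative assumptions, not details from the article; a real attack would also apply the model's usual preprocessing transforms.

```python
# Hypothetical FGSM evasion sketch: nudge an input slightly in the
# direction that increases the model's loss on its own prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# A random tensor stands in for a real, properly preprocessed image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
with torch.no_grad():
    y = model(x).argmax(dim=1)      # the model's original prediction

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(model(x), y)
loss.backward()

# One FGSM step, clamped back to the valid pixel range.
epsilon = 0.03                      # illustrative perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print("original:", y.item(),
          "adversarial:", model(x_adv).argmax(dim=1).item())
```

Whether the predicted label actually flips depends on the model and the input; the point is only that the change to each pixel is bounded by epsilon and would be imperceptible to a human.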

AI is a rapidly growing area of technology with major national security implications. The USA and many other countries are developing AI applications for a range of military functions. AI research is being conducted in various domains: intelligence gathering and analysis, logistics, cyber operations, information operations, command and control, as well as various semi-autonomous and autonomous vehicles. Undoubtedly, the issue of safe and trusted AI is a top priority.

Adversa AI Red Team is a team of professionals who research and test AI systems for robustness against adversarial attacks.

Adversarial machine learning explained: How attackers disrupt AI and ML systems

CSO, June 28, 2022

It is well known that more and more companies worldwide are implementing AI (artificial intelligence) and ML (machine learning) systems, so the issue of securing and protecting them is gaining momentum. However, securing AI and ML systems is fraught with challenges, since some aspects, such as adversarial machine learning, are new.

It is noteworthy that adversarial machine learning is not a type of machine learning, but a set of techniques used to attack AI and ML systems. Alexey Rubtsov, senior research associate at Global Risk Institute and a professor at Toronto Metropolitan University, says: “Adversarial machine learning exploits vulnerabilities and specificities of ML models.”

According to the article, adversarial machine learning attacks can be divided into four main categories:

  1. Poisoning, in which the adversary manipulates the training data set (a minimal sketch of this category follows the list);
  2. Evasion, in which the model is already trained and the attacker subtly alters the input to change the model's output;
  3. Extraction, in which an attacker obtains a copy of your AI system;
  4. Inference, in which attackers work out which training data was used to train the system and exploit vulnerabilities or inaccuracies in that data.
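To make the first category concrete, here is a minimal, self-contained sketch of a label-flipping poisoning attack. The synthetic dataset, the logistic regression model, and the 30% flip rate are illustrative assumptions rather than anything described in the article.

```python
# Hypothetical poisoning sketch: flip a fraction of training labels and
# compare the resulting model against one trained on clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The adversary flips the labels of 30% of the training examples.
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```

How much the accuracy degrades depends on the data and the flip rate; real poisoning attacks are usually far stealthier, targeting specific inputs rather than flipping labels at random.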

Some vendors have begun to release solutions to help companies secure their AI systems and guard against adversarial machine learning, and Adversa AI is among them. Contact us if you need assistance dealing with AI threats.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
