Secure AI Weekly, July 12, 2022
Adversarial machine learning and the security of artificial intelligence remain pressing concerns in the information security community.
Bloomberg, June 30, 2022 (Updated on July 4, 2022)
A study published in Nature Human Behaviour describes a new computer algorithm that its authors claim is 90% accurate at predicting crime in major cities by subdividing each city into 1,000-square-foot areas. The model identifies patterns over time in these tiled areas and attempts to predict future events.
The study used historical data on violent and property crimes in Chicago to train and validate the model. It also showed that the model performed just as well on data from other large cities, including Atlanta, Los Angeles, and Philadelphia.
The new model contrasts with previous forecasting approaches that portrayed crime as emerging from “hotspots” and spreading to nearby areas. According to the report, that approach leaves room for bias, as it overlooks the complex social environment of cities and the nuances of the relationship between crime and the effects of police enforcement. In addition, according to Emily M. Bender, professor of linguistics at the University of Washington, the study ignores crimes such as fraud and environmental crime.
Ishanu Chattopadhyay, Assistant Professor of Medicine at the University of Chicago and senior author of the study, says: “It is hard to argue that bias isn’t there when people sit down and determine which patterns they will look at to predict crime, because these patterns, by themselves, don’t mean anything. But now, you can ask the algorithm complex questions like: ‘What happens to the rate of violent crime if property crimes go up?’”
Since AI can make mistakes and can be hacked, we at Adversa believe that AI algorithms should be deployed with great care in such vital areas.
TechTarget, July 6, 2022
A research paper by Giovanni Apruzzese, Rodion Vladimirov, Aliya Tastemirova, and Pavel Laskov, “Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples,” published on July 4, raises questions about the security measures in place in 5G networks. The team of academic researchers has unveiled an attack method that could potentially threaten 5G networks, and defending against it requires new ways to protect against adversarial machine learning attacks.
The researchers, from the University of Liechtenstein, show that even against advanced security defenses, and with zero insider knowledge on the part of the attacker, a surprisingly simple network-interference strategy can disrupt traffic on next-generation networks. As the team states, the key to the attack is an adversarial machine learning (ML) technique that does not rely on any prior knowledge of the target network.
As 5G networks evolve, existing network packet management methods no longer scale, and many carriers plan to use machine learning models to help sort and prioritize traffic. These models, however, become an attractive attack surface: by confusing them and corrupting their priorities, attackers can change the way traffic is processed. The core idea, the researchers explain, is to make small changes to the training data. Over time, poisoned data packets can change the system’s behavior, interfere with legitimate network traffic and, as a result, slow down or stop data flow.
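To make the poisoning mechanism concrete, here is a minimal sketch, entirely our own illustration rather than the paper’s method: a 1-D nearest-centroid “traffic classifier” over a made-up packet-size feature, where mislabeled training samples gradually drag a class centroid until legitimate traffic is misprioritized. All numbers and class names are hypothetical.

```python
# Toy sketch (our illustration, not the paper's attack): poisoned training
# samples shift an ML traffic classifier's decision boundary.

def centroid(xs):
    """Mean of a list of 1-D feature values."""
    return sum(xs) / len(xs)

def classify(pkt, hi_centroid, lo_centroid):
    """Assign a packet to whichever class centroid is nearer."""
    return "high" if abs(pkt - hi_centroid) < abs(pkt - lo_centroid) else "low"

# Clean training data: high-priority traffic = small packets, low = large.
high_prio = [100, 120, 110, 130]   # centroid 115
low_prio = [900, 950, 1000]        # centroid 950

hi_c, lo_c = centroid(high_prio), centroid(low_prio)
print(classify(250, hi_c, lo_c))   # "high": borderline packet still prioritized

# Attacker slowly injects small packets mislabeled as low-priority.
poisoned_low = low_prio + [140, 150, 160] * 3
lo_c_poisoned = centroid(poisoned_low)          # centroid drops from 950 to 350
print(classify(250, hi_c, lo_c_poisoned))       # "low": same packet deprioritized
```

Real 5G traffic classifiers use far richer features, but the failure mode is the same: a slow drift of the learned decision rule that no single poisoned sample would reveal.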
The Adversa AI Red Team has already seen a number of research papers in this area and can confirm that the threat is becoming a reality, especially in light of wars and civil protests.
FastCompany, July 8, 2022
AI has failed to meet many expectations. Rising skepticism about AI presents us with a choice: we can stand aside and watch as winners emerge, or we can find a way to filter out the noise, identify commercial breakthroughs early, and take advantage of a historic economic opportunity.
Distinguishing near-term reality from science-fiction visions is easier than it seems. It can be done by looking at the most important marker of maturity for any technology: its ability to handle unexpected events, known as edge cases. As a technology matures, it becomes more adept at handling edge cases and gradually unlocks new possibilities.
For AI, the best measure of edge-case reliability is its accuracy on such cases. When AI fails to handle an edge case, it produces a false positive or a false negative. Precision is the metric that penalizes false positives, while recall penalizes false negatives. Today’s AI attains high performance only when it optimizes for one of the two, precision or recall; achieving high performance on both at the same time is where AI models struggle. Solving this problem remains the holy grail of AI.
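The precision/recall trade-off described above is easy to see on a small example. The following sketch computes both metrics for a hypothetical set of binary predictions (the labels are made up for illustration):

```python
# Minimal illustration: precision and recall for a binary classifier.

def precision_recall(y_true, y_pred):
    """Precision penalizes false positives; recall penalizes false negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical run: 4 true positives, 1 false positive, 2 false negatives.
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)
print(p, r)  # 0.8 precision, ~0.667 recall
```

A model can push either number toward 1.0 cheaply (predict positive always for perfect recall, almost never for high precision); raising both together is the hard part.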
Adversa AI Red Team experts agree with this view and emphasize that adversarial examples are yet another class of edge cases. Current AI algorithms can outperform humans on typical inputs, but on unusual cases, and especially on inputs specifically crafted by malicious actors, current AI solutions fail dramatically.
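To show how little it can take to craft such an input, here is a minimal sketch of an adversarial perturbation against a hand-written linear classifier. The weights, features, and class names are all hypothetical; the perturbation step mimics the sign-of-gradient idea behind FGSM-style attacks:

```python
# Minimal sketch (our illustration): an FGSM-style perturbation that flips
# the decision of a toy linear classifier with tiny per-feature changes.

def predict(w, x, b=0.0):
    """Linear score; positive -> class 'benign', negative -> 'malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_like(w, x, eps):
    """Shift each feature by eps against the sign of its weight:
    the direction that lowers the score fastest."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8]          # hypothetical learned weights
x = [1.0, 2.0, 0.5]           # score = 0.5 - 0.6 + 0.4 = 0.3 -> "benign"
x_adv = fgsm_like(w, x, eps=0.3)

print(predict(w, x))      # 0.3: classified benign
print(predict(w, x_adv))  # -0.18: the small perturbation flips the label
```

Against deep models the attacker uses the model’s gradient instead of raw weights, but the principle is identical: each feature moves only slightly, yet the combined shift crosses the decision boundary.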
P.S. For those who want an overview of the attacks on AI that already exist, we also recommend the whitepaper “Practical Attacks on Machine Learning Systems,” which collects notes and research projects by the NCC Group on the security of machine learning (ML) systems. The document details specific practical attacks and common security issues, includes general background on ML, and provides insights into development platforms and processes.
Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
Written by: admin
Adversa AI, Trustworthy AI Research & Advisory