Towards Trusted AI Week 25 – How Tech Companies Run AI Red Teaming
Secure AI Weekly, June 28, 2022
Machine learning models are not as secure and safe as they might seem at first glance
Microsoft, June 21, 2022
AI is increasingly seeping into all areas of human life, so it is essential to ensure that AI systems and the way they are built can be trusted. Microsoft has published its latest Responsible AI Standard in response to the growing need for practical guidance on building AI systems. This is not the company's first such effort; we have already covered multiple of its initiatives in our digests. Clearly, industry leaders take Trusted AI seriously.
The Microsoft Responsible AI Standard is a significant step towards the development of better and more reliable AI. In the document, Microsoft shares new information and plans, seeking feedback from others and contributing to the discussion on AI best practices. It outlines how to build AI systems that uphold these values and, as a result, deserve trust, and it gives Microsoft teams specific, practical advice that goes beyond the general principles that have so far prevailed in the field of AI.
There is a widespread and rich global dialogue on how to create principles and actionable norms that organizations can follow to develop and deploy AI responsibly. This discussion is valuable and drives Microsoft to keep contributing to it. New guardrails for AI are required to achieve a fairer future, and Microsoft's Responsible AI Standard serves this mission.
Industry, academia, civil society, and government need to join forces, share their expertise, and advance this technology together. We have to answer open-ended research questions, bridge measurement gaps, and develop new methods, models, resources, and tools through concerted effort.
Read more about the Standard via the link
LAProgressive, June 25, 2022
A growing number of research papers on attacks against AI, and defenses for it, are being published. This time, the story is about a team of researchers at UC Riverside's Bourns College of Engineering working to prevent attacks on computer vision systems. To do so, the experts first have to figure out which attacks succeed: understanding how to launch successful attacks helps them develop more effective defenses against those attacks.
Advances in computer vision and machine learning have made it possible to perform complex tasks with little or no human intervention. A huge number of applications and robots, ranging from self-driving cars to product fabrication, make important decisions based on visual information, and cities increasingly trust automated technology for public safety and infrastructure maintenance.
Nonetheless, there is a huge difference between human vision and computer vision. Computers have a kind of tunnel vision that leaves them defenseless against potentially catastrophic attacks: they get stuck on tiny deviations from the data they expect. The human mind, by contrast, can filter out all kinds of unusual or extraneous visual information when making decisions. Why? The brain is complex: it can process past experience and piles of data simultaneously, making instantaneous decisions that fit the challenge. Computers rely on mathematical algorithms trained on specific datasets, so their judgments are limited by technology, mathematics, and human foresight.
If attackers change the way a computer "sees" an object, or the object itself, or any aspect of the software behind the machine vision system, the result can be disastrous for individuals, cities, or companies. Certain attacks can manipulate the decisions a computer makes about what it recognizes, as the sketch after this paragraph illustrates.
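The article itself contains no code, but the classic Fast Gradient Sign Method (FGSM) is a simple way to see this class of attack in action. The following is a minimal PyTorch sketch, not the UC Riverside team's technique; the model, image, label, and epsilon arguments are assumed, hypothetical inputs.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel by at most
    `epsilon` in the direction that most increases the model's loss.
    (Illustrative sketch; `model`, `image`, and `label` are assumed
    to be a classifier, a batched image tensor, and class indices.)"""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The change is nearly invisible to a person, yet it is often
    # enough to flip the classifier's decision.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Feeding `fgsm_perturb(model, image, label)` back into the model frequently yields a different prediction than the original image, even though the two look identical to a human; that gap is exactly the "tunnel vision" the researchers study.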
Roy-Chowdhury, the principal investigator on the DARPA AI Explorations program, said: “People would want to do these attacks because there are lots of places where machines are interpreting data to make decisions… It might be in the interest of an adversary to manipulate the data on which the machine is making a decision. How does an adversary attack a data stream so the decisions are wrong?”
Read more about the computer vision research via the link
We certainly agree. If you want to see more examples of how exactly hackers can fool AI, you can read about the best methods in our digest.
Forbes, June 15, 2022
Adversa AI Co-founder and CEO Alex Polyakov has published a new article on a vital but underestimated problem: AI algorithms already pose real threats to people's psychological well-being and can lead to trauma, depression, jail, or even suicide.
We all know that AI errors can also cause significant financial losses and enable manipulation of the financial market, for instance by creating fake content that is then analyzed by sentiment-based stock prediction systems.
But this is not the worst-case scenario. Sometimes algorithms damage people directly. Research and real-life cases demonstrate that AI algorithms have been responsible for family breakdown, depression, trauma, and imprisonment, or, worse, murder or suicide.
Is there anything we can do to minimize the harm from AI and prevent another AI winter? We should understand that we are building a new kind of creature, one that may hold superior power. There is no doubt that it can facilitate routine procedures and optimize human resources; however, if we don't train it correctly from the outset, it can make things worse.
To create trustworthy AI, we should create a learning environment where AI is "taught" to be safe, private, secure, unbiased, and responsible; one common way such teaching looks in practice is sketched below.
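As an illustration only, one established technique for teaching a model to resist manipulation is adversarial training: mixing attacked inputs into every training step. This minimal sketch reuses the hypothetical fgsm_perturb helper from the earlier block along with standard PyTorch objects; it is a generic example, not Adversa AI's methodology.

```python
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One optimization step on a mix of clean and FGSM-perturbed
    inputs, so the model learns the right answer for both.
    (Generic sketch; reuses the hypothetical fgsm_perturb above.)"""
    # Craft attacked copies of the batch; the stray gradients from
    # this forward/backward pass are cleared below before the update.
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(images), labels)
    robust_loss = F.cross_entropy(model(adv_images), labels)
    (clean_loss + robust_loss).backward()
    optimizer.step()
    return clean_loss.item(), robust_loss.item()
```

Robustness is only one dimension of the list above; privacy, fairness, and accountability call for their own training-time and governance measures.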
Read the detailed article by our CEO, Alex Polyakov, with real-life cases, via the link.
Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
Written by: admin
Adversa AI, Trustworthy AI Research & Advisory