Towards Trusted AI Week 7 – Standards for AI in Healthcare

Secure AI Weekly, February 23, 2021

Background

The introduction of regulations and standards is one of the most important steps towards safe, secure, and ethical AI.


CTA intros new trustworthiness standard for healthcare AI

Healthcare IT News, February 18, 2021

Artificial intelligence has become widespread in health care, where it can greatly facilitate the diagnosis and management of patients. At the same time, public distrust of smart technologies is understandable, since AI systems are prone to bias.

Before tasks that carry responsibility for patients' lives and health are handed over to artificial intelligence, the question of trust in AI in this area must therefore be resolved.

The Consumer Technology Association (CTA) has recently released a new ANSI-accredited trustworthiness standard for healthcare AI. According to the association, the new standard, known as ANSI/CTA-2090, “The Use of Artificial Intelligence in Health Care: Trustworthiness,” aims to provide a “baseline to determine trustworthy AI solutions in health care.”

The consensus-driven standard was developed with the participation of more than fifty organizations. Its purpose was to define baseline requirements for AI in healthcare and the criteria for trustworthiness. Importantly, trust in AI is considered from the point of view of the end user: the doctor, the patient, the medical community, and so on.

Standards like this are of paramount importance to the medical community, as they play a major role in building trust in AI in this field.

When bad actors have AI tools: Rethinking security tactics

The Enterprisers Project, February 16, 2021

Artificial intelligence has allowed people to hand over to machines tasks that can now be performed without human assistance. However, along with all their benefits, artificial intelligence and machine learning also give malefactors the keys to new attacks. Here is what you need to keep in mind in order to defend against such threats.

Step one is to think like a criminal: what can be done with AI and ML technologies? First, understand what the systems and devices that use smart technologies are capable of. For example, smart cars can recognize road signs, which means that this ability can be interfered with. Attackers can create deceptive deepfakes, or manipulate smart bots by influencing their behavior and answers. They can also tamper with existing machine learning algorithms so that, for instance, an application that sends automatic responses gives out confidential information such as bank card numbers instead of its standard phrases. Roughly speaking, the opportunities are plentiful; you just have to think about what the smart systems around you can do.
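To make the road-sign example concrete, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) evasion attack in PyTorch. The article does not name a specific technique, so the method, the model interface, and the epsilon value here are illustrative assumptions rather than anything from the original piece.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """FGSM: nudge every pixel in the direction that increases the
    classifier's loss the most (illustrative sketch, not the article's code)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With epsilon around 0.03 on inputs scaled to [0, 1], the change is barely visible to a human, yet it often flips the prediction of an undefended classifier.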

Step two is to think ahead. It is already possible to train AI algorithms to withstand attacks, for example by focusing training on threat detection. Model safety can also be assessed at the creation stage: resources are available that help evaluate a model's security during development and make the necessary changes. Finally, do not forget that smart technologies themselves are good allies in the fight against cybercriminals.
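One standard way to "train against attacks," as the article puts it, is adversarial training: mixing attacked copies of each batch into the loss. The sketch below reuses the FGSM step from above; the function name and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One step of FGSM adversarial training: optimise on clean and
    perturbed inputs together so the model learns to resist the attack."""
    # Craft adversarial copies of the batch (same FGSM step as above).
    adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(adv), labels).backward()
    adv = (adv + epsilon * adv.grad.sign()).clamp(0, 1).detach()

    # Standard training step over both the clean and the attacked batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels) + F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Open-source toolkits such as CleverHans and IBM's Adversarial Robustness Toolbox are examples of the kind of model-assessment resources the article alludes to.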

How a Chinese scientist in the US is helping defend AI systems from cyberattacks

The Star, February 15, 2021

Li Bo currently works as an associate professor at the University of Illinois at Urbana-Champaign and specializes in adversarial machine learning, a concept based on pitting AI systems against one another using game theory. Li's main goal is to bring safety and trustworthiness to machine learning technology using human knowledge and logic.

Three years ago, Li Bo was engaged in postdoctoral research at the University of California, Berkeley. At that time, she and her team came up with a method of fooling autonomous vehicles. In the course of the study, the team was able to apply new patterns to road signs that altered how a machine recognized them while remaining unchanged to human eyes. According to Li herself, the experiment showed people how important the security of smart technologies really is.
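The study applied physical perturbations to real signs; purely as a digital illustration of the same idea, here is a rough sketch of a targeted adversarial-patch attack. The patch location, size, and training loop are assumptions for illustration, not the Berkeley team's actual method.

```python
import torch
import torch.nn.functional as F

def optimize_patch(model, images, target, patch_size=32, steps=200, lr=0.05):
    """Learn a small patch that, pasted onto a fixed corner of every image,
    pushes the classifier toward the chosen target class (illustrative only)."""
    for p in model.parameters():   # freeze the classifier; only the patch is trained
        p.requires_grad_(False)
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = images.clone()
        # Paste the (clamped) patch onto the same region of each image.
        patched[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
        loss = F.cross_entropy(model(patched), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.clamp(0, 1).detach()
```

Unlike a pixel-level perturbation, a patch is visible but confined to one region, which is what makes such attacks printable and physically realizable.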

“Right now, AI is facing a bottleneck because it is based on statistics. It will be smarter if it uses logical thinking like humans to predict and learn if it is under attack,” explained Li Bo. 
