Towards Trusted AI Week 37 – What are the security principles of AI and ML?

Secure AI Weekly, September 13, 2022


Cybersecurity Threats Loom Over Endpoint AI Systems

EETimes Asia, September 6, 2022

IoT systems have reached a high level of maturity, and products now ship with certificates meant to guarantee the protection of intellectual property. Even so, adversarial attacks are already being carried out, and new threats are penetrating zones once considered safe.

Adversarial attacks target the complexity of deep learning models and the statistical mathematics that underlie them. Such attacks are launched with the intent of creating weaknesses that can be exploited in the field, leading to the leakage of parts of the model or its training data, or to unexpected results. Why does this happen? Deep neural networks (DNNs) are a “black box”: the decision-making process inside a DNN is not transparent.
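To make the “unexpected results” case concrete, the sketch below implements the classic Fast Gradient Sign Method (FGSM), one of the simplest evasion attacks. FGSM is our illustrative choice rather than anything named in the article, and the model, labels and epsilon value are assumptions.

```python
# Minimal FGSM sketch (illustrative assumptions: a Keras classifier,
# integer class labels, and a hand-picked epsilon). The attack nudges
# the input in the direction that maximises the model's loss.
import tensorflow as tf

def fgsm_perturb(model, x, y_true, epsilon=0.01):
    """Return an adversarially perturbed copy of x."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)  # track gradients with respect to the input itself
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            y_true, model(x))
    gradient = tape.gradient(loss, x)  # how the loss changes as x changes
    # A small step along the gradient's sign is often imperceptible to a
    # human yet enough to flip the model's prediction.
    return x + epsilon * tf.sign(gradient)
```

Note that nothing here exploits a bug in any particular line of code: the weakness lives in the learned weights themselves, which is exactly the complication described next.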

Adversarial attacks differ from regular cyberattacks in an important way: when a traditional cybersecurity threat arises, security analysts can fix the error in the source code and document it in detail. In a DNN, however, there is no specific line of code where anything could be fixed, and this is one of the main complications.

In the TinyML development pipeline, training is done offline, typically in the cloud; the resulting executable is then written to the MCU and used via API calls. This workflow requires two engineers, a machine learning engineer and an embedded engineer, who as a rule work in different teams. The new security landscape can therefore blur the distribution of responsibility between stakeholders, as the sketch of the hand-off below illustrates.
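Here is a minimal sketch of the cloud side of that hand-off, assuming a TensorFlow/TFLite toolchain (the toy model and the model.tflite file name are illustrative, not from the article): the ML engineer's work ends with a compact model artifact, which the embedded engineer then embeds in firmware and exposes through API calls.

```python
# Cloud side of a hypothetical TinyML pipeline: train offline, then
# convert to a compact, quantized flatbuffer destined for the MCU.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                      # e.g. 4 sensor readings
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. anomaly yes/no
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(...) runs here, in the cloud training environment ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize for the MCU
tflite_bytes = converter.convert()

# The ML engineer's responsibility typically ends at this artifact; the
# embedded engineer compiles it into firmware (e.g. with TensorFlow Lite
# Micro) and invokes it via the runtime's API. The seam between the two
# roles is where the responsibility gap described above opens up.
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```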

Read more about the risks in TinyML development in the article at the link.

Introducing our new machine learning security principles

National Cyber Security Centre, UK

Artificial intelligence and machine learning systems continue to spread relentlessly into all areas of human activity. Naturally, everyone wants to ensure that the deployment of such systems poses a threat neither to their data nor to their personal safety.

Unlike conventional software, where the system's logic and components are explicit and widely understood, the behavior of a machine learning model, where the system extracts information from data on its own, is difficult and sometimes even impossible to understand.

Therefore, machine learning components are usually not subjected to the same level of validation as standard systems, and machine learning vulnerabilities can be overlooked.

The NCSC recognises the enormous benefits that good data science and machine learning can bring to society, not least in cybersecurity itself. To ensure that these benefits are realised safely and securely, the NCSC has developed a set of security principles for artificial intelligence and machine learning.

To learn about these principles in detail, read the article at the link.

VA piloting trustworthy AI checklists for new and existing projects

FedScoop, July 13, 2022

The Department of Veterans Affairs is moving ahead with pilot checklists to ensure the trustworthiness of the artificial intelligence it uses. Several Presidential Innovation Fellows are working with National AI Institute staff to develop field research questions for existing AI projects.

The VA’s work builds on the National AI Research Resource Task Force’s recommendation that funding and personnel be directed towards reliability research and the development of leading practices for working with data and models responsibly. At the same time, to increase transparency, the Pentagon has replaced its three-star governing body with a four-star Chief Digital and AI Office Governing Council.

The National AI Institute previously produced a voluntary checklist for emerging AI researchers. It is being piloted at multiple VA medical centers and reflects the nine ethical principles listed in the December 2020 Trustworthy AI Executive Order.

The planning checklist builds on the work of the VA’s National Center for Ethics in Health Care and the Food and Drug Administration, and helps researchers ensure that training data is free from bias and that the privacy of AI project participants and veterans is protected.


Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and the worst attacks on AI, delivered right to your inbox.
