Knowledge about artificial intelligence and its security needs to be constantly improved
NIST, July 29, 2021
The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) is soliciting information from the public to help develop AI risk management guidelines, a first step toward managing the risks created by artificial intelligence. Responses to the Request for Information (RFI), which appeared in the Federal Register, will be used by NIST to develop an artificial intelligence risk management framework (AI RMF).
“The AI Risk Management Framework will meet a major need in advancing trustworthy approaches to AI to serve all people in responsible, equitable and beneficial ways,” commented Lynne Parker, director of the National AI Initiative Office in the White House Office of Science and Technology Policy. “AI researchers and developers need and want to consider risks before, during and after the development of AI technologies, and this framework will inform and guide their efforts.”
Ars Technica, July 10, 2021
Cheating in online games is not uncommon. Fighting cheaters often depends on technology that ensures the wider system running the game has not been compromised. In many first-person shooters, however, players can easily bypass all of these systems: with a capture card, an "input emulation" device, and machine-learning-based computer vision software running on a separate computer, players manage to bypass the manufacturer's anti-cheating protections.
Alas, this situation once again underlines the fact that artificial intelligence technology is not always used for good. It is becoming clear that external computer vision is turning into an advanced tool in the endless struggle between cheaters and those who are trying to stop them.
Forbes, July 29, 2021
Incorporating the security of artificial intelligence systems into security threat modeling is an important part of any company's security work today, even if the organization is not actively seeking to incorporate AI into its workflows. According to Gartner, by 2022 almost 30% of all cyberattacks on AI will involve training-data poisoning, model theft, or adversarial samples, so focusing on AI security is more relevant today than ever.
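To make the first of those attack classes concrete, here is a toy sketch of training-data poisoning (entirely hypothetical data and a deliberately simple nearest-class-mean classifier, not any real attack from the article): flipping the label of a single boundary point shifts the learned class means enough to change how nearby inputs are classified.

```python
# Toy illustration of training-data poisoning with a 1-D nearest-class-mean classifier.

def class_means(data):
    """Return the mean feature value for each label in (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(means, x):
    """Assign x to the class whose mean is closest."""
    return min(means, key=lambda y: abs(x - means[y]))

clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
# The attacker flips the label of one boundary point: (8.0, 1) becomes (8.0, 0).
poisoned = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 0), (9.0, 1), (10.0, 1)]

clean_model = class_means(clean)      # means: {0: 1.0, 1: 9.0}
poisoned_model = class_means(poisoned)  # means: {0: 2.75, 1: 9.5}

print(predict(clean_model, 5.5))     # 1: correctly closer to class 1's mean
print(predict(poisoned_model, 5.5))  # 0: the poisoned model misclassifies it
```

One flipped label out of six training points is enough here because the class mean is sensitive to every sample, which is exactly why poisoning defenses focus on vetting and filtering training data.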
This article discusses the main methods of attack on artificial intelligence, including attacks on the model itself and attacks on the data the model uses, as well as ways to prevent them. To strengthen the security of your AI system, the author recommends improving the robustness of your AI models, using API-based data parsing and filtering solutions, and encrypting models. In addition, it is important to secure model containers and to verify the hashes of models.
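A minimal sketch of the last recommendation, verifying model hashes before loading, using only the Python standard library (the file and its contents here are stand-ins for a real serialized model artifact):

```python
import hashlib
import tempfile

def sha256_of_file(path, chunk_size=65536):
    """Stream a file through SHA-256 so large model files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, expected_hash):
    """Raise if the on-disk model bytes do not match the recorded hash."""
    actual = sha256_of_file(path)
    if actual != expected_hash:
        raise ValueError(f"model hash mismatch: expected {expected_hash}, got {actual}")
    return True

# Demo with a throwaway file standing in for a serialized model.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"fake model weights")
    model_path = f.name

known_good_hash = sha256_of_file(model_path)  # would be recorded at export time
verify_model(model_path, known_good_hash)     # passes: the file is untouched
```

In practice the known-good hash would be recorded when the model is exported and checked again each time the model container loads the file, so any tampering in between raises an error instead of silently serving a modified model.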