How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far
Google Cloud, September 19, 2022
In the latest Google Cloud Security Podcast episode, Alex Polyakov was interviewed by Anton Chuvakin and Timothy Peacock, renowned cybersecurity experts from Google, about current topics in assessing and defending AI.
Adversa has analyzed more than 2,000 papers on attacks against AI. Drawing on this global research, Alex shared the main insights, explained various scenarios for hacking AI systems, described real-life incidents and the threats we will face in the coming years, and discussed the danger they pose to humans.
Alex clearly explained how protecting AI differs from protecting traditional enterprise applications, why traditional protection methods do not apply to AI systems, at what stage the security of AI systems should be audited in order to get closer to trustworthy AI, and who should be responsible for the security of AI systems.
Listen to this fantastic podcast via the link.
Artist finds private medical record photos in popular AI training data set
ArsTechnica, September 21, 2022
Another incident came to light last week. Lapin, a California artist, discovered her private medical photos via the website "Have I Been Trained", an application that lets artists check whether the LAION-5B dataset contains their work. Imagine her surprise when, after uploading a recent photo of herself, she found two medical photos of her face, taken before and after surgery in 2013, that had been authorized for her doctor's use only.
A review of the records and photos provided by Lapin confirmed that the LAION dataset does indeed contain medical images of her face; moreover, several thousand similar medical photos of other people were found in the dataset. How many of them share the same dubious legal and ethical status? There is speculation that many of these medical photos have already found their way into image synthesis models offered as commercial services by companies such as Midjourney and Stability AI.
LAION is a non-profit organization with members around the world, and its data is widely used by AI researchers to train image synthesis models such as Stable Diffusion and Google Imagen. At the same time, LAION is a set of URLs pointing to images on the Internet; LAION does not host the images themselves. Who, then, is responsible for the privacy of data and images scraped from the Internet and subsequently included in the datasets on which AI systems are trained?
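To make that structure concrete, here is a minimal Python sketch, assuming pandas is installed and a LAION metadata shard has been downloaded locally; the filename, column name, and domain below are illustrative placeholders, not a guaranteed schema. It shows that each row is just a URL plus a caption, and how a site owner might check whether the index references their domain.

import pandas as pd

# Hypothetical local shard of LAION-5B metadata. LAION distributes the dataset
# as tables of (URL, caption, ...) rows; the images themselves are never hosted.
shard = pd.read_parquet("laion5b-metadata-part-00000.parquet")  # illustrative filename

print(shard.columns.tolist())  # typically a URL column and a caption column, among others

# Check whether any indexed URL points at a domain you control
# ("example-clinic.com" is a placeholder):
url_col = "URL"  # assumed column name
mine = shard[shard[url_col].str.contains("example-clinic.com", na=False)]
print(f"{len(mine)} of {len(shard)} rows reference this domain")

Services like "Have I Been Trained" offer a richer version of this lookup, searching the same index by image similarity rather than by URL.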
Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices
The Conversation, September 20, 2022
Deepfakes of spoken audio are around the corner, and attacks using recorded audio have already happened. Deepfakes, made feasible by recent advances in machine learning, have brought with them a whole new level of uncertainty, especially around digital media.
Audio deepfakes are especially dangerous because people so often communicate verbally, through voice messages or calls, which significantly expands attackers' options. And while video deepfakes can be detected by analyzing visual artifacts, that approach does not carry over to audio. Researchers at the University of Florida have instead developed a method that measures the acoustic and fluid-dynamic differences between voice samples produced organically by human speakers and those constructed synthetically by machine learning systems. The key idea is to estimate the vocal tract that would have had to produce a given sound: deepfaked audio often implies vocal-tract shapes and sizes that are anatomically impossible for a human speaker.
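The researchers' exact pipeline is not public as code, but one standard building block for reasoning about the physical vocal tract behind a recording is formant estimation via linear predictive coding (LPC), which models the tract as an all-pole filter and recovers its resonance frequencies. The Python sketch below illustrates that general idea only; it is not the University of Florida method, librosa and scipy are assumed to be installed, and the synthetic "vowel", filter parameters, and thresholds are invented for the example.

import numpy as np
import scipy.signal as sig
import librosa  # assumed available; provides an LPC routine

SR = 16000  # sample rate, Hz

def estimate_formants(frame, sr=SR, order=12):
    """Estimate vocal-tract resonances (formants) of one speech frame via LPC.

    LPC fits an all-pole filter to the frame; complex roots of the LPC
    polynomial near the unit circle correspond to resonance frequencies.
    """
    a = librosa.lpc(frame * np.hamming(len(frame)), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]  # one root per conjugate pair
    freqs = sorted(float(np.angle(r)) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90.0]  # drop near-DC artifacts

def resonator(f0, bw, sr=SR):
    """Second-order IIR filter with a resonance at f0 Hz and bandwidth bw Hz."""
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * f0 / sr
    return [1.0], [1.0, -2 * r * np.cos(theta), r * r]

# Toy input: a 100 Hz glottal pulse train shaped by two resonators, a crude
# stand-in for a vowel with formants planted near 700 Hz and 1200 Hz.
excitation = np.zeros(SR)
excitation[::160] = 1.0
b1, a1 = resonator(700.0, 80.0)
b2, a2 = resonator(1200.0, 90.0)
vowel = sig.lfilter(b2, a2, sig.lfilter(b1, a1, excitation))

print(estimate_formants(vowel[4000:4000 + 1024])[:4])
# The first estimates should land near the planted resonances (~700 and ~1200 Hz).

A detector built in this spirit would go further, checking whether the estimated resonances, and the tract geometry they imply, are physically producible by a human speaker at all.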
Read more about the study and the relationship between deepfake audio recordings and the anatomical limitations of the human larynx in the full article via the link.
Subscribe for updates
Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.