Vulnerabilities of face recognition
Facial recognition systems can mistake an attacker for a legitimate user when the adversary wears adversarial glasses, masks, bandages, or patches. These weaknesses are well known, and together with bias in the underlying models they can lead to fraud, wrongful prosecutions, and other damaging incidents.
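To illustrate the idea, a patch-style evasion attack can be expressed in a few lines. The snippet below is a minimal sketch, assuming a differentiable face classifier `model`, a normalized face image tensor, and a glasses-shaped binary `mask`; the names and parameters are placeholders rather than any specific product.

```python
# Minimal sketch of a patch-style evasion attack on a face classifier.
# `model`, the glasses-shaped `mask`, and `target_class` are placeholders.
import torch
import torch.nn.functional as F

def adversarial_patch(model, image, mask, target_class, steps=200, lr=0.05):
    """Optimize pixels only inside `mask` (e.g. a glasses-shaped region of a
    3xHxW image in [0, 1]) so that `model` assigns the face to `target_class`."""
    patch = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(image * (1 - mask) + patch * mask, 0.0, 1.0)
        logits = model(adv.unsqueeze(0))                      # batch of one
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(image * (1 - mask) + patch.detach() * mask, 0.0, 1.0)
```

Because the perturbation is confined to the mask region, it can be printed onto a physical accessory such as glasses frames, which is what makes this class of attack practical outside the lab.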
Issues with speech recognition
The misuse of voice biometrics grows with their popularity. Attacks against speech recognition can be launched by maliciously modifying audio, for example by embedding crafted noise patterns that are barely audible to humans but change the model's output. Such attacks can lead to voice impersonation, fraud, or the publication of fake voice-based content.
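As a rough illustration, an evasion attack on audio can be as simple as one gradient step. The sketch below assumes a differentiable speech-command or speaker classifier `model` that maps a waveform tensor to class logits; the function name and the noise bound are illustrative assumptions.

```python
# Minimal sketch of an audio evasion attack: add a barely audible noise pattern
# so a speech model changes its output. `model` is an assumed differentiable
# classifier that maps a 1-D waveform in [-1, 1] to class logits.
import torch
import torch.nn.functional as F

def perturb_audio(model, waveform, true_label, epsilon=0.002):
    """One FGSM-style step: the perturbation is bounded by `epsilon`, so it
    sounds like faint noise while pushing the prediction away from `true_label`."""
    waveform = waveform.clone().detach().requires_grad_(True)
    logits = model(waveform.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    noise = epsilon * waveform.grad.sign()   # the crafted "white noise" pattern
    return torch.clamp(waveform + noise, -1.0, 1.0).detach()
```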
Incorrect face analytics
Recent advances in automated face analysis have also attracted adversaries. By applying imperceptible changes to a photo, an attacker can manipulate the predicted emotion, ethnicity, or gender.
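The underlying trick is again an adversarial perturbation, this time bounded so tightly that it stays invisible to a human viewer. The sketch below assumes a differentiable face-attribute classifier `attribute_model`; the bound and step sizes are illustrative.

```python
# Minimal sketch of an imperceptible attack on a face-attribute model (emotion,
# gender, ...). `attribute_model` is an assumed differentiable classifier; pixel
# changes are kept within `epsilon` of the original image so they stay invisible.
import torch
import torch.nn.functional as F

def imperceptible_attack(attribute_model, image, target_attribute,
                         epsilon=4 / 255, steps=40, step_size=1 / 255):
    """Projected gradient descent toward `target_attribute` under an L-infinity
    bound of `epsilon` on a 3xHxW image in [0, 1]."""
    original = image.clone().detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = attribute_model(adv.unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([target_attribute]))
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() - step_size * grad.sign()              # toward the target
        adv = original + torch.clamp(adv - original, -epsilon, epsilon)
        adv = torch.clamp(adv, 0.0, 1.0)                          # stay a valid image
    return adv
```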
AI Incidents
AI-based systems that process biometric data can be fooled by third parties. For example, researchers have already built generative adversarial networks (GANs) that produce fake fingerprints convincing both to the human eye and to automated matchers. These prints do not need to reproduce a full fingerprint: many common fingerprint systems match only a partial print, which makes the attack considerably easier.
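To give a sense of the moving parts, the sketch below shows only the generator half of such a GAN, with an illustrative DCGAN-style architecture; a real attack would also train a discriminator on a dataset of partial fingerprints and keep the generated prints that match the largest number of enrolled users.

```python
# Minimal sketch of the generator half of a GAN of the kind used to synthesize
# fake partial fingerprints. The architecture and sizes are illustrative
# assumptions; training code and a discriminator are omitted.
import torch
import torch.nn as nn

class FingerprintGenerator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map, then upsample to a 32x32 image
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),   # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),     # 16x16
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),                            # 32x32 grayscale
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Sampling a batch of candidate prints from an (untrained) generator:
generator = FingerprintGenerator()
fake_prints = generator(torch.randn(16, 100))   # 16 images, 1x32x32 each
```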
Other AI-based systems that handle different types of biometrics can be spoofed in the same way. In addition, as with face recognition cameras, some attacks aim to prevent the system from recognizing a person at all, or to make it mistake that person for someone else.
Recently, the AI Red Team developed a new attack on AI-driven facial recognition systems that alters a photo so subtly that the system misidentifies the person in it.
How we can help with AI risk management
Our team of security professionals has deep knowledge and considerable skill in cybersecurity and in the AI algorithms and models that underlie content moderation and similar systems. Your algorithms can be tested against the most critical AI vulnerability categories, including Evasion, Poisoning, Inference, Trojans, and Backdoors.
We offer solutions across the Awareness, Assessment, and Assurance areas to provide 360-degree, end-to-end visibility into the AI threat landscape.
- Secure AI Awareness demonstrates AI risks and shapes AI governance strategy. It consists of Policy Checkup, AI Risks Training, and Threat Intelligence for informed decisions;
- Secure AI Assessment helps to perform AI integrity validation and identify AI vulnerabilities through Threat Modeling, Vulnerability Audit, and automated AI Red Teaming;
- Secure AI Assurance helps to remediate AI risks and implement a lifecycle for AI integrity. It consists of Security Evaluation, Risk Mitigation, and Attack Detection.