AI risk management for biometrics



AI benefits for biometrics

Biometrics is an industry that relies heavily on AI. It can be considered part of the surveillance industry, where it serves as a method of collecting data for public safety and for identifying people who are under criminal investigation. Biometrics can also be discussed on its own as the measurement of human body characteristics, which is widely used for identity management and access control.

In both cases, biometrics involves collecting distinctive, measurable body features that belong to a specific person. Such characteristics can include a three-dimensional photograph of a face or body, an image of the iris or cornea of an eye, a vein pattern, a voice sample, fingerprints, blood type, and so on. Artificial intelligence technologies help process and analyze this information: they recognize people whose characteristics are already stored in a database, and they can also collect data on people who have not previously been entered into it.
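As an illustration, here is a minimal sketch of how such a system might match a freshly captured biometric sample against a database of enrolled templates. The embedding dimensionality, the similarity measure, the threshold, and the toy database contents are all assumptions made for the example, not a description of any particular product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric embeddings (e.g. face descriptors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.7):
    """Return the enrolled identity whose template best matches the probe,
    or None if no candidate clears the decision threshold."""
    best_id, best_score = None, -1.0
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy example: 128-dimensional embeddings, as a hypothetical face model might produce
rng = np.random.default_rng(0)
db = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = db["alice"] + rng.normal(scale=0.05, size=128)  # a new, slightly noisy capture of "alice"
print(identify(probe, db))
```

The same template-matching logic applies whether the embeddings come from faces, voices, or fingerprints; only the front-end model changes.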


The flip side: AI risks

Despite the obvious advantages of collecting such information, biometric data is potentially vulnerable to fraud. As with surveillance cameras, AI-based systems that process biometric data can be fooled by a third party.


Vulnerabilities of face recognition

Facial recognition systems can mistake an attacker for a legitimate user when the adversary wears adversarial glasses, masks, bandages, or patches. Bias is another well-known problem in facial recognition systems and may lead to fraud, wrongful prosecutions, or other harmful outcomes.
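To make the threat concrete, below is a simplified sketch of the gradient-based evasion idea that underlies adversarial patches and glasses. The model, the input tensor, and the epsilon value are placeholders for this example; real physical attacks are more constrained, but the principle of nudging pixels to maximize the classifier's error is the same.

```python
import torch

def fgsm_evasion(model, image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method: craft a small perturbation that pushes a
    face classifier away from the correct identity. `model` is any
    differentiable classifier returning logits; `image` is a tensor in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    loss = torch.nn.functional.cross_entropy(logits, true_label.unsqueeze(0))
    loss.backward()
    # Step in the direction that maximizes the loss, then clamp to a valid pixel range
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Even with a small epsilon, the perturbed image can look unchanged to a person while the model's prediction flips.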


Issues of speech recognition

The misuse of voice biometrics grows with their popularity. Attacks against speech recognition can be launched through malicious audio modification, such as inserting patterns of white noise into a recording. They can lead to voice impersonation, fraud, or the publication of fake voice-based content.
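As a simplified illustration of how little modification such an attack may need, the sketch below overlays a quiet noise pattern on a waveform and checks whether a speaker-verification score crosses the accept threshold. The `verify` function, the threshold, and the sample length are hypothetical stand-ins for a real system.

```python
import numpy as np

def add_noise_pattern(waveform: np.ndarray, pattern: np.ndarray, amplitude: float = 0.005) -> np.ndarray:
    """Overlay a low-amplitude, repeating noise pattern on the audio signal.
    At this amplitude the change is barely audible to a listener."""
    tiled = np.resize(pattern, waveform.shape)
    return np.clip(waveform + amplitude * tiled, -1.0, 1.0)

def find_adversarial_noise(waveform, target_template, verify, threshold=0.8, trials=100):
    """Try random noise patterns until the (hypothetical) speaker-verification
    score for the target identity crosses the accept threshold."""
    rng = np.random.default_rng(42)
    for _ in range(trials):
        candidate = add_noise_pattern(waveform, rng.standard_normal(16000))
        if verify(candidate, target_template) >= threshold:
            return candidate
    return None
```

Real attacks use gradient information rather than random search, but the end result is the same: audio that sounds normal to humans and is misjudged by the model.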


Incorrect face analytics

Recent advances in automated face analysis have also attracted adversaries. Malefactors can introduce imperceptible changes to photos so that the detection of emotion, ethnicity, and gender is manipulated.


AI Incidents

For example, researchers have already managed to create generative adversarial networks (GANs) that produce fake fingerprints which look convincing to the human eye and can fool recognition systems as well. Such prints do not have to match a full fingerprint: many common fingerprint systems match only part of a print, which makes the attack considerably easier.
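The following sketch shows, in deliberately simplified form, why partial matching lowers the bar for such "master print" attacks: if the sensor compares only a small patch of minutiae, a synthetic print needs to collide with just one enrolled patch rather than with a whole fingerprint. The matching rule, thresholds, and toy data are illustrative assumptions, not a real vendor algorithm.

```python
import numpy as np

def patch_match_score(probe_patch: set, enrolled_patch: set) -> float:
    """Fraction of the enrolled patch's minutiae points also present in the probe patch."""
    if not enrolled_patch:
        return 0.0
    return len(probe_patch & enrolled_patch) / len(enrolled_patch)

def partial_matcher_accepts(probe_patch: set, enrolled_patches: list, threshold: float = 0.1) -> bool:
    """A partial matcher accepts if the probe matches ANY stored patch well enough.
    The more patches are enrolled, the more chances a synthetic print gets."""
    return any(patch_match_score(probe_patch, p) >= threshold for p in enrolled_patches)

# Toy experiment: one random "master print" patch tried against many enrolled users
rng = np.random.default_rng(1)
universe = np.arange(1000)                      # quantized minutiae positions
fake = set(rng.choice(universe, 40, replace=False))
users = [[set(rng.choice(universe, 40, replace=False)) for _ in range(8)]  # 8 patches per user
         for _ in range(500)]
hits = sum(partial_matcher_accepts(fake, patches) for patches in users)
print(f"fake print accepted for {hits} of {len(users)} users at a lax threshold")
```

The point of the toy experiment is only that accepting a match on any small patch multiplies an attacker's chances compared with full-print matching.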

Other AI-based systems that handle different types of biometrics can be spoofed in the same way. In addition, as with face recognition cameras, some attack methods aim to prevent the system from correctly recognizing a person's physical data, so that it mistakes one person for another.

Recently, the AI Red Team developed a new attack on AI-driven facial recognition systems: it can change a photo in such a way that an AI system will recognize the wrong person.


How we can help with AI risk management 

Our team of security professionals has deep knowledge of and considerable skill in cybersecurity and in the AI algorithms and models that underlie biometric systems. Your algorithms can be tested against the most critical AI vulnerability categories, including Evasion, Poisoning, Inference, Trojans, and Backdoors.

We offer solutions across the Awareness, Assessment, and Assurance areas to provide 360-degree, end-to-end visibility into the AI threat landscape.

  • Secure AI Awareness demonstrates AI risks and helps shape an AI governance strategy. It consists of Policy Checkup, AI Risks Training, and Threat Intelligence for informed decisions;
  • Secure AI Assessment helps perform AI integrity validation and identify AI vulnerabilities through Threat Modeling, Vulnerability Audit, and automated AI Red Teaming;
  • Secure AI Assurance helps remediate AI risks and implement a lifecycle for AI integrity. It consists of Security Evaluation, Risk Mitigation, and Attack Detection.


Drop us a line!

Do you have doubts about the security of biometrics, or worries about the trustworthiness of the industry and the reputation of your business? Write to us!