AI risk management for Identity verification (KYC)



AI benefits of the KYC industry

KYC stands for “know your customer”, and it  is the term for banking and exchange regulation for financial institutions and bookmakers, as well as other companies working with private money. To achieve these goals, artificial intelligence systems are widely used as they help to perform overall monitoring and fight illegal transactions. AI-based KYC tools can  perform document verification in a few seconds to check the identity and address of a person.


The flip side: AI risks

Despite the fact that KYC and AML systems are aimed at reducing various financial misconceptions, attacks on related devices are far from uncommon. The most frequent goal of an attacker of such smart systems is to try to force the system to recognize one person for another, as well as to provide incorrect information using attacks on these systems, it is possible to hide illegal transactions, data about a business or its owners.

In addition, such systems can give an attacker access to a huge amount of financial data about the people and organizations with which the smart system works.


Inaccurate identity verification

There can be imperfections in the process of verifying the identity in the distance in a digital and online way. Attackers cannot but exploit these flaws. As a result, facial recognition systems can misidentify and approve a malefactor instead of a real client. If victims deal with targeted impersonation or identity fraud, they may suffer huge losses.


Fake document verification

In case of document manipulation automated integrity checks can approve a fake ID. This allows uploading fake documents, hiding the real identity or impersonating another person. Banks can therefore support money laundering unintentionally, immigration or tax evasion due to forged document verification.


Fake voice authentication

Voice authentication may be broken, this is no longer news. Adversaries can create life-like synthesized voices or modify the existing voice to bypass voice authentication methods. Consequently,  the speech recognition system will approve a fake voice.


Fraud detection failure

Anti-fraud systems can be tricked as well. Attackers make these systems misclassify malicious activity as benign. This can be done in case of stealth evasion attacks and may inevitably breed fraudulent actions.


Insider compliance

Compliance monitoring system can miss suspicious emails and messages if malicious actors apply advanced techniques of evasive text modifications.


LLMs and AI Chatbots

AI language models and chatbots such as GPT, Claude are vulnerable to various attacks: prompt injections, jailbreaks, data stealing, adversarial examples, and other safety bypass techniques.


AI incidents

Bypassing identity, document verification or voice authentication is not new. In 2018, researchers demonstrated a proof-of-concept using Siri and Microsoft’s Speaker Recognition API, synthesized the target’s voice and were able to successfully authenticate with a Windows service.It was a raspy sounding voice with all the necessary to trick authentication.


How we can help with AI risk management

Our team of security professionals has deep knowledge and considerable skills in cyber security, AI algorithms, and models that underlie any content moderation system. Your algorithms can be tested against the most critical AI vulnerability categories that include Evasion, Poisoning, Inference, Trojans, Backdoors, and others.

We offer Solutions for  Awareness, Assessment, and Assurance areas to provide 360-degree end-to-end visibility on the AI threat landscape. 

  • Secure AI Awareness to demonstrate AI risks and shape AI governance strategy. It consists of Policy Checkup, AI Risks Training Threat Intelligence for informed decisions;
  • Secure AI Assessment helps to perform AI integrity validation and identify AI vulnerabilities through Threat Modeling, Vulnerability Audit, and automated AI Red Teaming;
  • Secure AI Assurance helps to remediate AI risks and implement a lifecycle for AI integrity. It consists of Security Evaluation, Risk Mitigation, and Attack Detection.