Improving KYC

Know Your Customer (KYC) is essential to the modern anti-money laundering framework. KYC guidelines are designed to prevent banks and other regulated companies from being used for illicit activities, and they serve as a starting point for other anti-money laundering measures. Yet even though most people understand the rationale behind KYC, it is often one of the worst customer experiences in finance.

A poorly automated KYC process can take several days and may involve office appointments and additional paperwork. Compliance officers' decisions are subject to individual racial, gender, or other biases. Some customers consider KYC a violation of privacy, while others are reasonably concerned about data safety. Finally, there are conspiracy theorists and crypto-anarchists afraid of mass surveillance. People also object to applying KYC to emerging financial instruments like cryptocurrencies, as if these instruments were exempt from regulation (which is not the case).

Modern KYC isn’t that bad, but it’s not great either. Here, we will consider what can be done to make KYC faster, less error-prone, and more comfortable for users in the future.

One-step KYC solution with eID.

Electronic identification (eID) cards store basic personal identification information, sometimes biometric data (such as photos for automated visual identification or fingerprint data), and special credentials that allow reliable authentication with corporate or government services. An eID card with an integrated NFC chip can be read by a smartphone or a specialized reader, allowing for in-person or remote identification.
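
To make the one-step idea concrete, here is a minimal sketch in Python of how a KYC decision could be derived from chip data alone. All names here (`EidRecord`, `one_step_kyc`, the stubbed signature and face checks) are hypothetical illustrations, not a description of any real eID standard or of our product.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of the identity record an eID chip could expose.
# Real electronic documents expose signed data groups read over NFC.
@dataclass
class EidRecord:
    full_name: str
    date_of_birth: date
    document_number: str
    photo_jpeg: bytes          # reference photo for face matching
    issuer_signature: bytes    # issuer's signature over the chip data

def verify_issuer_signature(record: EidRecord) -> bool:
    # Placeholder: a real implementation validates the issuing authority's
    # certificate chain and the signature over the chip's data groups.
    return bool(record.issuer_signature)

def faces_match(reference_jpeg: bytes, selfie_jpeg: bytes) -> bool:
    # Placeholder for a biometric face-matching / liveness model.
    return bool(reference_jpeg) and bool(selfie_jpeg)

def one_step_kyc(record: EidRecord, selfie_jpeg: bytes) -> bool:
    """Sketch of a one-step KYC decision based only on chip data and a selfie."""
    if not verify_issuer_signature(record):
        return False               # chip data is fake or has been tampered with
    return faces_match(record.photo_jpeg, selfie_jpeg)
```

The point of the sketch is the shape of the flow: if the chip's signature and the biometric match can be verified remotely, no office visit or manual document upload is needed.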

National eID legislation is evolving, allowing for streamlined KYC. For instance, a comprehensive legal basis for the remote use of digital identities for KYC in Europe is provided by the eIDAS 2.0 framework, the European Banking Authority's (EBA) guidelines, and the 5th AML Directive. Soon, the KYC process in the EU and other parts of the world will become much easier, less annoying, and generally better, much like the automated identity checks at airports.

Risk score aggregation and sharing.

Much of the discomfort caused by KYC is rooted in its repetitive nature. An individual has to pass KYC checks again and again at different banks, fintech platforms, crypto exchanges, luxury goods points of sale, and so on. Even worse, the decisions vary due to differing compliance policies and compliance officers' interpretations of those policies. That leads to stress, frustration with negative decisions, and general customer dissatisfaction.

The industry can overcome this problem by introducing aggregated risk scores that reflect the results of many KYC checks and AML screenings. Such a score should not include full information about the client, but rather a tokenized piece of data. A compliance officer will still be able to request additional documents or explanations, but for most people, KYC will become a fast, easy, and error-free procedure that takes their previous checks into account.
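
As an illustration of what "tokenized" could mean in practice, the sketch below issues a shareable payload containing only a score and a keyed token, never the underlying documents. It is a minimal assumption-laden example using Python's standard library; the field names and the HMAC-based scheme are ours, not an industry standard.

```python
import hashlib
import hmac
import secrets
from datetime import datetime, timezone

# Key held only by the KYC provider that performed the original checks.
PROVIDER_KEY = secrets.token_bytes(32)

def issue_risk_token(customer_ref: str, risk_score: float) -> dict:
    """Return a payload that attests a check happened without exposing PII."""
    issued_at = datetime.now(timezone.utc).isoformat()
    message = f"{customer_ref}|{risk_score}|{issued_at}".encode()
    token = hmac.new(PROVIDER_KEY, message, hashlib.sha256).hexdigest()
    return {
        "risk_score": risk_score,   # aggregated result of prior KYC/AML checks
        "issued_at": issued_at,
        "token": token,             # opaque to the recipient, verifiable by the provider
    }

def verify_risk_token(customer_ref: str, payload: dict) -> bool:
    """Provider-side check that the payload was not altered in transit."""
    message = f"{customer_ref}|{payload['risk_score']}|{payload['issued_at']}".encode()
    expected = hmac.new(PROVIDER_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, payload["token"])

payload = issue_risk_token("customer-123", 0.12)
assert verify_risk_token("customer-123", payload)
```

The receiving institution learns only the aggregated score and that a trusted provider vouches for it; the personal data behind the score stays with the original provider.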

The true potential of AI is still unknown.

Machine learning already serves as a backbone of many compliance procedures. Computer vision models recognize ID data and run liveness checks. Sophisticated algorithms analyze thousands of transactions in real time to identify unusual patterns potentially related to illicit activities. AI also enables perpetual KYC, an approach that continuously updates KYC profiles as new information about a customer becomes available.
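
A toy example of the transaction-monitoring side: the sketch below trains an unsupervised outlier detector (scikit-learn's IsolationForest) on synthetic "normal" activity and flags an unusual transaction. The features, thresholds, and data are invented for illustration and do not describe any specific production system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic features per transaction: [amount, hour_of_day, transactions_in_last_24h]
normal = np.column_stack([
    rng.lognormal(3.0, 0.5, 5000),   # typical amounts
    rng.integers(8, 22, 5000),       # daytime activity
    rng.poisson(2, 5000),            # a few transactions per day
])

# A large transfer at 3 a.m. during a burst of activity.
suspicious = np.array([[50_000.0, 3, 40]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for outliers and 1 for inliers.
print(model.predict(suspicious))     # typically [-1], i.e. flagged for review
```

Real systems use far richer features and combine several models, but the principle is the same: learn what ordinary behavior looks like and surface deviations for a compliance officer to review.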

Some in the industry are researching the potential of generative AI in compliance; however, it is limited by the very nature of such ML models. We commented on the matter in our blog post Trust, but Verify: GenAI in AML Compliance. We believe generative AI can help identify the drawbacks of existing software and fine-tune its behavior by creating digital identities with specified properties to train and improve ML algorithms.
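
To show what "identities with specified properties" might look like, here is a small sketch that generates synthetic identity records for stress-testing parsing and screening models. It uses the Faker library as a simple stand-in for a full generative model, and the "transliterated" property is an invented example of a controllable attribute.

```python
from faker import Faker

fake = Faker("de_DE")   # e.g. German-style names and addresses
Faker.seed(42)          # reproducible synthetic data

def synthetic_identity(transliterated: bool = False) -> dict:
    """Build one synthetic identity record with an optional controlled property."""
    name = fake.name()
    if transliterated:
        # Crude stand-in for a property we want to control: names with
        # diacritics flattened the way some OCR pipelines render them.
        name = (name.replace("ä", "a").replace("ö", "o")
                    .replace("ü", "u").replace("ß", "ss"))
    return {
        "name": name,
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
        "address": fake.address().replace("\n", ", "),
    }

# A small batch mixing both variants, e.g. to test name-matching robustness.
training_batch = [synthetic_identity(transliterated=(i % 2 == 0)) for i in range(4)]
for record in training_batch:
    print(record)
```

Synthetic records like these can probe how an ID-recognition or screening model handles edge cases without ever touching real customer data.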

The accelerated development of AI technologies brings both potential and risks. We still need to overcome various problems, including privacy and security threats and the lack of transparency and explainability in AI decisions, and to align the implementation of novel technologies with laws and regulations designed to protect people.