Generative AI and Compliance


Trust, but Verify: GenAI in AML Compliance

Generative artificial intelligence (GenAI) is the most hyped technology of the moment. Large language models have become standard productivity tools, and new startups promise to revolutionize industries with GenAI. However, most existing GenAI-based products help with simple everyday tasks rather than redefining how people work. Can GenAI become a life-changer instead of a time-saver in a complex workflow, such as anti-money laundering (AML) compliance? Let us explore further.

Old-School Machine Learning vs. GenAI

Machine learning, the application of artificial intelligence (AI) that allows machines to learn and improve from data, gained traction in the 1990s and became mainstream in the 2010s. Machine learning instruments work under the hood of popular websites, operating systems, business applications, etc. These applications are valuable due to their ability to make reliable predictions (predictive analytics).

GenAI is a particular implementation of machine learning that aims to generate new content (text, images, video and so on) based on its training dataset. The GenAI boom of the past decade was driven by the launch of numerous consumer-facing (as opposed to industrial) AI applications, such as chatbots and image generators, which brought AI tools into daily routines and workflows.

The GenAI Hype

The hype surrounding GenAI exploded in the late 2010s, bringing substantial investor interest to the industry and fueling the growth of numerous startups, some creating their own AI models and others building specialized solutions on top of existing ones. Financial institutions (FIs) have long employed various AI/machine learning instruments to offer digital-first onboarding, run adverse media screening of their customers, improve false positive detection in transaction monitoring and identify unusual patterns in user activity. Does GenAI have unique potential for changing the current regulatory technology stack, or could it make things worse by empowering cybercriminals and fraudsters?

The Challenges

The ability to create plausible-looking content makes GenAI shine in the hands of cybercriminals, who use it to bypass know your customer (KYC) procedures in banks, financial technology and crypto apps with fake identities. GenAI applications create fake IDs, proofs of address and other documents, generate photos and even pass the liveness check with deepfakes.

A recent example of GenAI's malicious potential was the network behind the now-defunct OnlyFake website, which was reportedly able to create realistic-looking IDs. Users claimed to have passed document verification at PayPal and cryptocurrency exchanges with them.

Beyond identity fraud, GenAI applications can also be tasked with creating realistic-looking invoices and records for money laundering purposes. The technology enables malevolent actors to single-handedly manage multiple synthetic identities and accounts. Over recent years, numerous tech enthusiasts and security experts have pointed out vulnerabilities in existing KYC procedures; however, no large-scale industry research has been published.

The Opportunities

AML compliance is standardized and directed by legal requirements. A properly trained conversational assistant can help compliance officers with everyday duties and provide a fast search across legal documents and cases. There is little space for AI "creativity," but GenAI can solve one of the long-standing issues associated with machine learning: AI bias.

Bias is a fundamental problem of machine learning. It may originate from training data that overrepresents or underrepresents specific groups based on race, gender or other characteristics. Bias can also be rooted in the very design of a machine learning model, reproduced through self-training and amplified over time. Here, GenAI can be tasked with generating vast amounts of synthetic data, such as customer profiles with certain qualities, which can then be used to train other machine learning models, tweak their performance and reach better results.
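As a minimal sketch of the idea above, the snippet below generates synthetic customer profiles with attributes sampled uniformly across groups, so no group is over- or underrepresented in expectation. The attribute pools, field names and `synthetic_profiles` helper are illustrative assumptions, not a real compliance schema; a production pipeline would use a trained generative model rather than random sampling.

```python
import random

# Hypothetical attribute pools -- illustrative only, not a real compliance schema.
REGIONS = ["EU", "APAC", "LATAM", "NA", "MEA"]
RISK_BANDS = ["low", "medium", "high"]

def synthetic_profiles(n, seed=42):
    """Generate n synthetic customer profiles, sampling regions uniformly
    so that no group dominates the resulting training set in expectation."""
    rng = random.Random(seed)  # fixed seed keeps the dataset reproducible
    profiles = []
    for i in range(n):
        profiles.append({
            "customer_id": f"SYN-{i:06d}",
            "region": rng.choice(REGIONS),        # uniform across groups
            "risk_band": rng.choice(RISK_BANDS),
            "monthly_tx_count": rng.randint(1, 200),
            "avg_tx_amount": round(rng.uniform(10.0, 5000.0), 2),
        })
    return profiles

profiles = synthetic_profiles(10_000)
```

Such a balanced synthetic set could then be mixed into the training data of a downstream screening or monitoring model to counteract skew in the historical records.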

The Considerations for GenAI

There are several matters to consider when discussing the implementation of AI in financial compliance. The matter of AI accountability became important with the adoption of machine learning models in business. A quote attributed to a 1970s IBM presentation states: "A computer can never be held accountable; therefore, a computer must never make a management decision." However, when it comes to GenAI applications such as chatbots, people use and overuse them in decision-making, and even brag about it on X and other social media platforms. In compliance, AI-induced mistakes may come with a high price for both the company and the customer, which means reliance on AI recommendations should be limited.

FIs must be able to explain their decisions to internal and external parties, including regulatory bodies. However, the complexity of modern AI models results in situations where the input signals behind a decision might not be easily identifiable or even known. This problem is even worse for generative models, which often trade explainability for flexibility and are more prone to AI hallucination.

Another matter is data privacy, which is even more crucial for companies in the regulated financial market. Machine learning models are prone to leaking data from their training datasets, to unmasking anonymized data through inference, and to "remembering" information about individuals in the training data even after that data is discarded. This is a known issue, and some generative AI developers explicitly state that they cannot ensure the security and confidentiality of the information that users provide.

Conclusion

Over the past decades, machine learning has revolutionized business in the financial services and regulatory technology markets. One should not be misled by the hype surrounding GenAI: Machine learning models already optimize dozens of crucial processes in the AML compliance workflow.

GenAI has become a massive problem for AML compliance, as cybercriminals employ the technology to create synthetic identities of nonexistent people, complete with forged documents and deepfake-powered faces. GenAI also has the potential to change the industry for the better; however, that potential is not yet as apparent as its ability to cause chaos for compliance officers.