In their 2024 research report, Generative AI in Risk and Compliance: Friend or Foe?, Parker & Lawrence examine the impacts and risks of emerging AI capabilities across 14 RegTech use cases, with insights from 17 expert contributors.
Among those use cases is transaction monitoring:
Transaction Monitoring Definition
Transaction Monitoring tools periodically or continuously monitor financial transactions to identify patterns that indicate suspicious activity and potential financial crime. These solutions are often trained on large amounts of historical data in order to learn the most common patterns, but most are also configurable to account for the client's specific risks. It is increasingly popular to offer sandbox environments within the platform to allow firms to test their configuration on their own datasets.
Key Generative AI Use Cases in Transaction Monitoring
Expert contributors helped map the opportunities, risks and imminence of 400+ Generative AI applications. The mapping of transaction monitoring opportunities shows 5 impactful Generative AI capabilities:
Creating New Data: The process of using Generative AI to produce novel datasets that mimic real-world data structures and patterns, particularly for the purposes of training algorithms or testing system performance.
Interpreting & Summarising Information: Utilising Generative AI to digest and condense large volumes of data into concise summaries, providing insights and overviews that aid in decision-making and comprehension.
Comparing Documents: The capacity of Generative AI to analyse and highlight differences or similarities between various documents, aiding in tasks such as version control, plagiarism detection, and legal document analysis.
Creating Reports: The use of Generative AI to create detailed, structured reports based on input data. This includes comprehending the potentially formal and official nature of regulatory reporting.
Internal Co-pilot: Leveraging Generative AI to assist in investigative tasks by generating hypotheses, suggesting lines of inquiry, and synthesising findings from diverse data sources to support human investigators.
Of these, 2 have already seen Generative AI implemented or will do so imminently.
Impact vs Readiness
Want to dive deeper? Check out the full report for expert commentary:
Key Generative AI Risks Mitigated through Transaction Monitoring
The analysis also accounts for the emerging risks that RegTech solutions will need to address. The mapping finds that transaction monitoring can contribute towards mitigating 3 risks:
Bias: The inadvertent reinforcement or creation of prejudiced outcomes by Generative AI. This encompasses any form of partiality, discrimination, or unfair weighting in the information or decisions generated by AI.
Personal Attacks: The use of Generative AI to harass, defame, or otherwise harm individuals. This includes the creation of compromising or harmful content designed to intimidate, blackmail, or infringe upon their privacy.
Malfunctioning Systems: The risk that complex Generative AI systems may behave unpredictably or fail to perform as intended, without detection, due to their opaque nature. This also includes the knock-on effects on surrounding systems.
However, none of these are imminent concerns.
Severity vs Imminence
The Industry's Perspective
Throughout the research, experts from technology vendors, regulated institutions and regulators share new insights on Generative AI in risk and compliance. Thank you to Tookitaki for contributing an industry perspective on Generative AI in transaction monitoring.
To keep this blog bitesize, here is just one excerpt inspired by Tookitaki's insights:
Powering AI with Industry-Wide Learnings
Tookitaki deploys a federated AI model, a collaborative approach to training machine learning (ML) models based on real-world crime scenarios from its Anti Financial Crime (AFC) Ecosystem:
Learning from Individual Cases: Instead of sharing personally identifiable information (PII), Tookitaki's system extracts patterns and learning points from individual client cases.
Central Model Training: These learnings are used to train a central AI model that benefits all clients in the network, akin to a shared intelligence system.
Cross-Client Benefit: As the central AI model learns from one client's data, it applies the insights gained to better protect all other clients in the network.
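The three steps above resemble federated averaging, a standard approach to collaborative model training. As a minimal illustrative sketch (not Tookitaki's actual implementation), each client refines a shared model on its own data, and only the resulting model weights are pooled centrally, so raw transactions and PII never leave the client:

```python
import numpy as np

def train_locally(weights, client_data, lr=0.1):
    """Hypothetical local update: the client refines the shared model
    on its own cases; raw data never leaves the client."""
    X, y = client_data
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)  # gradient of mean squared error
    return weights - lr * grad

def federated_round(global_weights, clients):
    """One round of federated averaging: each client trains locally,
    and only the updated weights are averaged into the central model."""
    local_weights = [train_locally(global_weights, data) for data in clients]
    return np.mean(local_weights, axis=0)  # shared intelligence, no shared PII

# Toy example: two clients, each holding private transaction features
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]

weights = np.zeros(3)
for _ in range(20):
    weights = federated_round(weights, clients)
```

In this toy setup, the only values crossing client boundaries are the weight vectors, which is what makes the "learning without sharing PII" claim workable in principle.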
The AFC Ecosystem consists of expert organisations, including regulated institutions and regulators, who contribute to a Typology Repository: a comprehensive collection of AML typologies with associated risk scores and specific case studies. There is massive potential for Generative AI to squeeze even more out of this repository. Among other possibilities, it could be deployed to derive patterns and insights from specific case studies, mapping them to typologies and updating the associated risk indicators for better detection. These applications may also surface entirely new typologies, for which Tookitaki can build detection models.
Tookitaki is a Singapore-based company specialising in financial crime prevention, offering innovative solutions for anti-money laundering (AML) and fraud detection. The company’s flagship AML suite, FinCense, tackles both fraud and AML risks through a unified platform.
About Parker & Lawrence
At Parker & Lawrence, we are passionate researchers with extensive experience in regulation, risk, and technology. Our unique blend of market analysis and collaboration with regulators and RegTech vendors positions us at the forefront of industry insights. We combine deep regulatory expertise with strong business and technology backgrounds, enabling us to provide holistic, informed perspectives. We specialise in RiskTech and RegTech, forming dynamic partnerships with clients to elevate their marketing strategies.
Ready to transform your thought leadership approach? Let’s connect and explore how we can drive success together.