
Enhancing Adverse Media Screening with Generative AI: Key Research Findings

In their 2024 research report, Generative AI in Risk and Compliance: Friend or Foe?, Parker & Lawrence examine the impacts and risks of emerging AI capabilities across 14 RegTech use cases, with insights from 17 expert contributors.


Among those use cases is Adverse Media Screening:


Adverse Media Screening Definition

Adverse media screening solutions scrutinise a wide array of media sources, including news publications and online databases, to identify and report negative information that might indicate criminal activities or reputational risks. They allow organisations to continuously monitor entities throughout their business relationship, ensuring compliance with regulatory requirements and mitigating potential exposure to financial crimes.


Key Generative AI Use Cases in Adverse Media Screening

Expert contributors helped map the opportunities, risks and imminence of 400+ Generative AI applications. The mapping of adverse media screening opportunities identifies five impactful Generative AI capabilities:


  • Labelling Data: The application of Generative AI to automatically assign meaningful labels or tags to data, potentially unstructured or semi-structured, enhancing data organisation and usability for machine learning models (a minimal labelling sketch follows this list).


  • Interpreting & Summarising Information: Utilising Generative AI to digest and condense large volumes of data into concise summaries, providing insights and overviews that aid in decision-making and comprehension.


  • Creating Reports: The use of Generative AI to create detailed, structured reports based on input data. This includes comprehending the potentially formal and official nature of regulatory reporting.


  • Editing Reports: Generative AI’s ability to identify errors, refine, correct and otherwise improve reports. This usually involves mapping reports to the policies or regulations which necessitate them, and the input data which inform them, for maximum context.


  • Internal Co-pilot: Leveraging Generative AI to assist in investigative tasks by generating hypotheses, suggesting lines of inquiry, and synthesising findings from diverse data sources to support human investigators.
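
To make the labelling capability concrete, here is a minimal sketch assuming an OpenAI-style chat-completions API; the model name, label taxonomy and prompt are illustrative assumptions rather than recommendations from the report.

```python
# Minimal sketch: asking an LLM to assign one screening label to an article.
# Assumptions: the `openai` Python client is installed and OPENAI_API_KEY is
# set; the model name and label taxonomy below are illustrative, not from
# the report.
from openai import OpenAI

LABELS = ["financial-crime", "fraud", "sanctions", "reputational", "not-adverse"]

client = OpenAI()

def label_article(text: str) -> str:
    """Tag an article with exactly one label from a fixed taxonomy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Label the article with exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep the labelling as deterministic as possible
    )
    return response.choices[0].message.content.strip()

print(label_article("Regulator fines Acme Bank over AML control failures..."))
```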

Key Use Case: Sourcing & Processing Adverse Media

LLMs are already capable of interpreting the relevance of public information to a particular individual or entity. Firms can fine-tune LLMs on the data deemed useful and not useful in their historical investigations to further increase accuracy. They can also feed the model their suspicious flags and final decisions in order to automate a first pass at these tasks, as sketched below.
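
Here is a minimal sketch of that first-pass idea: converting historical screening decisions into the chat-style JSONL format used by common fine-tuning APIs (such as OpenAI's). The record fields, labels and file name are hypothetical.

```python
# Sketch: turning historical screening decisions into fine-tuning examples.
# Assumptions: the records and field names are hypothetical; the JSONL chat
# format matches what common fine-tuning APIs (e.g. OpenAI's) expect.
import json

# Hypothetical historical records: article text plus the analyst's final call.
records = [
    {"article": "Acme Bank fined for AML failures...", "decision": "relevant"},
    {"article": "Acme Bank sponsors local marathon...", "decision": "not_relevant"},
]

with open("adverse_media_finetune.jsonl", "w") as f:
    for rec in records:
        example = {
            "messages": [
                {"role": "system",
                 "content": "Classify whether the article is adverse media "
                            "relevant to the screened entity: "
                            "relevant / not_relevant."},
                {"role": "user", "content": rec["article"]},
                {"role": "assistant", "content": rec["decision"]},
            ]
        }
        f.write(json.dumps(example) + "\n")
```

The resulting file could then be submitted to a fine-tuning job; the point is simply that each historical decision becomes one supervised example.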


Want to dive deeper? Check out the full report for expert commentary.



Key Generative AI Risks Mitigated through Adverse Media Screening

The analysis also accounts for the emerging risks that RegTech solutions will need to address. The mapping finds that adverse media screening can help mitigate two of these risks:


  • Bias: The inadvertent reinforcement or creation of prejudiced outcomes by Generative AI. This encompasses any form of partiality, discrimination, or unfair weighting in the information or decisions generated by AI.

  • Personal Attacks: The use of Generative AI to harass, defame, or otherwise harm individuals. This includes the creation of compromising or harmful content designed to intimidate, blackmail, or infringe upon their privacy.

Key Risk: Coordinated Disinformation

If bad actors use Generative AI to coordinate unfair attacks against individuals or firms, spreading mis- or disinformation, adverse media and investigations products can help verify the information against multiple reliable sources, raising doubts over the truthfulness of the data.
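
As a rough illustration of that verification idea, the sketch below scores a claim by how many independent, trusted outlets corroborate it; the allowlist, matching logic and data shapes are all hypothetical placeholders for a real pipeline.

```python
# Sketch: a naive corroboration check for a suspected disinformation claim.
# Assumptions: the trusted-source list, matching logic and threshold are all
# illustrative placeholders for a real verification pipeline.
TRUSTED_SOURCES = {"reuters.com", "ft.com", "apnews.com"}  # hypothetical allowlist

def corroboration_score(claim_hits: list[dict]) -> float:
    """Fraction of trusted sources that independently report the claim."""
    reporting = {hit["domain"] for hit in claim_hits if hit["matches_claim"]}
    return len(reporting & TRUSTED_SOURCES) / len(TRUSTED_SOURCES)

hits = [
    {"domain": "reuters.com", "matches_claim": False},
    {"domain": "unknown-blog.example", "matches_claim": True},
]
# A low score flags the claim as uncorroborated and potentially coordinated.
print(corroboration_score(hits))  # 0.0 -> treat the allegation with suspicion
```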


The Industry's Perspective

Throughout the research, experts from technology vendors, regulated institutions and regulators share new insights on Generative AI in risk and compliance. Thank you to FINTRAIL's Maya Braine for contributing an industry perspective on Generative AI in adverse media screening.


To keep this blog bite-sized, here are just a few of Maya's insights:


Quality Assurance Through Generative AI

One promising application of Generative AI involves its use in quality assurance within financial compliance frameworks. By reviewing analysts’ decisions and processes and benchmarking them against the firm’s policies, controls and regulatory obligations, Generative AI can pinpoint potential errors or deviations more effectively than traditional random sample testing. This promises a more targeted approach to ensuring compliance and accuracy in adverse media screening and beyond.
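
As a hedged sketch of that QA loop, the snippet below asks an LLM (via an OpenAI-style chat API) to benchmark a single analyst decision against a policy excerpt; the policy text, case record and model name are hypothetical.

```python
# Sketch: LLM-assisted QA of an analyst's screening decision against policy.
# Assumptions: an OpenAI-style chat API; the policy excerpt and case record
# below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

POLICY = "Any article alleging money laundering must be escalated to level 2."
CASE = ("Analyst closed the alert as 'no action'. Article: 'Prosecutors "
        "allege Acme Bank laundered funds for a sanctioned entity.'")

review = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a compliance QA reviewer. Compare the analyst's "
                    "decision with the policy and flag any deviation, citing "
                    "the policy line that was breached."},
        {"role": "user", "content": f"Policy: {POLICY}\n\nCase: {CASE}"},
    ],
)
print(review.choices[0].message.content)
```

Unlike random sampling, every closed alert could be screened this way, with human reviewers focusing only on the cases the model flags.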


Balancing AI Adoption

Broadly, financial institutions are leaning towards conventional AI methods due to the perceived risks associated with Generative AI. The uncertainty surrounding Generative AI has stalled its widespread adoption in critical compliance operations. However, the growing use of Generative AI by criminals, especially in sophisticated schemes such as deepfake-enabled fraud and first-party fraud, is a pressing concern that is pushing firms to better understand the technology and to devise strategies to counteract AI-facilitated crime effectively.



FINTRAIL is a global consultancy specialising in financial crime risk management. They provide advisory, assurance and training solutions to enhance anti-financial crime controls and ensure compliance, serving clients worldwide, including banks, FinTechs, and RegTech companies.


About Parker & Lawrence

At Parker & Lawrence, we are passionate researchers with extensive experience in regulation, risk, and technology. We combine deep regulatory expertise with strong business and technology backgrounds, enabling us to provide holistic, informed perspectives. Specialising in RiskTech and RegTech, we form dynamic partnerships with clients to elevate their marketing strategies.


Ready to transform your thought leadership approach? Let’s connect and explore how we can drive success together.


