
Financial Services Solutions user guide
Evan - Adverse Media Monitoring
Evan, the Adverse Media Monitoring agent, is an essential tool used by organizations to ensure they do not unwittingly become involved in criminal activity. The analysis of adverse media enables institutions to uncover clients' intended account usage, understand the context of transactions, and identify potential involvement in financial crimes such as money laundering, fraud, or terrorism. Despite its critical importance, traditional adverse media monitoring is expensive, time-consuming, prone to regulatory and audit criticism, and can be ineffective at achieving its core goal.
The agent is designed to improve both the accuracy and efficiency of an organization's efforts to identify adverse events on searched entities and related parties. It drastically reduces the time spent on data gathering, reviewing false positives, and generating comprehensive audit trails.
It enables analysts to prioritize their efforts on news results that carry meaningful risk. In addition, it introduces process standardization with regard to the sourcing and review of articles, clarifies decisions made on risk exposure, and creates a robust audit trail for each decision.
The agent includes pre-built integrations with Google API and other professional third-party media providers including but not limited to Dow Jones Factiva, LexisNexis, Brave Search API, and Refinitiv World-Check One. In addition to automatically extracting articles from top media providers, it enables analysts to work seamlessly with automation to handle exceptions, while capturing information to further improve the underlying models over time.
If an article cannot be dispositioned according to the pre-determined confidence thresholds, it is sent to an analyst for review. Over time, the model learns from the analysts' decisions, improving the automation rate and becoming an integral part of the compliance team.
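The threshold-based routing described above can be sketched as follows. This is a minimal illustration, not the agent's actual implementation; the threshold values, function names, and the three disposition outcomes are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical thresholds; in the product these are pre-determined
# confidence thresholds set in configuration.
AUTO_CLEAR_BELOW = 0.20     # low confidence of a true adverse match
AUTO_ESCALATE_ABOVE = 0.90  # high confidence of a true adverse match

@dataclass
class Disposition:
    article_url: str
    decision: str  # "auto_clear", "auto_escalate", or "manual_review"

def disposition(article_url: str, confidence: float) -> Disposition:
    """Route an article based on the model's confidence score."""
    if confidence < AUTO_CLEAR_BELOW:
        return Disposition(article_url, "auto_clear")
    if confidence > AUTO_ESCALATE_ABOVE:
        return Disposition(article_url, "auto_escalate")
    # Between the thresholds: send to an analyst for review.
    return Disposition(article_url, "manual_review")
```

Decisions made by analysts on the `manual_review` cases are what the model learns from over time.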
Key components
Evan, the Adverse Media Monitoring agent, includes several core components that support your workflows:
- Business Processes (BP):
- Core Adverse Media Monitoring: the core workflow is primarily used for ongoing monitoring of existing customers or for new customers as part of the onboarding and KYC process.
- Ad hoc Investigation: the workflow satisfies the need for ad hoc investigations that may arise as part of a less structured and predictable process, such as transaction fraud monitoring.
- File ingestion: allows users to upload a list of article URLs and entity information to be screened instead of having the agent search for and retrieve data from external providers.
- Batch processing: allows users to efficiently handle multiple investigations simultaneously and provides centralized management of batch-level reporting, notifications, and output generation.
- Quality control: allows users to continuously assess potential risks for monitored entities and provides an audit trail.
- Blocked URL management: allows users to manage URLs in the solution's ignore list.
- Data purge: allows users to periodically clean up and remove data according to their internal retention policies. Can be set up to run on a schedule or triggered on demand.
- Connectors: pre-built connectors that allow you to source data from professional media providers via API:
- Google Search (Google API)
- Factiva Dow Jones Web Services 2 API
- Refinitiv World-Check One
- LexisNexis World Compliance, L&P Media
- Thomson Reuters CLEAR Adverse Media
- Brave Search API
Google API is available as part of the base product offering; the other connectors require institutions to purchase their own licenses.
- Content parser: for out-of-the-box media providers, such as Google API, a content parser is used to extract relevant article content, strip irrelevant information (such as advertisements and comments), and format it in a human-readable form.
- Article classification ensemble model: the default model is trained on thousands of news articles containing material risk events, labeled and confirmed by trained compliance experts and leading financial institutions. The final output of the model is determined by multiple sub-models.
In addition, you can leverage the NLP (Natural Language Processing) model for grouping duplicate articles.
- Non-English article handling: the machine learning model can detect and natively handle articles in English or Spanish. Articles fetched during a search in a language the model does not natively support are marked for manual review. While the model will not process these articles, they remain available both in the manual task and in the final report. For some data providers, you can also enable a translation feature based on the Google Translate API.
- Keyword library: a library of over 520 keywords that were tagged as part of the model training. Keywords contribute to the prioritization logic but are only one component of it, so the presence of any single keyword does not by itself drive the risk prioritization of an article. Keywords can be adjusted in the configuration settings.
- High-risk country identification: each institution can define its own list of high-risk countries. The machine learning model uses this list to detect the presence of high-risk countries within each sourced article and determine their respective relationships to the searched entity.
- HITL (human-in-the-loop) interface: a graphical UI that presents the results of the screening, including the full text of each article, to the user. It also enables exception handling, reviewing, and reconciling results.
- Audit trail creation: rules-based automation that creates a standardized report in HTML or PDF format comprising all sourced articles, article materiality metrics, the confidence score for each article, the analyst's decision, and other essential entity-related data.
- Analytics: dashboards that provide a real-time view of the agent's effectiveness and can assist in tracking the solution's overall accuracy, identifying opportunities for improvement, and detecting issues.
- Decision reapplication: a mechanism that allows you to reuse previously made article review decisions across multiple investigations of the same entity.
- Blocked URL Management: provides a mechanism for administrators and external systems to manage URLs in the adverse media monitoring ignore list.
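The decision reapplication component above can be illustrated as a simple lookup keyed by entity and article. This is a hypothetical sketch of the concept only; the class, method names, and identifiers below are not part of the product's API.

```python
from typing import Optional

class DecisionStore:
    """Records analyst decisions so they can be reused when the same
    entity/article pair resurfaces in a later investigation."""

    def __init__(self) -> None:
        self._decisions: dict[tuple[str, str], str] = {}

    def record(self, entity_id: str, article_url: str, decision: str) -> None:
        self._decisions[(entity_id, article_url)] = decision

    def lookup(self, entity_id: str, article_url: str) -> Optional[str]:
        # Returns the prior decision, or None if the pair is new.
        return self._decisions.get((entity_id, article_url))

store = DecisionStore()
store.record("cust-001", "https://example.com/news/1", "false_positive")
# A later investigation of the same entity reuses the earlier decision:
store.lookup("cust-001", "https://example.com/news/1")  # "false_positive"
```

In practice, reapplied decisions would still be captured in the audit trail so each investigation's report remains complete.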
Model overview
The model used within Evan, the Adverse Media Monitoring agent, has the following applications:
- Predicts the score for each article's material relevancy to crime risk and whether it relates to the searched entity
- Identifies demographic information about the searched party in the article and matches it against input data
- Provides a human-readable justification message as to why each article was annotated with a particular decision
- Detects high-risk countries, including US-sanctioned jurisdictions
- Highlights risk-factor keywords for a better user-review experience
This model’s workflow consists of the following components:
- Article and input data: the model takes both the search entity (name + entity type, year of birth, location) and article information (title, content, publication date + additional metadata).
- Adverse model: predicts whether an article has a negative or non-negative sentiment, without considering the searched entity.
- Name matcher: identifies if the searched entity was mentioned in the article using a Named Entity Recognition (NER) model, and extracts entity type, age, and location.
- Focal model: detects if the search entity mentioned in the article is the focal entity of the article. The focal entity may be the suspect of a crime, the recipient of an enforcement action, or the perpetrator of a fraud event. Other entities may also appear in the article and need to be identified as non-focal entities, for example, journalists giving a quote, a judge passing a sentence, or a witness of a crime.
- Age matcher: compares either the date of birth or year of birth of the searched entity with a potentially matched entity. The article publication date is also taken into consideration to calculate the date of birth.
- Decision: makes a final decision on whether the article should be discounted as a false positive, based on the outcomes of the underlying models. It includes a confidence score and a human-readable justification message. False positive reasons include:
- Article is not adverse
- The searched entity is not the focal entity
- The searched entity was not identified within the article
- Entity types do not match
- Age mismatch between the searched entity and focal entity
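The decision step above can be sketched as a sequence of checks over the sub-model outcomes, each mapping to one of the listed false-positive reasons. This is a simplified illustration under assumed boolean inputs; the real decision component also produces a confidence score, which is omitted here.

```python
def decide(adverse: bool, name_matched: bool, is_focal: bool,
           entity_type_match: bool, age_match: bool) -> tuple[str, str]:
    """Combine sub-model outcomes into a final decision and a
    human-readable justification, mirroring the reasons listed above."""
    if not adverse:
        return "false_positive", "Article is not adverse"
    if not name_matched:
        return "false_positive", "The searched entity was not identified within the article"
    if not entity_type_match:
        return "false_positive", "Entity types do not match"
    if not is_focal:
        return "false_positive", "The searched entity is not the focal entity"
    if not age_match:
        return "false_positive", "Age mismatch between the searched entity and focal entity"
    return "potential_match", "All checks passed; article retained for review"
```

The ordering of checks here is an assumption; the actual ensemble weighs its sub-models rather than short-circuiting on the first failed check.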
Additionally, Evan, the Adverse Media Monitoring agent, supports integration with different LLM providers, including OpenAI, Mistral, Google, and Anthropic, to enhance model performance.