“Many banks I speak with understand regulators will be suspicious of claims that compliance programs can replace humans with AI without compromising outcomes. But compliance programs see the potential benefits and want to incorporate this new technology. To do so, institutions often emphasize that they are not looking to use AI to replace the human workforce and reduce headcounts. They highlight that using AI will enhance investigators’ ability to more efficiently and effectively do their jobs in support of investigative outcomes. Financial institutions may also identify that these tools can support their firm to be more risk-based as it will mean compliance investigators can spend more time on meaningful work items rather than check-the-box compliance efforts,” says Alex Feldman, Industry Practice Lead, Moody’s.
This reflects the position that many compliance officers across financial institutions have communicated to their regulators regarding whether and how artificial intelligence may be incorporated into compliance programs.
Compliance functions have traditionally been treated as cost centers, and Chief Compliance Officers (CCOs) are often required to manage expanding regulatory expectations within constrained resources.
Against this backdrop, when regulators examine an institution, they may assess statements about the intended use of AI against the institution’s actual use. Firms may therefore wish to proceed with caution, particularly where they have stated that AI is intended to support compliance activities rather than replace human oversight.
In the absence of technology-specific AML requirements, supervisory expectations are typically interpreted through existing frameworks governing suspicious activity monitoring and reporting. Laws and regulations that require institutions to monitor for and report suspicious activity generally do not prescribe how new technologies, including AI, should be adopted in compliance programs. Guidance from FinCEN, the federal prudential regulators, and state regulators is similarly limited in this respect.
As a result, institutions often look to the Federal Financial Institutions Examination Council (FFIEC) Bank Secrecy Act (BSA)/Anti-Money Laundering (AML) Examination Manual as a supervisory reference point for BSA compliance. The Manual addresses resourcing primarily in the context of the BSA Compliance Officer pillar, focusing on the availability of appropriate resources rather than specific technologies or headcounts.
Specifically, the Manual states that the BSA compliance officer should have access to suitable resources, which may include adequate staffing with the skills and expertise commensurate with the institution’s risk profile, size, complexity, and organizational structure, as well as systems that support the timely identification, measurement, monitoring, reporting, and management of illicit financial activity risks.1
1https://bsaaml.ffiec.gov/manual/AssessingTheBSAAMLComplianceProgram/04
In practice, compliance functions may face competing pressures. Senior management and shareholders may encourage the use of technology, including AI, as part of broader efforts to manage costs and allocate resources efficiently. At the same time, supervisory expectations continue to emphasize the importance of human judgment and accountability within compliance programs, particularly for decision-making activities.
Regulators, in turn, operate within similar constraints. Supervisory authorities have acknowledged that financial crime risks continue to evolve as bad actors adopt new technologies and methods, which can require institutions to update and adapt their control frameworks. At the same time, supervisory experience has shown that compliance deficiencies can arise where new tools or approaches are adopted without a clear understanding of their limitations or appropriate governance.
As a result, regulatory examiners are often required to balance these considerations when assessing compliance programs, including the use of emerging technologies, in circumstances where formal policy guidance may still be developing.
What follows outlines considerations that institutions may evaluate when assessing the potential use of AI within regulated compliance environments.
1. Research and retrieval models and LLMs
Some institutions are already using research- and retrieval-based approaches (often referred to as Retrieval Augmented Generation, or RAG) and Large Language Models (LLMs) within elements of their compliance programs. These technologies are generally applied to support specific tasks, such as information retrieval and documentation, rather than decision-making.
RAG-based models are designed to retrieve relevant information from a defined and controlled dataset and present that information to the user in response to a query. Their function is limited to sourcing and summarizing existing content, which can be applied to activities such as case research or information review within investigative workflows.
LLMs are a form of generative technology that can produce written text based on patterns learned during training. In compliance contexts, they are sometimes used to assist with drafting or structuring written narratives related to investigations or reviews. Their use typically involves supporting consistency in language, formatting, and documentation, while remaining subject to review and oversight by compliance teams.
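The retrieval step described above can be illustrated in miniature. The sketch below is a simplified assumption-laden example, not a production design: it uses an in-memory document store and simple keyword-overlap scoring, whereas real RAG systems typically use vector embeddings and pass the retrieved passages to an LLM for summarization. All record names and contents are illustrative.

```python
# Minimal sketch of retrieval over a defined, controlled dataset.
# Scoring scheme, corpus, and identifiers are illustrative assumptions;
# production systems use embeddings and an LLM summarization step.

def score(query, passage):
    """Keyword-overlap score: fraction of query terms found in the passage."""
    terms = set(query.lower().split())
    words = set(passage.lower().split())
    return len(terms & words) / len(terms)

def retrieve(query, corpus, top_k=1):
    """Return the top_k passages most relevant to the query."""
    ranked = sorted(corpus, key=lambda doc: score(query, doc["text"]), reverse=True)
    return ranked[:top_k]

# A controlled dataset: e.g., internal case notes an investigator may search.
corpus = [
    {"id": "case-001", "text": "Wire transfers routed through three shell companies"},
    {"id": "case-002", "text": "Customer onboarding documents verified without issue"},
]

hits = retrieve("shell companies wire transfers", corpus)
print(hits[0]["id"])  # prints case-001 for this query
```

Because the retriever only surfaces and ranks existing content from a bounded corpus, its outputs remain traceable to source records, which is part of what makes this pattern attractive for investigative research workflows.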
2. Evolving GenAI adoption
Where institutions consider expanding the use of generative AI beyond narrowly scoped retrieval or drafting support, adoption is often approached incrementally. In compliance contexts, this might involve limiting the role of generative models to defined use cases and maintaining boundaries around where automation is applied.
A feature of these approaches might also include continued involvement of compliance teams in reviewing, validating, and approving outputs generated by AI systems. Human oversight is generally viewed as central to investigative conclusions, escalation decisions, and regulatory reporting, with generative tools used to support, rather than substitute for, professional judgment.
This emphasis on a “human in the loop” model is consistent with supervisory expectations around accountability, governance, and explainability within compliance programs.
3. Starting with less critical investigative work
Where AI tools are introduced into investigative workflows, institutions may limit initial use to tasks that are lower risk and more administrative in nature. These activities tend to sit at the periphery of investigative processes and involve fewer determinations related to suspicion, escalation, or regulatory reporting.
Examples of such work could include organizing information, summarizing documents, formatting case files, or consolidating data already reviewed by investigators. By applying AI in these contexts, institutions may gain insight into how the technology performs within existing controls without affecting core investigative judgments.
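The consolidation and formatting tasks named above are deliberately mechanical, which is what makes them a low-risk starting point. As a rough sketch under assumed field names and layout (none of which come from a real system), such a task might look like:

```python
# Illustrative low-risk administrative task: consolidating data already
# reviewed by investigators into a formatted case-file summary.
# Field names and layout are assumptions for illustration only.

def consolidate(case_id, reviewed_items):
    """Group reviewed items by category and render a plain-text summary."""
    by_category = {}
    for item in reviewed_items:
        by_category.setdefault(item["category"], []).append(item["note"])
    lines = [f"Case file: {case_id}"]
    for category, notes in sorted(by_category.items()):
        lines.append(f"  {category}:")
        lines.extend(f"    - {note}" for note in notes)
    return "\n".join(lines)

reviewed = [
    {"category": "Transactions", "note": "Three wires flagged, all reviewed"},
    {"category": "KYC", "note": "Identity documents verified"},
    {"category": "Transactions", "note": "No structuring pattern observed"},
]
print(consolidate("case-001", reviewed))
```

Note that the inputs are items investigators have already reviewed; the tool rearranges and presents information but makes no determination of suspicion, which keeps it at the administrative periphery described above.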
4. Moving compliance personnel to higher risk areas
If certain lower risk or administrative tasks are supported by technology, institutions may reassess how compliance resources are reallocated across their programs. In some cases, this can result in greater focus by compliance personnel on activities or risk areas that require judgment, contextual assessment, or escalation decisions.
This shift need not imply a reduction in accountability or oversight. It reflects an effort to align human ability and insight with areas of higher complexity, uncertainty, or regulatory sensitivity, where professional judgment remains central. Investigative conclusions, determinations of suspicion, and decisions related to regulatory reporting generally continue to rest with designated compliance roles.
5. Demonstrating success through results
As institutions assess the use of AI within compliance programs, attention may turn to how outcomes are evaluated and communicated. Rather than focusing on the technology itself, assessments might consider whether the use of AI aligns with existing control objectives, governance frameworks, and supervisory expectations.
In practice, this could involve documenting how AI-supported activities operate within processes, how outputs are reviewed and validated by compliance teams, and how responsibilities and accountability are maintained. Observations from internal reviews, audits, or supervisory interactions could also help inform how institutions understand the impact of these tools over time.
6. Managing expectations early
Stakeholders, including shareholders and senior management, might push to adopt AI aggressively as part of a broader cost-reduction strategy. In these initial stages, there are often assumptions about how AI will help compliance programs do more with less, potentially leading to headcount reductions.
While these expectations may prove true over time, compliance programs wishing to avoid unnecessary supervisory scrutiny understand that change management on this scale needs to be conducted with regulatory expectations in mind. Supervisory assessments may consider whether compliance programs are moving in a controlled, well-governed manner and demonstrating sustainable results before institutions consider changes to staffing levels. It is incumbent on CCOs and BSA Officers to be involved in these technology conversations early so they can temper expectations related to headcount reduction.
As financial institutions explore the use of AI in their compliance programs, they do so against a backdrop of constrained resources, evolving financial crime risks, and limited technology-specific regulatory guidance.
While AI is often positioned as a means of supporting efficiency, supervisory expectations continue to emphasize human judgment, accountability, and governance, particularly in areas related to suspicious activity monitoring and reporting.
In the absence of prescribed rules on AI adoption, institutions might interpret expectations through existing BSA/AML frameworks, including the FFIEC Examination Manual, which focuses on the adequacy of resourcing and oversight rather than specific technologies. With this context in mind, institutions might consider approaching AI adoption incrementally, applying it first to narrowly scoped, lower-risk tasks, maintaining human review of outputs, and preserving responsibility for investigative conclusions and regulatory reporting.
This approach reflects an effort to integrate AI into compliance programs in a way that aligns with established supervisory principles, supporting existing control frameworks, while empowering institutions to evaluate the role of emerging technologies based on observable outcomes.
For more information on how Moody’s solutions for compliance help organizations leverage emerging technologies within existing AML and supervisory frameworks, please get in touch with the team. We would love to hear from you.
*Disclaimer: This content is for informational purposes only and does not constitute legal, financial, compliance or other professional advice. Please consult with a qualified professional for specific legal, financial, compliance, or other professional advice. For more terms and conditions pertaining to Moody’s products and services, refer to the disclaimer on Moody’s website.