Artificial Intelligence

Navigating the shift: How agentic AI is reshaping risk and compliance

The risk and compliance landscape is evolving rapidly, and agentic AI is emerging as a transformative force. Moody’s latest research, based on a survey of 600 global risk and compliance professionals, reveals how agentic AI is being adopted, the challenges organizations face, and what leaders should prioritize next.

Agentic AI: Awareness is growing, and adoption is accelerating 

About 40% of risk and compliance professionals are now aware of agentic AI in their field, and 26% are actively using agentic AI models — AI agents that search for information, make recommendations, or trigger actions across systems. While agentic AI adoption lags behind generative AI (GenAI), large language models (LLMs), and natural language processing (NLP), its growth trajectory is steep, especially as organizations seek more automation and augmentation in their workflows.

The first three [AI types] are self-explanatory, but the last one [agentic AI] is a bit new, and I don't think a lot of organizations, especially within financial services, really use it at this point.

— Risk and Governance Director, Banking in Europe, the Middle East, and Africa 
 

How is agentic AI being used?

Agentic AI is being applied across several key use cases within risk and compliance functions, reflecting the dynamic needs of modern organizations. Automation of manual or repetitive processes leads at 34% of respondents, signaling a clear priority on streamlining routine operations and freeing professionals for higher-value work. An equal share of respondents (34%) reported pursuing both automation and augmentation, a sign that agentic AI not only performs tasks independently but also works alongside professionals to improve overall efficiency and effectiveness. A further 32% of organizations focus on using agentic AI specifically to enhance human decision-making, drawing on its analytical capabilities for insights and recommendations that inform complex judgments. These patterns indicate that although automation remains the primary entry point, there is growing recognition of agentic AI’s potential to augment expertise and drive smarter, more collaborative outcomes in risk management and compliance. 

It’s typical to start with the business case to automate something and then quickly realize we need to also bring in other information and human decision-making, so we quickly move to a mix.

— Director, Professional Services in Europe, the Middle East, and Africa 
 

Human oversight remains essential

Despite the rapid advancement and increasing awareness of agentic AI, only 4.5% of organizations trust AI to act fully autonomously. Instead, 47% of organizations require AI systems to make recommendations while reserving final decision-making authority for human professionals, preserving accountability and maintaining regulatory compliance. A further 27% permit a degree of AI autonomy, but couple it with rigorous audits and continuous monitoring controls to mitigate potential risks. These figures underscore a prevailing caution across the industry: human oversight remains a cornerstone of risk management practice. Organizations are prioritizing a balanced approach that leverages agentic AI’s efficiency and analytical power while maintaining strict governance to safeguard against unintended outcomes and uphold ethical standards. 

Ultimately it is the human beings that have to be accountable... You can't outsource accountability. That's a principle in regulation that will always stay, so I think human involvement has to be mandatory.

— Head of Compliance, Professional Services in Europe, the Middle East, and Africa 
 

Challenges and concerns: Overreliance, transparency, and data privacy

Overreliance on agentic AI in compliance can erode human expertise, weaken critical thinking, and increase vulnerability to errors. Data privacy, sovereignty, and confidentiality remain key concerns, underscoring the need for strong data governance wherever AI handles sensitive information. Unchecked AI decisions can create regulatory and reputational risks, and AI “hallucinations” and a lack of transparency further threaten compliance, making validation and oversight crucial. These challenges demand ongoing safeguards and monitoring to maintain ethical and regulatory standards when using AI. 

The biggest risk that I see is that we're building a black box and that’s where we lose control... The risk is that we build this black box, and nobody understands what we put in and what comes out.

— Director, Professional Services in Europe, the Middle East, and Africa 
 

Regulatory priorities: Privacy, accountability, and transparency

Regulatory priorities for agentic AI center on data privacy and protection, with 20% of survey respondents ranking strong safeguards for sensitive information as their single most important concern. Defining accountability is also critical: 16% of respondents named clear frameworks for assigning legal responsibility as their top priority as AI becomes more autonomous. Transparency and explainability were the primary concern for 13%, promoting compliance and trust. Additionally, 8% stressed the need for formal governance and approval processes, especially for high-risk uses. Together, these priorities support responsible AI innovation while managing compliance risks. 

I think the most important part is promoting transparency because that will mitigate the risk of the black box scenario, and that's the key risk.

— Director, Professional Services in Europe, the Middle East, and Africa 
 

Accelerating adoption demands responsible leadership 

Agentic AI is not yet mainstream, but its adoption is accelerating, especially in sectors seeking operational efficiency and smarter decision-making. Human oversight is nonnegotiable: senior leaders must build accountability and auditability into every AI deployment and invest in staff training, robust governance frameworks, and regular audits. Regulation, training, and transparency are critical for safe usage, and widespread adoption is expected within one to three years. Organizations that prepare now will be best positioned to leverage agentic AI for competitive advantage.

Agentic AI offers significant opportunities for risk and compliance but only if organizations address the challenges of oversight, transparency, and regulation. Senior leaders must champion responsible adoption, keeping human expertise at the core of decision-making.

 

Get in touch 

For more information about Moody’s study into AI and risk-related compliance, visit our website, or get in touch with the team at any time. 


Learn more

Leverage AI for risk and compliance

For more information on how Moody’s can support your risk and compliance processes, including automated screening that leverages AI, please get in touch. We would love to hear from you.