Artificial Intelligence

AI’s leap forward: What risk and compliance professionals need to know

Artificial intelligence (AI) is no longer a distant frontier; it is reshaping the risk and compliance landscape in real time. In our recent webinar, Mind the gap: AI’s big leap in risk and compliance, industry leaders explored the evolving adoption of AI, its challenges, and the strategic imperatives for financial institutions. Here are the seven key takeaways:

1. AI adoption is accelerating, but mind the implementation gap

As Paul Nola, Partner at We Live Context, highlighted, Moody’s 2025 global study, ‘From reactive to proactive: How AI is transforming risk and compliance’, found that AI adoption has surged from 9% to 24% among active users, and to 53% once pilots are included. However, only 30% report significant impact, and over a third (34%) are not measuring success. This signals a clear implementation gap: organizations are excited but not yet fully equipped to extract value. “AI is moving out of innovation labs and into frontline risk and compliance functions,” said a panelist, emphasizing the urgency driven by rising regulatory demands and bad actors’ increasing use of AI.


“The 'wait and see' era for AI in compliance is officially over. Our latest global study shows adoption has surged from 30% to 53% in just two years. This isn’t just a trend—it’s a shift toward proactive, tech-enabled risk management. The real question is: is your firm part of the 53%, and are you gaining an edge?”

— Ted Datta, Senior Director, Moody’s


“Here’s a red flag: over a third of firms using AI aren’t measuring its effectiveness. Is this just a growing pain of rapid adoption—or does it explain why 46% report only moderate impact? Either way, it’s a governance blind spot we need to close.”

— Ted Datta, Senior Director, Moody’s
 

2. Use case specificity is key to success

Panelists stressed that not all AI tools are created equal. General-purpose large language models are great for summarizing emails or drafting policies, but decision-making tasks like approving transactions require more tailored approaches.

“A well-thought-out AI strategy is critical,” a panelist noted. “You need to know what tool to use where and how.” This means combining traditional machine learning with fine-tuned models and robust governance frameworks to meet regulatory standards.

3. Explainability and trust are nonnegotiable

In risk and compliance, explainability isn’t just a feature but a requirement. Panelists emphasized that without clear, measurable, and transparent AI processes, trust erodes. “If you can’t explain it, you probably shouldn’t rely on it,” a panelist said, advocating for data-driven frameworks that allow for visibility into AI decisions.
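
To make that concrete, here is a minimal, hypothetical sketch (not from the webinar or the study) of what visibility into an automated decision can look like: a screening score that records the reason behind every contribution, so a reviewer can see exactly why an alert was or was not escalated. All field names and weights below are illustrative assumptions.

```python
# Illustrative only: a hypothetical screening decision that carries its own
# reason codes, so a reviewer can trace the outcome back to the inputs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScreeningDecision:
    escalate: bool
    score: float
    reasons: List[str] = field(default_factory=list)

def score_alert(customer: dict) -> ScreeningDecision:
    """Transparent, rule-weighted scoring: every contribution is recorded."""
    score, reasons = 0.0, []
    if customer.get("sanctions_list_match"):
        score += 0.6
        reasons.append("Name matched a sanctions list entry")
    if customer.get("high_risk_jurisdiction"):
        score += 0.25
        reasons.append("Registered in a high-risk jurisdiction")
    if customer.get("adverse_media_hits", 0) > 0:
        score += 0.15
        reasons.append(f"{customer['adverse_media_hits']} adverse media hit(s)")
    return ScreeningDecision(escalate=score >= 0.5, score=round(score, 2), reasons=reasons)

decision = score_alert({"sanctions_list_match": False,
                        "high_risk_jurisdiction": True,
                        "adverse_media_hits": 2})
print(decision.escalate, decision.score, decision.reasons)
```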

4. Data quality and infrastructure are the foundation

The panel unanimously agreed: garbage in, garbage out (GIGO). AI’s effectiveness hinges on clean, well-structured data. One panelist outlined a practical road map, starting with data mapping, cleaning, validation, and tagging, to boost readiness. “Poor data is one of the biggest reasons AI projects fail,” the panelist warned. The panel added that data infrastructure, like centralized data lakes, is essential for scaling AI adoption with speed and precision.
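
As a rough illustration of those steps (the field names, rules, and pandas-based approach are our assumptions, not the panelist’s actual road map), the sketch below maps source fields to a common schema, cleans them, flags records that fail validation, and tags provenance:

```python
# Illustrative only: mapping, cleaning, validating, and tagging a small customer
# extract before it is used to feed any AI-driven screening or monitoring.
import pandas as pd

raw = pd.DataFrame({
    "CustName": [" Acme Corp ", "Globex LLC", None],
    "Country_of_Reg": ["GB", "uk", "US"],
    "OnboardDate": ["2024-01-15", "2024-02-15", "unknown"],
})

# 1. Map: rename source fields to a common schema
data = raw.rename(columns={"CustName": "customer_name",
                           "Country_of_Reg": "country",
                           "OnboardDate": "onboarded_on"})

# 2. Clean: trim whitespace, normalize country codes, parse dates
data["customer_name"] = data["customer_name"].str.strip()
data["country"] = data["country"].str.upper().replace({"UK": "GB"})
data["onboarded_on"] = pd.to_datetime(data["onboarded_on"], errors="coerce")

# 3. Validate: flag records that are not yet fit for an AI pipeline
data["is_valid"] = data["customer_name"].notna() & data["onboarded_on"].notna()

# 4. Tag: record provenance so downstream models know what they are consuming
data["source_system"] = "crm_export_v1"

print(data)
```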


“What’s the number one predictor of AI success in compliance? Data quality. Firms using AI are 2.2 times more likely to report high-quality data. Yet only 27% of organizations rate their data as ‘high quality.’ That’s a massive opportunity—mastering your data strategy could be the key to leapfrogging the competition.”

— Ted Datta, Senior Director, Moody’s

5. Autonomy requires a risk-based approach

The idea of AI making autonomous decisions sparked debate. While only 5% of survey respondents support full autonomy, most favor a hybrid model with human oversight.

“Keep humans in the loop,” the panel advised, “especially for high-risk decisions [where] you need domain knowledge and technical oversight.” They urged organizations to assess the impact and visibility of errors before handing over the reins.
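
As a purely illustrative sketch of that risk-based approach (the thresholds and names are assumptions, not figures from the webinar), a hybrid workflow can be as simple as routing any high-impact or low-confidence outcome to a human reviewer:

```python
# Illustrative only: automate low-risk, high-confidence outcomes and keep a human
# in the loop wherever the impact of an error or the model's uncertainty is too high.
def route_decision(risk_score: float, model_confidence: float,
                   risk_threshold: float = 0.7, confidence_floor: float = 0.9) -> str:
    """Return who decides: the model alone, or a human reviewer."""
    if risk_score >= risk_threshold or model_confidence < confidence_floor:
        return "human_review"   # high impact or high uncertainty: escalate
    return "auto_approve"       # low risk and high confidence: safe to automate

print(route_decision(risk_score=0.2, model_confidence=0.95))  # auto_approve
print(route_decision(risk_score=0.8, model_confidence=0.95))  # human_review
print(route_decision(risk_score=0.2, model_confidence=0.60))  # human_review
```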


“With 62% of firms now encouraging the use of LLMs, we’re seeing a new tension emerge. What’s the bigger compliance risk: the rise of unmonitored ‘Shadow AI’ or an over-reliance on AI that erodes human judgment?”

— Ted Datta, Senior Director, Moody’s

6. Regulation is catching up, and that’s a good thing

From the EU AI Act to the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC), the regulatory landscape is evolving. The panel opined that regulation doesn’t stifle innovation; it provides guardrails. “Lack of regulation doesn’t mean less responsibility,” one panelist said. “It just means the burden of judgment falls on the company.”

7. The future of work is already here

AI agents are becoming virtual coworkers, assisting with coding, testing, and even decision-making. “Risk professionals must evolve,” the panel concluded. “Understanding both business and technology will be critical.”


“AI isn’t about replacement—it’s about reinvention. Our data shows 96% of compliance professionals expect their roles to evolve, with most seeing a shift toward strategic and advisory functions. Is your firm investing in upskilling to unlock this potential, or are you risking your top talent being left behind?”

— Ted Datta, Senior Director, Moody’s

Final thoughts

This webinar underscored a pivotal truth: AI is not a silver bullet, but with the right strategy, it’s a powerful ally. From adoption to autonomy to explainability to infrastructure, the journey ahead demands thoughtful planning, cross-functional collaboration, and continuous learning.

 

To listen to the full webinar and dive deeper into the key findings, visit http://moodys.com/kyc/ai-study


Learn more

Leverage AI for risk and compliance

For more information on how Moody’s can support your risk and compliance processes, including automated screening that leverages AI, please get in touch – we would love to hear from you.