Artificial Intelligence

Human in the loop: Why human oversight still matters in AI-driven risk and compliance

As artificial intelligence (AI) continues to transform risk and compliance functions in banking and fintech, the debate around “human in the loop” is more relevant than ever. Moody’s latest global study of 600 risk and compliance professionals reveals that although AI adoption is accelerating, human oversight remains a cornerstone of trust and accountability.

The state of AI adoption

  • About 91% of respondents stated they are aware of AI’s role in risk and compliance, and 53% are actively using or trialing it, a sharp rise from 30% in 2023. 
  • Adoption is highest in fintech, asset and wealth management, and professional services; government and corporates are more cautious. 
  • Larger companies — particularly those in North America, Europe, and Asia-Pacific — are leading adoption, but regulatory uncertainty and integration challenges remain.


Human in the loop: The industry perspective

Despite enthusiasm for AI, most respondents believe human oversight is essential. The report finds that:

  • Approximately 84% agree AI offers significant advantages, but only 30% see those benefits clearly in practice. 
  • Concerns include overreliance on AI, data privacy, errors, and lack of transparency. 
  • Safeguards such as training, governance frameworks (including strong quality control programs), and other internal tools are widely implemented to mitigate risks. 


“Ultimately, it is the human beings who must be accountable. You can’t outsource accountability. That’s a principle in regulation that will always stay, so I think human involvement has to be mandatory.”
— Head of Compliance, Professional Services in Europe, the Middle East, and Africa


“There needs to be a human component because while AI is great, sometimes nothing can beat good old common sense and intuition.”
— Chief Financial Officer, Corporates in North America


While most respondents prefer human oversight in AI-driven risk and compliance, a small group (about 5% of survey respondents) indicated they are comfortable with fully autonomous AI systems that operate without human involvement. These respondents span several sectors: banking and professional services account for the largest share, with fintech and asset and wealth management also represented.

Why human in the loop remains the expectation

The prevailing view, which is shared by 42% of survey respondents, is that human oversight is mandatory, not optional. And this model is evolving. We’re seeing a bifurcation: Humans handle high-risk, complex tasks while AI bots take on low-risk, repetitive work. Oversight is shifting from operational decision-making to quality assurance and quality control, much like how banks onboard new analysts with tiered autonomy.

Regulatory perspectives: What’s next?

Regulators are watching closely. The transition to more automation raises questions about staffing, governance, and risk management. Large institutions are already leveraging regulators' encouragement to innovate by experimenting with AI-driven compliance. The future may see AI agents treated like new analysts, subject to rigorous review before gaining autonomy. Banks and fintechs should balance innovation with compliance, learning from both the majority and the 5% who are pushing boundaries. The key is maintaining robust human oversight, especially as AI takes on a greater share of low-risk work.

Scenarios: human in, out of, and on the loop

To illustrate the stakes, consider three scenarios:

  • Scenario 1: Human in the loop
    AI aggregates data and flags potential issues, but a compliance professional reviews and signs off. This approach minimizes risk and satisfies regulatory expectations. 
  • Scenario 2: Human out of the loop
    AI makes decisions autonomously. Although efficient, this model could expose organizations to regulatory scrutiny and operational risk if not carefully managed. 
  • Scenario 3: Human on the loop
    AI makes decisions, but compliance professionals monitor and evaluate the results to ensure the outcomes reflect appropriate judgment and the decisions remain in line with the organization's risk tolerance. 
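The three oversight models above can be sketched as a simple decision pipeline. Everything in this sketch is hypothetical and for illustration only: the `score_alert` stub stands in for an AI model, and the 0.7 escalation threshold and `audit_queue` are invented placeholders, not anything from the survey.

```python
from enum import Enum


class OversightMode(Enum):
    IN_THE_LOOP = "human reviews every AI flag before action"
    ON_THE_LOOP = "AI acts; human monitors results and can override"
    OUT_OF_LOOP = "AI acts autonomously"


# Decisions made on the loop are queued here for later human review.
audit_queue = []


def score_alert(alert: dict) -> float:
    """Hypothetical AI model: returns a risk score in [0, 1]."""
    return alert.get("risk_score", 0.0)


def decide(alert: dict, mode: OversightMode, human_review=None) -> str:
    score = score_alert(alert)
    ai_decision = "escalate" if score >= 0.7 else "clear"

    if mode is OversightMode.IN_THE_LOOP:
        # Scenario 1: AI only flags; a compliance professional signs off
        # on every decision before it takes effect.
        return human_review(alert, ai_decision)

    if mode is OversightMode.ON_THE_LOOP:
        # Scenario 3: AI decides, but the result is queued so humans can
        # monitor outcomes and override out-of-tolerance decisions.
        audit_queue.append((alert, ai_decision))
        return ai_decision

    # Scenario 2: fully autonomous — no human checkpoint at all.
    return ai_decision
```

The design point the sketch makes is the same one the scenarios make: the AI logic is identical in all three modes; what differs is where (or whether) a human checkpoint sits between the model's output and the action taken.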


Conclusion

Human oversight in AI-driven compliance isn’t just a nice-to-have; it’s a strategic advantage. Although AI is rapidly transforming risk and compliance, human oversight continues to play a vital role. As adoption grows, the industry is learning to balance innovation with accountability, so technology augments rather than replaces human judgment.

To dive deeper into the key findings from our global survey, visit http://moodys.com/kyc/ai-study


Learn more

Leverage AI for risk and compliance

For more information on how Moody’s can support your risk and compliance processes, including automated screening that leverages AI, please get in touch – we would love to hear from you.