Regulatory News

From compliance to resilience: Regulators drive new standards for AI model risk management

The financial services sector is undergoing a structural transformation, driven by the widespread integration of advanced AI/ML models into mission-critical functions like credit underwriting and fraud detection. This dependence has exponentially magnified model risk. In response, global financial regulators are actively strengthening their oversight of model risk management (MRM). This marks a definitive shift away from traditional, implicit regulation toward explicit and comprehensive frameworks, as exemplified by recent publications from leading bodies like the Financial Stability Institute (FSI) of the Bank for International Settlements (BIS) and the Office of the Superintendent of Financial Institutions (OSFI) in Canada.

OSFI Guideline E-23: Redefining the model risk perimeter

A fundamental aspect of this regulatory evolution is the redefinition of what constitutes a "model." While traditionally defined as a formal mathematical tool for calculations (like capital or liquidity), this perimeter has significantly broadened:

  • Expanded Scope: As OSFI's finalized Guideline E-23 illustrates, a model now includes any quantitative tool or algorithm that uses data to generate an output. This explicitly covers AI-powered systems and black-box models that were previously managed under less rigorous frameworks.

  • Enterprise-Wide Mandate: The new OSFI guideline, which comes into full effect in May 2027, establishes a principles-based, risk-proportional approach. It mandates that financial institutions manage model risk across the entire enterprise, regardless of a model's complexity or its specific application.

  • Addressing AI/ML: The guideline provides specific expectations for managing the unique risks associated with AI and ML models, including explainability, fairness, and bias.
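Expectations around fairness and bias translate into concrete, repeatable checks in practice. The sketch below is illustrative only, not drawn from the guideline: it computes the demographic parity difference, one common fairness metric, for a binary decision (e.g., credit approved/declined) across two groups. The function name, data, and the two-group assumption are all ours.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-outcome rates between two groups.

    y_pred: list of 0/1 model decisions (e.g., 1 = credit approved)
    group:  parallel list of group labels (assumes exactly two groups)
    """
    rates = {}
    for g in set(group):
        decisions = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    a, b = rates.values()
    return abs(a - b)

# Illustration: group A approved 3/4, group B approved 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A disparity of 0.5 here means the approval rate differs by 50 percentage points between groups; what threshold warrants action is a governance decision, not a property of the metric.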

FSI Paper: Navigating the AI explainability trade-off

A key challenge the industry and regulators must confront is the "black box" nature of many advanced AI models. The recent FSI paper directly addresses the dilemma of AI explainability, acknowledging the inherent trade-off between model performance and interpretability. The paper suggests that regulators must find a balance that fosters innovation while ensuring adequate risk controls. Limited explainability severely hinders the management of core model risks, making it difficult to:

  • Identify Bias: Spot and mitigate discriminatory biases embedded in the training data, ensuring equitable outcomes.

  • Monitor for Drift: Detect "model drift," where a model's performance degrades or becomes unreliable over time due to shifts in real-world data.

  • Ensure Accountability: Trace and justify individual model-driven decisions. For this reason, the paper advocates for transparent and auditable governance frameworks, even if the underlying models themselves are not fully transparent.
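Of the risks above, drift monitoring lends itself most readily to simple statistical checks. As a minimal sketch (not something prescribed by the FSI paper), the population stability index (PSI) is a metric widely used in model monitoring to compare a model's recent score distribution against its development-time baseline; the bucket count and the 0.25 alert threshold below are common rules of thumb, not regulatory requirements.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline score sample
    (expected, e.g., scores at model development) and a recent sample.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift warranting investigation.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(sample, i):
        left, right = edges[i], edges[i + 1]
        # Last bucket is right-inclusive; floor zero fractions to avoid log(0).
        n = sum(left <= x < right or (i == bins - 1 and x == right)
                for x in sample)
        return max(n / len(sample), 1e-6)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

baseline = [i / 100 for i in range(100)]          # uniform scores at development
shifted  = [min(1.0, s + 0.3) for s in baseline]  # real-world data has shifted
print(psi(baseline, shifted) > 0.25)  # True: flags significant drift
```

In production, a check like this would run on a schedule against each deployed model's inputs and outputs, with breaches routed to the model owner under the institution's MRM escalation process.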

A global and holistic regulatory response

Beyond the challenges of black-box models, the FSI paper also highlights a number of broader systemic risks amplified by AI. These insights underscore that the regulatory response to AI must be holistic, addressing not just the technical aspects of the models but also the broader operational, ethical, and third-party risks they introduce:

  • Systemic and Concentration Risk: The paper warns of potential concentration risk, where financial institutions become overly dependent on a small number of third-party AI service providers. This could create a single point of failure for the financial system.

  • "Human-in-Control" Frameworks: The FSI stresses the need for frameworks that ensure human oversight and intervention remain a central part of critical decision-making processes, thereby mitigating the risk of automated harm.

The global nature of this regulatory shift is evident as other jurisdictions follow suit. The Prudential Regulation Authority (PRA) in the UK has issued its own principles for effective MRM, and the Reserve Bank of India (RBI) has proposed guidelines focused on credit-risk models. In the U.S., the National Institute of Standards and Technology (NIST) has published voluntary frameworks for responsible AI that are widely adopted by the industry as a benchmark for best practice.

The imperative for strategic MRM transformation

The increasing global focus on Model Risk Management is not a fleeting compliance exercise but a fundamental recalibration of the financial operating model in response to accelerating technological innovation. The confluence of explicit mandates (like OSFI's E-23) and strategic guidance (from the FSI) issues a clear, unified call to action: financial institutions must urgently move beyond siloed, traditional risk processes.

The strategic imperative is to embrace a comprehensive, enterprise-wide MRM framework specifically designed for the AI era. Success will hinge not on model prohibition, but on a controlled approach that integrates regulatory guidance:

  • Mastering the Explainability Trade-Off: Firms must accept that superior AI performance may necessitate a deliberate trade-off with full interpretability. The FSI paper supports this position, provided robust governance and compensating safeguards are in place.

  • Implementing Risk-Specific Safeguards: For high-stakes applications, such as models used for regulatory capital, firms should be prepared for restrictions, output floors, or reduced usage to be imposed as a regulatory condition for deployment, as discussed in the FSI paper.

  • Ensuring External Accountability: MRM must extend beyond internal risk to encompass procedural fairness and public trust. This requires transparent, auditable governance that ensures redress channels and clear, concise explanations for AI-driven decisions impacting consumers, a vital focus of the FSI paper.

The future of financial stability hinges on banks integrating these strategic compliance measures. By transforming the regulatory challenge into a competitive differentiator, firms can strengthen the integrity and resilience of the global financial system. Effective MRM is no longer optional; it is the critical governance layer required to ensure resilience and secure the future trajectory of financial innovation.
