Regulatory News

FSB report assesses financial stability implications of artificial intelligence

The Financial Stability Board (FSB) published a report assessing the financial stability implications of artificial intelligence (AI). The report calls for financial authorities to enhance monitoring of AI developments, assess whether financial policy frameworks are adequate, and strengthen their regulatory and supervisory capabilities, including by using AI-powered tools.

This report outlines recent developments in the adoption of AI in finance and their potential implications for financial stability. While AI offers benefits such as improved operational efficiency, regulatory compliance, personalized financial products, and advanced data analytics, the report notes that several AI-related vulnerabilities stand out for their potential to increase systemic risk, including:

  • Third-party dependencies and service provider concentration – Reliance on specialized hardware, cloud services, and pre-trained models has increased the potential for AI-related third-party dependencies. The market for these products and services is also highly concentrated, which could expose financial institutions to operational vulnerabilities and systemic risk from disruptions affecting key service providers.

  • Market correlations – The widespread use of common AI models and data sources could lead to increased correlations in trading, lending, and pricing. This could amplify market stress, exacerbate liquidity crunches, and increase asset price vulnerabilities. Increasing automation in financial markets could amplify AI-driven market correlations further.

  • Cyber risk – AI uptake by malicious actors could increase the frequency and impact of cyber-attacks. Intensive data use, novel modes of interacting with AI services, and greater reliance on specialized service providers all expand the surface for cyber-attacks.

  • Model risk, data quality, and governance – The complexity and limited explainability of some AI methods, and the difficulty of assessing data quality for widely used AI models, could increase model risk for financial institutions that lack robust AI governance. The use of opaque training data sources for these models also complicates data quality assessments. Understanding the quality and accuracy of model outputs is further complicated by novel forms of inaccuracy, such as hallucinations.

  • Generative AI-related financial fraud – Generative AI could increase financial fraud and the ability of malicious actors to generate and spread disinformation in financial markets. Misaligned AI systems that are not calibrated to operate within legal, regulatory, and ethical boundaries can also engage in behavior that harms financial stability.

While existing financial policy frameworks address many of the vulnerabilities associated with the use of AI by financial institutions, the report notes that more work may be needed to ensure that these frameworks are sufficiently comprehensive. To this end, the FSB, standard-setting bodies (SSBs), and national authorities may wish to:

  • consider ways to address data and information gaps in monitoring developments in AI use in the financial system and assessing their financial stability implications.

  • assess whether current regulatory and supervisory frameworks adequately address the vulnerabilities identified in this report, both domestically and internationally.

  • consider ways to enhance regulatory and supervisory capabilities for overseeing policy frameworks related to the application of AI in finance, for instance, through international and cross-sectoral cooperation and sharing of information and good practices.

  • consider leveraging AI-powered tools to enhance their supervisory and regulatory capabilities via supervisory and regulatory technologies (SupTech and RegTech).
