Banking

The foundation of risk management

Obtaining and storing data is crucial, but it’s only the first step in building an effective data management framework. To facilitate effective risk decisions, data must be turned into the right information and delivered to the right people in an understandable format. This article focuses on developing an effective data management framework for the analytical data used for regulatory and business reporting.

Introduction

As banks face increasing regulatory scrutiny, risk managers and senior bankers must adapt their current practices, particularly when it comes to data and risk management. These new practices range from implementing digital and mobile banking to dealing with low investment yields and capital strengthening. Systematically adapting these practices proves onerous, particularly with so many evolving regulations:

  • Basel III Endgame.
  • Financial Reporting (FINREP).
  • Common Reporting Standards (COREP).
  • International Financial Reporting Standards (IFRS) 7 and 9.
  • Dodd-Frank.
  • Comprehensive Capital Analysis and Review (CCAR).
  • The European Central Bank’s (ECB) Asset Quality Review (AQR).
  • Stress tests.
     

Table 1 provides an overview of the regulations and their associated requirements.

One core element woven through these regulations is data. In particular, regulators seek greater emphasis on data quality, accuracy, granularity, full auditability (and lineage) of the data used, and, going forward, a central analytical data store.

Data is not only needed in greater volumes but also at much greater levels of granularity than ever before. In addition to regulatory requirements for better data transparency, the business side demands more information that depends on that data. Managing data efficiently is such a challenge that one survey estimated it can consume 7-10% of a bank’s operating income.1

Data became particularly relevant in the requirements established by the Basel Committee on Banking Supervision (BCBS) and published in 2013 (BCBS 239) in the document, Principles for Effective Risk Data Aggregation and Risk Reporting.2 These requirements consist of 14 principles, ranging from governance and accuracy to IT infrastructure and delivery, and are designed to help improve a bank’s ability to identify and manage bank-wide risks. The 14 principles are grouped into four sections:

  • Governance and Architecture.
  • Risk Data Aggregation.
  • Risk Reporting.
  • Supervisory Review.
     

While these principles apply only to global systemically important banks (G-SIBs), with an implementation date of January 1, 2016, national supervisors may also apply them to domestic systemically important banks (D-SIBs).

In addition to the BCBS requirements, the Financial Stability Board (FSB) Data Gaps initiative also requests that systemically important financial institutions (SIFIs) report additional data (large exposures, liquidity, and other balance sheet data). Regardless of the regulatory requirements, these principles provide sound guidance for all banks.
 

Table 1. An Overview Of Regulations And Data Requirements
Source: Moody's


The key takeaway is simple: to facilitate effective risk decisions, data must be turned into the right information and delivered to the right people in an understandable format.

 

Understanding analytical data

Analytical data is the data a bank uses in its regulatory and business risk reporting, and it is the subject of the BCBS principles. What comprises analytical data in a bank? Figure 1 illustrates the four main categories of analytical data — finance, assets and liabilities, risk, and capital planning.

A data project generally doesn’t fail because of technology — it fails because the business wasn’t able to define its data requirements. Defining these requirements, however, is no easy task. According to a whitepaper by the International Data Corporation (IDC), the global pool of data, or the “Global Datasphere,” is predicted to reach 175 zettabytes by 20253 — and with one zettabyte equaling nearly 1.1 trillion gigabytes, that’s an almost unimaginable amount of information to contend with.

Analytical data is different from the operational or transactional data that banks traditionally use in that it:

  • Is sourced from different types of systems — finance, assets, capital modeling, and risk systems — many of which are highly specialized and desktop-based. These systems in turn rely on data from core administration systems.
  • Requires a high degree of granularity to support multi-dimensional reporting (e.g., interactive dashboards).
  • Often has to be aggregated and consolidated (see the brief sketch after this list).
  • Must be readily available, accurate, and comprehensive (covering all risks) to support monthly, quarterly, and annual reporting cycles — as well as ad hoc and real-time analyses.
  • Is used primarily in regulatory, business, and financial reporting and to support risk and capital decision-making.
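
To make the granularity and aggregation points concrete, the short sketch below (in Python, with an illustrative table whose columns and values are assumptions rather than any specific bank's data model) shows loan-level records being rolled up along different reporting dimensions on demand.

```python
import pandas as pd

# Illustrative loan-level records; the column names and values are assumptions,
# not a real bank data model.
loans = pd.DataFrame([
    {"entity": "Bank A", "business_line": "Retail", "risk_type": "Credit",
     "currency": "EUR", "exposure": 1_200_000, "rwa": 540_000},
    {"entity": "Bank A", "business_line": "Retail", "risk_type": "Credit",
     "currency": "EUR", "exposure": 800_000, "rwa": 360_000},
    {"entity": "Bank A", "business_line": "Corporate", "risk_type": "Credit",
     "currency": "USD", "exposure": 5_000_000, "rwa": 3_100_000},
])

# Roll granular records up along reporting dimensions; the same detail can be
# re-aggregated by entity, business line, risk type, or currency as needed.
by_line = loans.groupby(["entity", "business_line"])[["exposure", "rwa"]].sum()
by_currency = loans.groupby("currency")[["exposure", "rwa"]].sum()

print(by_line)
print(by_currency)
```

The same granular records can be re-cut by any dimension without going back to the source systems, which is what multi-dimensional reporting relies on.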

 

Figure 1. The Four Main Categories Of Analytical Data
Source: Moody's


For the most part, analytical data is used to support monthly, quarterly, and annual reporting cycles. However, senior management is beginning to look for more real-time data, such as daily market risk dashboards and continuous solvency monitoring. A high level of granularity is crucial.

 

Identifying key problems in data and risk management

Increasingly, banks are realizing just how dependent they are on the quality of the data within their calculations. The fundamental problem is that banks hold extensive analytical data in multiple systems, each with its own data model, standards, and technology. This results in seven key problem areas:

  • Massive amounts of data. Banks have large amounts of different types of data stored in multiple, siloed systems based on entities, lines of business, risk types, etc. Many of these are aging legacy systems with no standardized data models or data sets. After identifying which systems contain analytical data, the next challenge is extracting, standardizing, and consolidating it.
  • Lack of a common data model and standards. Few banks have an enterprise-wide analytical data model, which makes standardizing data sets and aggregating data difficult.
  • Reliance on manual data processes. Banks employ workers to review and validate data and find “gaps.” This manual approach is slow, costly, and almost impossible to analyze and audit.
  • Low-quality data and audit trails. Banks struggle with the poor quality of data held in their core systems due to input errors, unchecked changes, and the age of the data. This limitation is compounded by the fact that banks often have multiple loan, credit card, asset, administration, and finance systems with no common data (or metadata) models. The lack of sound audit trails and lineage is an issue, particularly with legacy systems and manual processes.
  • Structured and unstructured data. A significant amount of data within a bank is unstructured (e.g., information that does not have predefined relationships), such as within portfolios or derivatives, and is not stored in existing centralized databases. Aggregating unstructured data and combining it with structured data is the key challenge.
  • Accurate counterparty data. A particular problem for banks is getting the deep, accurate, and granular counterparty data essential for credit risk modeling — classification, jurisdiction, entity type, etc. Complexity increases when guarantors and insurers sit between the layers. Counterparty data can also come from multiple sources.
  • Regulatory compliance. A plethora of regulatory initiatives focus on accurate and correct data at the right level of granularity with full audit trails. Banks need not only governance frameworks but also IT platforms that actually deliver and aid compliance.
     

Figure 2. Data management and governance framework
Source: Moody's


It’s also important that banks implement a data governance and IT architecture framework, as well as quality standards that not only meet the requirements of BCBS but also satisfy the needs of both internal and external auditors. This governance framework must be supported by an IT architecture that manages and automates the data management and reporting processes.
 

Building a data management and reporting governance framework

Technology is a key element in analytical data management and governance, particularly for quality control, auditability, and delivery of information. Figure 2 depicts one possible data management and governance framework using a number of integrated technology components.

  • Source Systems: All banks have multiple core banking systems (client, loan, credit, etc.), as well as specialist treasury, asset, finance, forecasting, and modeling systems from which analytical data needs to be extracted.
  • ETL Tools: Extract, Transform, and Load (ETL) tools extract data automatically from source systems, transform it into a common format, and load it into data quality tools or directly into a data repository (a minimal end-to-end sketch of this flow follows this list).
  • Data Profiling and Quality Tools: Data profiling tools automate the identification of problematic data before loading it into the repository, collect statistics about that data, and present the results in a usable, report-based format. Data quality tools automatically improve the quality of data based on logic, rules, and algorithms, supplemented by expert human analysis.
  • Analytical Data Repository: This repository is a relational database that stores analytical data in a structured and accessible format for querying and reporting. It typically consists of a staging area where “raw” data can be loaded before quality checking and validation and a results area where approved data can be locked down for reporting purposes. Organizations can have a single repository with multiple built-in datamarts or dedicated data repositories for each major type of data.
  • OLAP Cubes: OLAP cubes are multidimensional views (constructed by IT) on the data tables stored in the repository, which enable data to be loaded into reports and dashboards.
  • Reporting Engine: This engine is a technology that interrogates the repository in a structured manner based on OLAP cubes to physically produce and render reports, dashboards, and queries.
  • Enterprise-Wide Data Model: Effectively, this is a common map of all the analytical data elements an organization needs, and one that should be used by all risk systems. Without a common data model, no data standards can be imposed to aid user understanding, making ETL and aggregation much harder.
  • Workflow Engines: Data management and reporting tasks can be defined, documented, and then executed and controlled by a workflow engine.
  • Governance and Compliance Framework: This is a data management and reporting governance framework, supplemented with internal and external auditing practices.
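
As a rough illustration of how these components fit together, the sketch below (a minimal Python example; the table names, validation rules, and sample records are assumptions, not a reference implementation) walks a few raw records from extraction through quality checks into a staging area and, once approved, into a results area that the reporting layer queries.

```python
import sqlite3
import pandas as pd

# Stand-in for the analytical data repository (in-memory SQLite for illustration).
repo = sqlite3.connect(":memory:")

# 1. Extract: raw exposure records pulled from a hypothetical source system.
raw = pd.DataFrame([
    {"counterparty_id": "C001", "jurisdiction": "DE", "exposure": 1_500_000},
    {"counterparty_id": "C002", "jurisdiction": None, "exposure": 250_000},
    {"counterparty_id": "C003", "jurisdiction": "FR", "exposure": -10_000},
])

# 2. Load raw data into the staging area before any quality checks.
raw.to_sql("staging_exposures", repo, index=False, if_exists="replace")

# 3. Profile and validate: simple rule-based checks; failures are reported, not silently dropped.
issues = raw[raw["jurisdiction"].isna() | (raw["exposure"] < 0)]
approved = raw.drop(issues.index)
print(f"{len(issues)} record(s) failed validation:\n{issues}")

# 4. Promote approved records to the results area, where they are locked down for reporting.
approved.to_sql("results_exposures", repo, index=False, if_exists="replace")

# 5. The reporting layer queries the results area, never the raw staging data.
print(pd.read_sql("SELECT jurisdiction, SUM(exposure) AS exposure "
                  "FROM results_exposures GROUP BY jurisdiction", repo))
```

The separation of staging and results areas mirrors the "lock down for reporting" behavior described above: reports are only ever built from data that has passed validation.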
     

Improving data quality

A key element in the management of data is improving the quality of raw data held in the source and modeling systems. Figure 3 illustrates a detailed process to improve data quality.

Within the data quality process, there are two factors to consider:

First, data quality should not be regarded as a one-off process. New data is always emerging, so there has to be a process to monitor data quality on an ongoing basis. The documentation of this process should be automated as much as possible.
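
One way to keep such monitoring repeatable and auditable is to express the checks as a fixed rule set that runs against every new data load and writes its results to a log. The sketch below is a minimal illustration; the rule names, thresholds, and fields are assumptions.

```python
import datetime as dt
import pandas as pd

# Hypothetical recurring quality rules; each returns a boolean flag per record.
RULES = {
    "missing_counterparty": lambda df: df["counterparty_id"].isna(),
    "negative_exposure": lambda df: df["exposure"] < 0,
    "stale_valuation": lambda df: df["valuation_date"] < dt.date(2024, 1, 1),
}

def run_quality_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Apply every rule to a data load and return an auditable log of the run."""
    run_time = dt.datetime.now().isoformat(timespec="seconds")
    log = [
        {"run_time": run_time, "rule": name, "failures": int(flag(df).sum())}
        for name, flag in RULES.items()
    ]
    return pd.DataFrame(log)

# Illustrative batch of newly loaded records.
batch = pd.DataFrame([
    {"counterparty_id": "C001", "exposure": 1_000_000, "valuation_date": dt.date(2024, 6, 30)},
    {"counterparty_id": None, "exposure": -5_000, "valuation_date": dt.date(2023, 11, 30)},
])

# Each new load is checked with the same rules, and the log is retained for audit.
print(run_quality_checks(batch))
```

Because the log records the run time, rule, and failure count for every load, documentation of the process is produced automatically rather than assembled by hand.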

Second, data continues to diversify. Perhaps one of the most interesting recent developments is the ability to enrich validated data with extra data from external sources, such as sociographic or demographic data on customers or buying habits from supermarket chains. This widening of data types is particularly useful for enhancing the single view of a customer. It does, however, add a layer of complexity to the system.
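
As an illustration of this kind of enrichment, the sketch below joins validated internal records with a hypothetical external demographic feed; all identifiers and fields are assumptions.

```python
import pandas as pd

# Validated internal customer data and a hypothetical external demographic feed.
customers = pd.DataFrame([
    {"customer_id": "K100", "segment": "Retail", "total_balance": 24_000},
    {"customer_id": "K200", "segment": "SME", "total_balance": 310_000},
])

external = pd.DataFrame([
    {"customer_id": "K100", "age_band": "35-44", "region": "Urban"},
])

# A left join keeps every validated internal record; external attributes are added
# where available, so gaps in third-party coverage never remove internal data
# from the single customer view.
enriched = customers.merge(external, on="customer_id", how="left")
print(enriched)
```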

 

Eight critical success factors

Given the complexity of the data involved and the effort required to process it for reporting, where do banks go from here? The following is a list of eight critical success factors for implementing an effective data management framework to support risk management and compliance within a bank.

  • In relation to data, banks have to think of IT as a profit center rather than just as a cost center. Data is key not only for regulatory compliance and reporting but also for the business decision-making process.
  • Banks need to understand the value of analytical data to the organization. They should develop an enterprise data model and standardize data sets across the bank (a minimal illustration follows this list). This will break down silos and greatly help aggregation and analysis.
  • Ensuring the quality of analytical data is critical; without this, the accuracy of all generated risk and capital numbers becomes questionable.
  • Data quality is not a one-off exercise — it must be treated as an ongoing process, which is documented and reviewed regularly.
  • Most banks already have some components that could form the foundation of a data management framework. It’s important to leverage existing technologies as a starting point and then enhance them with new components as required.
  • Regulatory reporting is highly prescribed, but business reporting is less so. The latter is dependent on business practitioners to define precisely the information they need in reports and dashboards. Thus, the business must liaise closely with IT to define its reporting requirements in terms of information, drill-through capabilities, frequency, delivery, etc. IT can then consider the data, data structures, source systems, and gaps that need to be filled to meet those needs.
  • Banks should ensure there are capabilities and processes in place to meet ad hoc reporting requests from supervisors and demands during crises.
  • While spreadsheets remain an important element of analytical data, they need to be carefully managed and controlled.
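
As a minimal illustration of what a shared, enterprise-wide definition can look like in practice, the sketch below defines one agreed exposure record and checks that a hypothetical legacy-system record supplies every required field before it is standardized. The field names are assumptions, not a prescribed model.

```python
from dataclasses import dataclass, fields
from datetime import date

# Illustrative slice of an enterprise-wide data model: one agreed definition of an
# exposure record that every contributing system maps its own fields onto.
@dataclass(frozen=True)
class ExposureRecord:
    entity: str
    business_line: str
    counterparty_id: str
    currency: str            # ISO 4217 code
    exposure_amount: float   # in reporting currency units
    as_of_date: date

def conforms(raw: dict) -> bool:
    """Check that a source-system record supplies every field the shared model requires."""
    return all(f.name in raw for f in fields(ExposureRecord))

# A record from a hypothetical legacy loan system, already mapped to the shared field names.
legacy_record = {
    "entity": "Bank A", "business_line": "Retail", "counterparty_id": "C001",
    "currency": "EUR", "exposure_amount": 1_200_000.0, "as_of_date": date(2024, 6, 30),
}

if conforms(legacy_record):
    standardized = ExposureRecord(**legacy_record)
    print(standardized)
```

Each source system only needs a mapping onto these shared field names; the aggregation and reporting layers then work against one definition instead of many.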
     

Figure 3. Detailed Process For Improving Data Quality
Source: Moody's


Put your data to work

Building an effective data management framework for analytical data will enable banks to enhance their regulatory and business reporting and position them to scale flexibly as their data needs grow.

Simply put, banks need custom-built data solutions designed to both uncover existing patterns and create new ones. At Moody’s, we can help do that and more.

Sources:

1 American Banker, 9 Big Data Challenges Banks Face, Penny Crosman, August 2012.

2 Basel Committee on Banking Supervision, Principles for Effective Risk Data Aggregation and Risk Reporting, January 2013.

3 IDC, The Digitization of the World From Edge to Core, David Reinsel, John Gantz, & John Rydning, 2018. https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf


Learn more

Moody’s banking solutions

Bringing together data, experience, and best-practice capabilities with our specialized and agile intelligence, Moody’s banking solutions empower banks to make confident and efficient decisions, ultimately driving growth and meeting strategic goals.