Making risk data aggregation and data quality strategic priorities
As banking and supervision enter a period of heightened uncertainty, with accelerating digitalization and AI adoption, data quality has emerged as a cornerstone of effective risk management and supervisory decision-making. Supervisors rely on increasingly granular, timely data, while banks invest in AI-ready data capabilities and internalize data quality checks.
Yet institutions must contend with legacy-system constraints and mounting obligations, from BCBS 239 compliance to the ECB’s Next Level Supervision initiative.
At the same time, collaborative efforts such as the Joint Bank Reporting Committee’s (JBRC) semantic integration work promise harmonized definitions and integrated reporting, paving the way for a future where simplification and supervisory expectations can coexist.
In this fireside chat, Commerzbank and European Central Bank (ECB) experts unpack what it takes to strengthen data quality at scale and build a resilient, data-driven regulatory ecosystem.
"Better harmonization of the definitions used in the different reports would ease the life of banks and supervisors alike. There are still too many different sets of validation rules within the various NCAs, and also between the ECB and the EBA. Integrating these will simplify life for banks and, at the same time, increase data quality."
Giancarlo Pellizzari, Head of the Banking Supervision Data Division, European Central Bank (ECB)
"You need to have responsibilities for data in various parts of the bank. It’s essential that the people doing business understand the data, because everyone else in the data chain is working with it afterwards."
Nils Gerstengarbe, Director in Regulatory Reporting within Group Finance, Commerzbank