08 Sep CECL Issues – Unpacking the Biggest CECL Challenge
By Adam Mustafa, Invictus Group President and CEO
Lack of data is by far the biggest obstacle for banks as they begin figuring out how to implement the new current expected credit loss (CECL) accounting standard, which will go into effect for many public banks in 2020. Even the simplest of methods, such as the open pool method, require a certain amount of historical loan-level data.
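To make the data requirement concrete, here is a minimal sketch of the open pool (snapshot) idea: a lifetime loss rate observed on a historical cohort is applied to today's pool. All figures and function names are illustrative assumptions, not from any particular bank or model.

```python
# Hypothetical sketch of the open pool (snapshot) method.
# Figures below are invented for illustration only.

def open_pool_loss_rate(snapshot_balance, subsequent_net_charge_offs):
    """Lifetime loss rate: net charge-offs observed on a historical pool
    over its remaining life, divided by the pool balance at the snapshot."""
    return sum(subsequent_net_charge_offs) / snapshot_balance

# Historical cohort: a $50M pool at a past snapshot date, with net
# charge-offs recorded in each subsequent year of the pool's life.
rate = open_pool_loss_rate(50_000_000, [150_000, 220_000, 90_000, 40_000])

# Apply the historical lifetime rate to today's $62M pool.
expected_losses = rate * 62_000_000
```

Even this simplest approach presumes the bank can reconstruct a pool balance at a past date and trace charge-offs back to that cohort, which is exactly the loan-level history many banks lack.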
Banks face a litany of problems with respect to leaning on internal data for CECL, but here are the three most common obstacles:
- They don’t have enough historical data. Some banks have kept internal data for the last three or four years. Some can’t even amass that much data. Perhaps a bank did a core conversion and lost all of the data that was stored on the previous core. Maybe it has data that goes further back, but it’s incomplete and riddled with mistakes, so the management team isn’t confident in its use. Maybe the bank is relatively new to a certain type of lending, so no historical data exists. Whatever the reason, if a bank lacks history, it is very difficult to derive life-of-loan-loss estimates for CECL, irrespective of the methodology it deploys for a given pool.
- The bank’s loan loss history will not provide the data it needs. The last six or seven years have been relatively good for banks. So even if a bank has data that can go back that far, it may still struggle with calculating observed loss estimates to use as a starting point for CECL. This is a particular problem for banks that grew rapidly since 2008, and any de novos that began in 2006 or later. These banks made nearly all of their loans during our so-called good times. Not having many loan losses is supposed to be a good thing, but apparently not when it comes to CECL data needs. As the financial crisis showed, past performance is not indicative of future results. That is the very reason why the Financial Accounting Standards Board created the forward-looking CECL standard. It is both impractical and illogical to calculate CECL reserves from a small loss history. Absent a solution, the net result will be over-reliance on qualitative factors, and that may lead to an unnecessarily high reserve and unhappiness from your auditor.
- Banks are missing critical data elements. Many banks don’t have loan-to-value (LTV) ratios in their loan-level data, and the vast majority don’t have easy access to other critical metrics such as debt service coverage ratios (DSCRs) and credit scores.
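For reference, the two ratios named above are straightforward to compute once the underlying fields exist; the hard part is capturing those fields at origination. The field names and figures here are assumptions for illustration.

```python
# Standard definitions of the two credit metrics; inputs are illustrative.

def loan_to_value(loan_balance, collateral_value):
    """LTV: outstanding loan balance as a share of appraised collateral value."""
    return loan_balance / collateral_value

def debt_service_coverage(net_operating_income, annual_debt_service):
    """DSCR: borrower cash flow available to cover scheduled debt payments."""
    return net_operating_income / annual_debt_service

ltv = loan_to_value(800_000, 1_000_000)         # 0.80
dscr = debt_service_coverage(150_000, 120_000)  # 1.25
```

The formulas are trivial; the CECL problem is that appraised values and borrower financials often live in credit files rather than the core system, so the ratios cannot be pulled loan by loan.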
What Can Banks Do to Solve the Problem?
Start collecting the missing data points going forward. This won’t solve the entire problem, but it will give banks some usable history by the CECL implementation deadline. A deeper history strengthens a bank’s hand in supporting loan loss estimates, and capturing the missing data now is at least a start.
Begin looking for external data. The right external data can not only fill gaps, but also strengthen the reliability of internal data.
CECL will have winners and losers. Winners will preserve precious capital by making sure their CECL reserve is optimized. The losers will undoubtedly wind up burning shareholder value because of excessively high reserves driven by overreliance on qualitative factors. What will separate the winners from the losers will be the quality and quantity of data they use to support their conclusions.
I have discussed CECL with hundreds of bankers and only a handful can say with confidence that they have enough internal loan-level data to calculate their CECL reserves.
What can banks do about these internal data challenges? The truth is there is no panacea. However, more data is always better than less. The banks that have the best data and analytics will end up with a competitive advantage. Banks that approach CECL seriously will recognize they cannot just rely on their internal data and will take steps now to find external data to enhance their CECL process.
Editor’s Note: To address the data problem, Invictus will soon be inviting banks to participate in the BankGenome™ Project, a data-sharing cooperative.