Austin Trippensee, Oracle

Ready or Not: Time to Prepare for New Approach to Liquidity Risk

Banks will be required to have comprehensive liquidity risk management programs and must be able to clearly identify and assess enterprise-wide liquidity risk under normal and extreme market conditions -- as well as develop strategies to effectively bridge liquidity gaps.

The financial services industry stands on the threshold of a new era for liquidity risk -- driven largely by the bellwether Dodd-Frank Act in the United States and the adoption of Basel III in Europe.

Requirements for liquidity risk are becoming more prescriptive as regulators shift the industry from a principles-based to a rules-based approach to liquidity management. Regulators are already establishing well-defined rules and specific liquidity ratios based on definitive investment and lending conditions.

While not fully defined, many in the industry expect that Dodd-Frank requirements for liquidity risk will align with those under Basel III, which require banks to set minimum liquidity based on a stress test using standardized calculations. Basel III introduces a Liquidity Coverage Ratio (LCR) that requires banks to maintain a stock of "high-quality liquid assets" sufficient to cover net cash outflows for a 30-day period under a stress scenario. In addition, the agreement establishes a Net Stable Funding Ratio (NSFR), which measures an institution's available stable funding against the stable funding it requires over a one-year horizon.
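At their core, both ratios are simple quotients with a 100% regulatory floor. The sketch below computes simplified versions of each; all balance-sheet figures are hypothetical, and the formulas omit the detailed haircuts and runoff weightings the Basel III standards prescribe.

```python
# Illustrative only: hypothetical figures, simplified formulas without
# the Basel III haircuts and weighting factors.

def lcr(hqla: float, net_outflows_30d: float) -> float:
    """Liquidity Coverage Ratio: stock of high-quality liquid assets
    divided by total net cash outflows over a 30-day stress horizon."""
    return hqla / net_outflows_30d

def nsfr(available_stable_funding: float, required_stable_funding: float) -> float:
    """Net Stable Funding Ratio: available stable funding divided by
    required stable funding over a one-year horizon."""
    return available_stable_funding / required_stable_funding

# Hypothetical bank: $120M in HQLA against $100M of projected 30-day
# stressed outflows, and $550M of available vs. $500M of required stable funding.
print(f"LCR:  {lcr(120e6, 100e6):.0%}")   # 120% -- above the 100% floor
print(f"NSFR: {nsfr(550e6, 500e6):.0%}")  # 110% -- above the 100% floor
```

In practice each input is itself an aggregation: HQLA is a weighted sum of asset classes after haircuts, and outflows are projected per funding category under prescribed runoff rates.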

Moving forward, financial institutions will be required to have comprehensive liquidity risk management programs. To facilitate compliance and promote sound operating practices, they will need to clearly identify and assess enterprise-wide liquidity risk under both normal and extreme market conditions, and develop strategies to effectively bridge liquidity gaps.

While the specifics of new liquidity requirements are still under debate, banks cannot afford to sit idly on the sidelines. Those that begin to lay an agile foundation will ease compliance when the time comes and gain a competitive advantage in the market.

Where do they start?

Begin to realign. Over the past few years, banks have faced the reality that there is a non-severable link between risk and profitability -- two sides of the same coin. As we have seen many times in recent history, miscalculating risk -- liquidity or otherwise -- can lead to significant financial loss. Better alignment between a bank's risk and financial functions is an important step toward compliance.

Closer alignment can also help drive improved performance, as documented in a recent study conducted by the Economist Intelligence Unit (EIU) and sponsored by Oracle Financial Services. The global survey, "Transforming the CFO Role in Financial Institutions: Towards Better Alignment of Risk, Finance and Performance Management," drew on responses from nearly 200 senior banking and risk executives at financial institutions, along with in-depth interviews with 16 finance and risk executives, corporate leaders and other experts. It found that financial institutions scoring themselves highly on their ability to align risk and finance functions appear to be doing better financially than their peers.

In some cases, organizations may decide to realign reporting duties to have risk and finance executives reporting to the same C-level entity. Cross-functional teams will also be increasingly important, especially as stress tests become more complex and comprehensive. For example, regulators may ask a financial institution to assess the impact of a particular scenario not just on risk-weighted assets but also on liquidity risk and profitability -- which are squarely in the finance domain. Today, in many organizations, the resources needed to fulfill regulatory requirements might reside in separate teams that do not regularly collaborate. Moving forward, some organizations may want to have some individuals who span both the risk and finance teams to facilitate better communication and collaboration.

Prepare for Stress Testing. Stress testing in this environment is here to stay and will go far beyond the standardized, sweeping industry-wide tests ordered in the wake of the financial crisis. Instead, the industry will see a move to more frequent institution-specific stress testing -- including stress testing for liquidity -- conducted as a follow-up to regularly scheduled periodic reports. Regulators will expect the results of these tests quickly, often in just a few days, on the assumption that banks have an established risk data taxonomy in place.

With thousands of possible simulated events that could lead to portfolio failure, asset and liability management requires a root-and-branch transformation of how both assets and liabilities are allocated, so that liquidity ratios can be calculated not just on a weekly, monthly or quarterly basis but daily -- and, for banks with complex trading positions, intraday.

In the new reality, stress testing, therefore, has to be baked into financial institutions' standard operating procedures. They must be equipped with a risk management infrastructure that enables new levels of agility in setting up, running, analyzing and reporting on risk scenarios -- including those for liquidity risk.

Build an adaptive infrastructure. Financial institutions will have to supply significantly more data, delivered to an expanded universe of regulatory authorities with greater frequency than ever before. They must, therefore, build a comprehensive risk management infrastructure that provides visibility across the entire enterprise and delivers insight when and where it is needed. Aggregating all of an organization's risk-related data in a single environment is an essential first step. As the recent financial downturn showed, the ability to connect developments and indicators in one part of the business with those in other parts is essential to determining an organization's true strengths and vulnerabilities.
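The value of a single aggregation environment is easiest to see in miniature: exposures that look modest within each business line can add up to a concentration no individual feed reveals. The sketch below merges hypothetical per-business-line exposure feeds and flags counterparties above an (equally hypothetical) 25% concentration threshold.

```python
# Sketch of enterprise-wide risk-data aggregation: merge per-business-line
# exposure feeds into one view and surface concentrations that no single
# feed shows on its own. Feed contents and the 25% threshold are hypothetical.
from collections import defaultdict

# Hypothetical feeds from separate business lines (counterparty -> exposure)
feeds = {
    "retail_banking": {"CountryX_sovereign": 10e6, "BankA": 5e6},
    "trading_desk":   {"BankA": 30e6, "CorpB": 12e6},
    "treasury":       {"CountryX_sovereign": 20e6, "BankA": 8e6},
}

# Aggregate exposures across the enterprise
total_by_counterparty = defaultdict(float)
for line, positions in feeds.items():
    for counterparty, exposure in positions.items():
        total_by_counterparty[counterparty] += exposure

grand_total = sum(total_by_counterparty.values())
for counterparty, exposure in sorted(total_by_counterparty.items(), key=lambda kv: -kv[1]):
    share = exposure / grand_total
    note = "  <-- concentration" if share > 0.25 else ""
    print(f"{counterparty:20s} {exposure / 1e6:6.1f}M  ({share:.0%}){note}")
```

Here no single desk holds more than $30M against BankA, yet the enterprise-wide total is $43M -- exactly the kind of cross-business signal a fragmented data landscape hides.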

New requirements for liquidity risk are on their way. While awaiting final details, forward-thinking financial institutions can take several important steps today to lay a foundation for smooth compliance. Just as important, these same steps and investments will significantly increase operational visibility, yielding immediate as well as long-term benefits.

Austin Trippensee is Director, Financial Services Risk Applications, at Oracle.
