Tuesday, September 13, 2011

Loss Events: Prevention is Better Than Detection

With the inventory of potential loss events growing faster than banks' ability to remediate them, what can be done?

With operational losses in some institutions running at over 25% of profit, it’s not surprising that in the current financial environment banks are looking to reduce the operational failures that give rise to monetary losses and reputational issues. But the tools that banks are employing are designed to detect errors, not prevent them. The result is project slippage and an inadequate return on investment.

If there is anything that comes close to keeping the chief operating officer (COO) awake at night, it is the monthly ‘operational losses’ report. These reports detail - often gruesomely - the operational loss events that have come to light over the past month. The worrying aspect for many COOs is not just the magnitude of the individual losses, but also that each month a different part of the operating process is at fault.

Reading these reports is a ‘reality analysis’. These are real losses with a material impact on the bottom line. Some institutions are incurring annual loss events that are in excess of 25% of profit.

A further concern is that often the original error occurred a long time before the loss event was detected. In instruments with a long tenor, years can elapse between an error being made and the loss event being detected and recorded.

With such a dramatic impact on the cost/income ratio of the bank, it is not surprising that banks are investing in a complete analysis of their processing infrastructures to try to reduce both the incidence and materiality of these loss events.

Current Tools Were Not Designed to Reduce Loss Events

There is a lack of visibility into the internal processes of the bank, and often every internal control fails to detect that an error has been made which will ultimately lead to a loss event.

Organisations rely on bank reconciliations as a final detection point, but these systems, designed only to ensure that the general ledger is consistent with a bank statement, are not suitable for preventing the loss event from occurring in the first place.

Bank reconciliation software is designed to run at the very end of the process, and only after all the prior steps in the process are completed. This means the reconciliation department is only ever using hindsight, and hindsight, while a wonderful thing, can never give the insight required for immediate, proactive action.

Acting as a detection point at the end of a complex processing chain, reconciliation software is batch-based and lacks the capabilities needed to handle the complexities inherent in the overall process chain.
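
To make that limitation concrete, here is a minimal sketch in Python of what a batch reconciliation essentially does - match ledger entries to bank statement lines once everything upstream has finished. The record layouts and references are invented for illustration; real feeds carry far more detail. Anything this kind of check flags has, by definition, already happened.

    from decimal import Decimal

    # Hypothetical, simplified records: real ledger and statement feeds carry
    # many more fields (value date, currency, counterparty, nostro account...).
    ledger = [
        {"ref": "TRD-1001", "amount": Decimal("250000.00")},
        {"ref": "TRD-1002", "amount": Decimal("99500.00")},
    ]
    statement = [
        {"ref": "TRD-1001", "amount": Decimal("250000.00")},
        {"ref": "TRD-1002", "amount": Decimal("95500.00")},  # keying error made upstream
    ]

    def end_of_day_reconciliation(ledger, statement):
        """Match ledger entries to statement lines by reference and amount.

        Runs once, after every prior step has completed, so any break it
        finds has already become a potential loss event.
        """
        stmt_by_ref = {row["ref"]: row["amount"] for row in statement}
        breaks = []
        for entry in ledger:
            bank_amount = stmt_by_ref.get(entry["ref"])
            if bank_amount is None or bank_amount != entry["amount"]:
                breaks.append((entry["ref"], entry["amount"], bank_amount))
        return breaks

    for ref, book_amount, bank_amount in end_of_day_reconciliation(ledger, statement):
        print(f"Break on {ref}: ledger says {book_amount}, bank says {bank_amount}")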

Where Do the Loss Events Come From?

A decade or more ago, middleware vendors were selling their software to banks on the promise of delivering straight-through processing (STP) and ‘connecting islands of automation’. This was a legitimate - albeit tactical - response to the problem that banks then faced: each department had invested in its own IT solution, and as a trade moved from department to department it was frequently re-keyed into each solution. The use of middleware has glued the departments together so that re-keying has largely been removed. However, processing infrastructures remain silo-based, with each department mostly oblivious to what happens upstream or downstream of it.


It is common to imagine that a transaction moving between the front office, middle office and operations will only move between three systems. In reality, many systems are involved in even the most straightforward of financial products, and a spaghetti-like network of applications, processes, middleware, Excel spreadsheets and Access databases is used. Within this infrastructure there are known (and sometimes unknown) weaknesses that either fail to be addressed or require the best efforts of a middle office or back office clerk to remediate. On busy days, when the middle office guru is away or the leading operations clerk falls sick, these manual processes are not carried out correctly and errors are introduced.

These complex processing chains exist for each of the major product lines. A universal bank will have an equities chain, a fixed income chain, a derivatives chain, a foreign exchange (FX) chain and a traditional ‘bank’ chain. These chains do not operate in isolation, and hedging, accommodation and risk management strategies give rise to interactions between them.

In our experience, where two processing silos consume each other’s services, the opportunity for mistakes is significantly increased, particularly where the interactions have grown organically and inconsistently. For example, the way the fixed income chain interacts with the FX chain is different from the way the equities chain interacts with the FX chain. These differences are reflected in divergent manual processes and inconsistent system approaches that further increase the opportunity for loss events to take place.

Many banks are global, but different local offices will have access to different credit lines, brokerage fees and relationships. So, for example, the Frankfurt office will often take advantage of a London credit line by booking a trade through the London infrastructure and simultaneously ‘back-to-backing’ that same trade into the Frankfurt infrastructure. These inter-company trades are themselves hedged and accommodated across what is now a global process chain. So a process that is already complex within a silo is further complicated across silos within the same legal entity, and becomes more complicated still across geographically disparate entities. These complexities are addressed by yet more manual processes and system ‘gizmos’ (small utilities built to handle some part of the complexity).
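
As a simple illustration of one thing that has to stay consistent across such an inter-company chain, the hypothetical sketch below (Python, with invented field names and trade identifiers) checks that a back-to-back mirror booking actually offsets the original trade. In practice, checks like this tend to be scattered across manual steps and small ‘gizmos’ rather than applied systematically.

    from decimal import Decimal

    # Hypothetical inter-company pair: the Frankfurt office uses a London credit
    # line, so the trade is booked in London and mirrored ("back-to-backed")
    # into Frankfurt. Field names and identifiers are illustrative only.
    london_booking = {
        "trade_id": "LDN-20110913-042",
        "instrument": "EUR/USD FX forward",
        "notional": Decimal("10000000"),
        "direction": "BUY",
        "counterparty": "ExternalClient",
    }
    frankfurt_mirror = {
        "trade_id": "FRA-20110913-017",
        "instrument": "EUR/USD FX forward",
        "notional": Decimal("10000000"),
        "direction": "SELL",           # should be equal and opposite
        "counterparty": "LondonDesk",  # the inter-company leg
    }

    def back_to_back_consistent(external, mirror):
        """Check that the mirror booking exactly offsets the external booking."""
        opposite = {"BUY": "SELL", "SELL": "BUY"}
        return (
            external["instrument"] == mirror["instrument"]
            and external["notional"] == mirror["notional"]
            and opposite[external["direction"]] == mirror["direction"]
        )

    if not back_to_back_consistent(london_booking, frankfurt_mirror):
        print("Inter-company break: the two bookings do not offset each other")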

The real surprise is not that loss events take place; it is that they cost only 25% of profits. And if this complexity were not enough to handle, we now have to consider the effect of innovation.

Financial Product Innovation

If a trading organisation is to stay competitive it has to find new ways of serving its customers, many of whom are also demanding access to a wider and deeper range of products. Corporates used to hedge their FX risk only with FX forwards, but a more sophisticated corporate will now use a range of FX products, including cross-currency interest rate swaps, to manage its risk. Trading organisations have responded to these pressures by creating new financial products.

The existing infrastructure is unlikely to be able to process all aspects of these new products correctly, so the products tend to be shoe-horned through one or more processing chains. Many organisations trading warrants still push the warrant through a system designed only to process exchange-traded options. Quite often, further gizmos are built to help clerks process these instruments and to smooth out any processing subtleties. At first an innovative instrument is likely to be traded infrequently, but with innovation comes reward and volumes can grow rapidly.
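
One hypothetical way to picture the shoe-horning, sketched below in Python with invented field names: a warrant booked through an options-only system keeps just the fields that system understands, and everything warrant-specific has to be tracked manually or by a side utility.

    # Purely illustrative: a warrant squeezed into a record designed for
    # exchange-traded options. Field names are invented; the point is that
    # warrant-specific attributes have nowhere to live in the options system.
    warrant = {
        "underlying": "XYZ AG",
        "strike": 25.0,
        "expiry": "2013-06-21",
        "issuer": "Bank ABC",        # warrant-specific
        "conversion_ratio": 0.1,     # warrant-specific
    }

    OPTION_FIELDS = {"underlying", "strike", "expiry"}

    def shoehorn_into_option_system(warrant):
        """Keep only the fields the options system understands."""
        booked = {k: v for k, v in warrant.items() if k in OPTION_FIELDS}
        dropped = set(warrant) - OPTION_FIELDS
        # Whatever is dropped must be tracked manually or by a side 'gizmo'.
        return booked, dropped

    booked, dropped = shoehorn_into_option_system(warrant)
    print("Booked in the options system:", booked)
    print("Handled off-system:", sorted(dropped))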

The inadequacies in these shoe-horned, manually intensive processes are then truly exposed, and often a brake is put on the front office to limit how often an innovative, but highly profitable, instrument is traded. So not only do these complex infrastructures subtract from the bottom line through loss events, they also subtract from the top line by being too rigid to provide an environment in which innovation can be correctly controlled.

A New Approach

At the heart of these problems is a lack of certainty; transactions enter the infrastructure at the front end and they emerge at the back end, where a bank reconciliation platform ticks them off. Unaware of the complexities of the infrastructure, the bank reconciliation platform can only detect a small subset of the problems. It is akin to a quality control process checking a new car off the production line without knowing what the customer has ordered. It might have four wheels, a steering wheel and an engine, but if the customer has ordered it in blue and it is green, then despite the reconciliation tick we are going to have an unhappy customer. (Plus, repainting the car blue will require much more re-work now that the seats are fitted than if the error had been detected immediately).

What is needed is a real-time control system that constantly monitors the transaction on the financial services production line, making sure that at every step the transaction is correct: a control system built from the ground up to handle the complexities described earlier, that is real time rather than batch, and that is capable of ensuring that a bond trade booked by New York, accessing a London credit line and accommodating a client who wants to pay for the bond in euros, can be correctly managed across the entire process infrastructure. With this level of control comes visibility, proactive management of transactions and real-time certainty across the entire processing chain.
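
A rough sketch of the idea follows, in Python with invented identifiers and a deliberately simplified lifecycle; it is not a description of any particular product. The expected attributes of a transaction are captured when it is booked, and every subsequent event is validated against them the moment it arrives, rather than at the end of the chain.

    from decimal import Decimal

    # A minimal, illustrative sketch of per-step control: the expected shape of
    # each transaction is captured at booking, and every downstream event is
    # checked the moment it occurs instead of in an overnight batch.
    EXPECTED_STEPS = ["booked", "confirmed", "settled", "paid"]

    expected = {
        "TRD-2001": {
            "settlement_currency": "EUR",   # the client pays for the bond in euros
            "credit_line": "LONDON",        # a New York trade using a London line
            "amount": Decimal("5000000"),
            "next_step": 0,
        }
    }

    def on_event(trade_id, step, attributes):
        """Validate each lifecycle event as it happens."""
        exp = expected.get(trade_id)
        if exp is None:
            return f"ALERT {trade_id}: unknown transaction"
        if exp["next_step"] >= len(EXPECTED_STEPS) or step != EXPECTED_STEPS[exp["next_step"]]:
            return f"ALERT {trade_id}: step '{step}' is out of sequence"
        for key, value in attributes.items():
            if key in exp and exp[key] != value:
                return f"ALERT {trade_id}: {key} is {value}, expected {exp[key]}"
        exp["next_step"] += 1
        return f"OK {trade_id}: {step}"

    print(on_event("TRD-2001", "booked", {"credit_line": "LONDON"}))
    print(on_event("TRD-2001", "confirmed", {"settlement_currency": "USD"}))  # caught immediately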

Organisations need to move from control points being added as an afterthought to control being built into the process the moment an innovative financial product is created. Operational risk management of these products needs to be incorporated from the outset, and this can only happen if the tools available are fit for purpose.
