Simpler, Faster, and More Efficient Operations in Financial Services



15 August 2019 By Thorsten Brackert, Gregor Gossy, Lukas Haider, Andreas Keller, and Reinhard Messenböck

Financial industry leaders know that great performance is built on consistent operational excellence, keeping a tight rein on costs, and (increasingly) using technology to work more efficiently and to deliver smarter, faster services. The biggest challenge for most, however, is execution. Many have found a yawning gap between operational ambition and reality.

Operational underperformance plays into the hands of digitally native companies such as fintechs, many of which are targeting financial industry revenue pools in areas including payments and trade finance. These companies are usually operationally efficient by design, running highly automated business models that score big when it comes to customer satisfaction. Many incumbents, by contrast, are saddled with a chaotic legacy of operational processes and are marooned at the bottom of customer satisfaction rankings.

Financial institutions urgently need to change the way they play. That means addressing operational weaknesses and, by extension, transforming the customer experience. We believe firms that simplify operations and systematically automate—a process we call zero operations, or zero-ops (modeled on zero-based budgeting)—can reduce costs by as much as 80% and increase revenues by enabling all-around superior service.

Three Steps to an 80% Decline in Costs

Given their complex business models, legacy IT, and entrenched ways of working, financial institutions cannot easily reinvent themselves as fintechs or large tech firms. But they can do something: take a more practical approach to cost cutting and improving the customer experience. This means imposing order on the inherently diverse operational landscape and being systematic about implementing change.

We define zero-ops as an idealized target state in which all operational work is either removed or automated. The concept has its roots in the clean-slate mindset encapsulated in zero-based budgeting. It starts from the idea that all operational work can be automated and requires that manual processes or process steps be reassessed and justified against that target.

By implementing zero-ops, financial institutions can cut costs by as much as 80% and put themselves in a much better place to play to their potential. (See Exhibit 1.) The reason? Zero-ops leads to much lower error rates and faster execution, and it enables the seamless digital service that customers increasingly expect. The streamlined processes that zero-ops produces, meanwhile, create cleaner data trails and therefore more complete, consistent, and reliable records. These can be used to improve everything from product offerings to compliance and reporting. Finally, getting operations right can boost job satisfaction and empower employees to deliver better, especially in the front office.

In practice, zero-ops comprises three steps:

  • The first is to simplify, which means identifying and removing operational inefficiencies and redundancies and standardizing processes as much as possible.
  • Next, firms should introduce progressive smart processing, which means automating tasks and processes where doing so delivers the most value. This often involves combining the capabilities of employees and machines.
  • Finally, institutions can move standardized processes that are operated at scale to full automation, albeit supported by human programmers and remediation teams. Processes that are not scaled, such as those relating to products in development, are unlikely to be suitable until they are rolled out across the organization. Still, in an ideal world, almost the entire operational portfolio would eventually be automated.

Simplify Across the Board

Overlaps and redundancies often build up over time. For example, the same data might undergo multiple unnecessary checks, or new regulations may be applied inconsistently across geographies, leaving superseded processes in place. Redundancies may also be located in legacy products held by a minority of customers (which carry a high cost to serve) or services that have been digitized but still sit in operations (because not all channels are using the digital service consistently). A common problem is that the digital, branch, and call center functions handle the same query but produce distinct and sometimes conflicting records.

Institutions need to address these bottlenecks and aim for simplicity in process design, data management, organization, and execution. It makes sense as a first step to run diagnostic tests, designed to quantify the value attached to each task and process. The analysis should be delineated by customer segment and show how individual steps contribute to customer journeys. This should help firms cut through interdependencies and provide a clear link to value delivery.

Until recently, diagnostic testing was a time-consuming and complex process. However, advanced analytics have changed the game, making it much easier to identify and categorize individual process steps, applications, and interfaces. These can be used to draw up product and process maps, activity-based schedules, and application landscapes, which collectively provide insight into:

  • Products and their variations
  • Steps common to multiple processes
  • Flow control and transfers within processes
  • Duplicated information across steps
  • Patterns across processes

Simplification should be about both removing sources of inefficiency and building accountability to keep inefficiencies from creeping back in. The approach, properly implemented, should enable executives to identify high- and low-value processes that can be assessed and prioritized accordingly. This can reasonably be achieved over a period of months. The eventual automation step will be focused on a streamlined set of tasks and processes.

From a leadership perspective, a good point of reference is a minimum viable product (MVP) mindset, which means focusing on the minimum required to deliver products and services. In addition, simplification should not be seen as a one-off, or even periodic, exercise. Instead, it should be set up as a continuous way of working that targets the whole cost base.

Finally, simplification also needs to encompass cultural change. Firms should embrace an agile mindset, focusing on continuous improvement and obtaining broad-based buy-in. Senior managers must lead the change, working to respond to concerns, incorporate feedback, and encourage engagement.

Effectively implemented, simplification by itself can catalyze operating-cost reductions of as much as 30%. In one real-world example, a global bank aimed to cut costs by 20% to 40% over two years. The bank started with an assessment and then launched a fast-track simplification program. Early savings were then applied to fund the next stage of the journey. In a second phase, the bank drafted a three- to four-year roadmap, which aimed to achieve a further 50% reduction in costs.

Selectively Introduce Smart Processing

The simplification exercise should have resulted in a much-reduced operational portfolio. The next stage is to selectively introduce smart processing, which can shave another 30% from the original cost base (and more on a cumulative basis).

Smart processing means injecting new technologies but not reengineering the underlying infrastructure. Firms therefore require a toolkit that can bridge the gap between analog and digital, particularly for standardized or high-demand processes. This should include artificial-intelligence solutions such as image recognition, optical character recognition (OCR), natural-language processing, voice recognition, robotics, and machine learning. AI applications offer both higher processing speeds and greater number-crunching capacity, enabling financial institutions to interrogate much larger data sets. The impact is likely to be felt in accelerated fulfillment, lower exception rates, a broader sales funnel, and a friendlier customer proposition.

There is no simple one-size-fits-all approach to selective smart processing. Instead, leaders must review individual activities, judging the feasibility of smart processing on a case-by-case basis. In general, however, smart processing is particularly useful when combined across end-to-end processes, where it can have a compounding effect on efficiency. For example, when a customer applying for credit sends an income statement via email, natural-language processing can identify the context and connect the email to the credit application. OCR software can then translate the PDF image content into text and extract the data. A machine-learning algorithm will confirm the credit approval and set a risk-adjusted rate based on the applicant’s income. In another example, smart processing can be used to encode rule changes arising from regulation and to flag contraventions that can be picked up by compliance and operations teams for review.
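The credit-application flow described above can be sketched end to end. This is a minimal, hypothetical illustration: `classify_intent`, `extract_income`, and `price_credit` are stand-ins for the natural-language-processing, OCR, and machine-learning components a real institution would deploy, and the pricing rule is purely illustrative.

```python
# Hypothetical sketch of the end-to-end smart-processing flow:
# route the email, extract income from the document, price the credit.

def classify_intent(email_text: str) -> str:
    """Stand-in for NLP intent detection: route by keyword."""
    if "income statement" in email_text.lower():
        return "credit_application"
    return "other"

def extract_income(document_text: str) -> float:
    """Stand-in for OCR plus field extraction: pull the income figure."""
    for line in document_text.splitlines():
        if line.lower().startswith("annual income:"):
            return float(line.split(":")[1].strip())
    raise ValueError("income field not found; route to manual review")

def price_credit(income: float, amount: float) -> float:
    """Stand-in for an ML pricing model: simple risk-adjusted rate."""
    ratio = amount / income
    return 0.04 + 0.02 * min(ratio, 3.0)  # base rate plus risk premium

email = "Please find attached my income statement for my loan request."
document = "Annual income: 60000"

if classify_intent(email) == "credit_application":
    income = extract_income(document)
    rate = price_credit(income, amount=30000)
```

The compounding effect comes from chaining the steps: each automated handoff removes a manual touchpoint, and an exception at any step (for example, a missing income field) drops the case into a human remediation queue rather than blocking the whole flow.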

Some financial institutions have already made progress. A universal bank in Asia-Pacific, for example, applied a zero-ops approach to transform its mortgage business. A review of the business’s process framework revealed a large amount of operations activity taking place in the front office and cannibalizing customer-facing activities. A diagnostic initiative highlighted the possibility of a 10% to 20% productivity gain through simplifying front-office processes. A further 40% to 60% gain was targeted in a second phase, which comprised smart processing and automation of individual steps, combined with organization and system consolidation.

Move Iteratively to Full Automation

Innovations in smart processing should be regarded as intermediate steps. The real target end state for zero-ops is a fully automated operational framework. Of course, this is unlikely to happen all at once. The reality is likely to be steady migration over time. However, the task should not be underestimated—a move to full automation requires radical recalibration of the operating model and rebuilding of the supporting infrastructure. Its standalone impact, however, can be a 20% reduction in the original cost base. The real impact at this stage in the process (having already simplified and selectively automated) would be closer to 50% of the remaining costs.

Full automation is achieved when the regular execution of a process requires no human intervention or oversight in the normal course of operations. The flip side is that the “field of vision” of operations teams expands. For example, when the wrong data is submitted, operational personnel and UX designers should address it together: they correct the error and fix the systemic issue that caused it. Cross-disciplinary teams, meanwhile, should aim for continuous improvements, representing the interests of the business, the customer, and the technology and operations functions. The teams should aim to deploy solutions in days rather than weeks or months. The target end state is that ops specialists no longer focus on operations per se but instead are part of a system dedicated to continuous improvement.

For complex processes, few institutions have made the transition—but full automation is within their reach. In the case of know-your-customer (KYC) processes, for example, it is possible to automate a regular update cycle for customer documents (assuming there are no queries). Copies of customer documents can be collected and stored digitally. OCR and language-processing tools can then be applied to extract key fields and check each document against requirements. If more information is required, the software can send a notification to the customer’s smartphone. Using the smartphone camera, the customer can upload the material in minutes.
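The automated KYC update cycle above reduces to a simple rule: refresh the record when all required fields are present, and contact the customer only on exception. The sketch below assumes the OCR and language-processing extraction has already run; the field names are hypothetical.

```python
# Hypothetical sketch of the automated KYC document check: compare
# extracted fields against requirements, notify customer only if needed.

REQUIRED_FIELDS = {"full_name", "date_of_birth", "address", "id_number"}

def missing_fields(extracted: dict) -> set:
    """Return required fields absent or empty in the extracted document."""
    present = {field for field, value in extracted.items() if value}
    return REQUIRED_FIELDS - present

def next_action(extracted: dict) -> str:
    """Fully automated happy path; human or customer contact on exception."""
    missing = missing_fields(extracted)
    if not missing:
        return "refresh KYC record; no customer contact needed"
    return f"push smartphone notification requesting: {sorted(missing)}"
```

In the happy path no one touches the case at all; the smartphone upload described above simply feeds a new document back into the same check.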

These types of transitions are already happening. One European direct bank, for example, found that its operating model and at-scale activities could be improved by standardization. The bank used this approach to imagine a target state, redefine its processes, and identify the right initiatives. A key assumption was that it would be possible to increase the standardization of third-party interfaces. Each manual subprocess, meanwhile, was reevaluated on the basis of currently available or emerging technology. The evaluation led to a series of initiatives aimed at replacing manual with automated subprocesses. The result was a recalibrated operating model in which manual tasks and processes were necessary only for a few special cases that were not scaled and for exception handling. The approach led to the identification of efficiency gains of 40% to 60%.

Getting There: A Scale and Standardization Matrix

Some tasks and processes are likely to be more amenable to smart processing or full automation than others. Early-stage products and services, for example, probably need to become established before automation is contemplated.

A useful tool in segmenting operational activities in terms of their automation potential is a four-quadrant matrix of standardization and scale. (See Exhibit 2.) This allows firms to choose whether to move to smart processing or full automation based on the degree of standardization of a process and the scale of its implementation across the institution:

  • Low Standardization, Low Scale. A few operations activities are resistant to automation because of the significant level of human input required—for example, processes related to legacy products that are being phased out, highly manual processes such as cash handling, and processes related to new products that are still in development. In the case of new products, firms may prefer to hold off on automation until they are certain the product is successful. These products may also require manual support given their experimental nature and changing requirements. However, firms should beware of prototypes that never graduate to full automation and never achieve scale or standardization. They should define rules for whether a prototype should be discontinued or standardized and scaled.
  • Low Standardization, High Scale. In this scenario, products and services are in high demand but a large number of operational resources are performing similar tasks and individualizing delivery. The imperative should be to increase standardization and then deploy smart processing to automate individual steps. For example, customer identification may be a common requirement across products and so can be standardized, while other processes are specific to the product and are likely to remain manual.
  • High Standardization, Low Scale. Some activities are amenable to smart processing because they contain routine or standardized elements but are still performed manually. In this context, smart processing can enable scaling. One example is the processing required to reverse credit card transactions, which is usually handled by a call center. Financial institutions could use smart processing and robotics to move the process onto a mobile app, cutting costs, boosting customer convenience, and enabling a fully scaled solution.
  • High Standardization, High Scale. This segment comprises high-volume, standardized tasks and processes that are operated at scale and are therefore suitable for full automation. Payments are a prime example, and most institutions have traveled a long way down the road of automating them. However, other processes, such as account opening, are equally amenable but currently less automated.

The matrix provides a systematic lens through which to assess operational tasks and processes and to plan remediation steps. Of course, real-world operations may be more granular than four quadrants suggest, so financial institutions must also make subjective judgments based on their strategic priorities.
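The quadrant logic can be expressed as a simple routing rule. The thresholds below are illustrative assumptions; each institution would calibrate its own scoring of standardization and scale.

```python
# Sketch of the scale-and-standardization matrix as a routing rule.
# Scores and the 0.5 cutoffs are illustrative, not prescriptive.

def recommend(standardization: float, scale: float) -> str:
    """Map a process's standardization and scale scores (0 to 1)
    to a zero-ops treatment, following the four quadrants."""
    high_std = standardization >= 0.5
    high_scale = scale >= 0.5
    if high_std and high_scale:
        return "full automation"
    if high_std:
        return "smart processing to enable scaling"
    if high_scale:
        return "standardize first, then automate individual steps"
    return "keep manual; set rules for discontinuing or scaling"
```

Running every process in the portfolio through such a rule turns the matrix from a one-off workshop exercise into a repeatable prioritization step.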




As fintechs and large tech companies pick away at the value chain, financial companies need a sea change in the way they approach operations. It is no longer sufficient to introduce technology at the margins, choosing some parts of the business to automate and leaving others in analog. Firms need a more systematic approach, in which marginal change is acceptable only as a stepping stone to a more fundamental transition. The task is complex but the prize is operational transformation that can push the boundaries of the customer experience and deliver on the bottom line.
