When Resilience Is More Important Than Efficiency

By Martin Reeves and Raj Varadarajan

The recipe for streamlining an enterprise is familiar: benchmark costs against those of competitors, set cost reduction targets in each area to reach par (adjusted for scale and scope), and implement. Or, even more simply, set and pursue the cost reduction targets required to increase profitability to desired levels. It seems like a matter of simple arithmetic and an infallible recipe for increasing profitability, but this is not necessarily so.

Take the example of a global airline that was less profitable than its competitors. The reasonable approach, it seemed, was to increase the utilization of each of the most important components of cost—pilots, planes, and flight attendants—thus reducing resource intensity to industry benchmark levels. Benchmarking seemed to reinforce this logic, given that costs for these items indeed exceeded competitors’ costs.

However, closer inspection revealed that the entire system was highly interconnected. And we know that complexity grows as systems become larger and more connected, and that when that happens hidden costs generally soar beyond those that can be explicitly planned.

In our example, delayed pilots, flight attendants, or planes each had the potential to trigger a chain of delays throughout the system. (See Exhibit 1.) To be prepared to counter these cascades, the airline maintained extra resources (spare planes, reserve pilots and flight attendants, extra gate agents and maintenance staff, spare gates, and so forth). Delay-inducing perturbations were seen as exogenous, uncontrollable factors, and the spare resources were regarded as just a “cost of doing business.” Removing the buffers would have reduced planned costs and thereby increased efficiency—but it also would have amplified interdependence and fragility and ultimately made matters worse.
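
The dynamics are easy to illustrate. The sketch below simulates one aircraft’s daily rotation with purely illustrative numbers (none of the parameters are drawn from the airline’s actual data): every minute of turnaround slack looks like waste in the plan, yet removing it allows each disruption to ripple into the flights that follow.

```python
import random

def simulate_day(num_flights=12, block_time=90, turnaround=35,
                 slack=0, disruption_prob=0.4, max_disruption=60, seed=1):
    """Propagate delays along one aircraft's daily rotation (all times in minutes).

    Flights are scheduled block_time + turnaround + slack apart. A flight cannot
    depart before its scheduled time or before the aircraft has arrived from the
    previous flight and been turned around; random disruptions add arrival delay.
    """
    rng = random.Random(seed)
    ready, total_delay = 0, 0
    for i in range(num_flights):
        scheduled_dep = i * (block_time + turnaround + slack)
        actual_dep = max(scheduled_dep, ready)
        total_delay += actual_dep - scheduled_dep
        disruption = rng.randint(5, max_disruption) if rng.random() < disruption_prob else 0
        ready = actual_dep + block_time + disruption + turnaround
    return total_delay

for slack in (0, 10, 20):
    print(f"{slack:2d} min of 'wasted' slack per turnaround -> "
          f"{simulate_day(slack=slack)} min of accumulated departure delay")
```

Because delays can be absorbed only by slack, cutting the buffer can never reduce the accumulated delay in this sketch, and it typically increases it sharply.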

A better solution was to fundamentally reshape the system itself, reducing complexity and interdependence by keeping ensembles of pilots, planes, and flight attendants together. While this seemed, on paper, less efficient than reducing each resource to the optimal level, it led to greater resilience against delays and their ramifications and thus improved overall cost-effectiveness.

The assumption that “optimal is operable” (that what is optimal on paper will perform well in operation) is likely made every day in many industries. It rests on a number of apparently reasonable assumptions that aren’t always right:

  • That a system can be understood by looking at its parts
  • That optimizing parts will result in optimizing the whole
  • That dynamic behavior of the system is a given, a constraint to be “lived with”

Such oversights are understandable. Financial accounting focuses on cumulative revenues and costs, and there are no standard methods or metrics for measuring resilience or complexity. And the Taylorist approach that underpins mainstream management thinking begins by decomposing complex tasks into simpler ones and optimizing and managing each one independently.

However, when the number of interconnections is high and when there is volatility in supply or demand, a more dynamic and systemic view of the enterprise is called for. Under these circumstances, the behavior of the overall system is unlikely to be reflected in an analysis of the parts, especially a static analysis. Local perturbations are likely to have unpredictable nonlocal effects. One of the impacts of digitization is that companies have become more interconnected and that fluctuations are transmitted instantaneously, which means that the boundary of the system to be considered needs to be expanded beyond the individual enterprise.

Many managers will be familiar with the idea of systems thinking, but what are the practicalities of applying a systems perspective to organizational effectiveness? While the behaviors and remedies for each system are unique, a number of common principles can be employed.

Determine if a systems approach is necessary. A systems approach is less straightforward than a traditional static analysis and should therefore be deployed only where beneficial. If the system comprises many interacting parts and is exposed to a high degree of fluctuation in supply or demand conditions, then a systems approach may be necessary. High fluctuations in stocks or flows, or instabilities cascading across the system, are also indicative symptoms. The impact will be most severe when a resource with high inertia, such as a physical factory, is exposed to rapid fluctuation. Such situations are often found at digital-physical interfaces. These circumstances clearly apply to our airline example, in which there were fluctuations in resource readiness, a set of resources with high inertia, and avalanches of delays.
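
As a rough illustration of such a symptom check, the sketch below computes two simple indicators on hypothetical data: the coefficient of variation of a flow as a fluctuation measure, and the share of delay minutes tagged as reactionary (caused by an earlier delay rather than an exogenous event) as a cascade measure. The data and any thresholds one might apply to them are assumptions for illustration only.

```python
from statistics import mean, stdev

def fluctuation(series):
    """Coefficient of variation: a rough measure of fluctuation in a stock or flow."""
    return stdev(series) / mean(series)

def reactionary_share(delays):
    """Share of delay minutes tagged as reactionary (caused by an earlier delay
    rather than an exogenous root cause), a symptom of instability cascading
    through the system. `delays` is a list of (minutes, is_reactionary) pairs."""
    total = sum(minutes for minutes, _ in delays)
    reactionary = sum(minutes for minutes, is_reactionary in delays if is_reactionary)
    return reactionary / total if total else 0.0

# Hypothetical data: daily departures at one hub, and one day's delay log.
daily_departures = [310, 280, 350, 240, 400, 260, 330]
delay_log = [(25, False), (18, True), (40, True), (12, False), (30, True)]

print(f"demand fluctuation (coefficient of variation): {fluctuation(daily_departures):.2f}")
print(f"reactionary share of delay minutes: {reactionary_share(delay_log):.0%}")
```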

Consider dynamic, nonlinear effects. While the “physics” of a system may look simple and linear, the associated human dynamics may be far from linear, which may force a systems approach. Change management, for example, needs to factor in fluctuating attitudes, cascading beliefs, resistance to change, and other factors. Years of working within a traditional organizational paradigm build behaviors focused on minimizing costs in a particular silo, regardless of downstream knock-on effects, which often cost more than the local savings.

Observe the system’s behaviors, including human behaviors, and identify the ones that you need to reshape. For example, you may want to minimize use of the most expensive or least flexible resource, and to do so, you may need to eliminate fluctuations. In our example, the problem was cascading delays and the high (sometimes invisible) cost and difficulty of buffering those delays with expensive resources.

Map and understand the system as a first step in redesigning it. Create a map to identify inputs, key resources, linkages, and positive and negative feedback loops. In our airline example, flights in and out of one hub were observed in order to understand how delays propagated and how that gave rise to higher use of expensive buffer resources than would be called for in a much simpler model.
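
Such a map can start as nothing more than a directed graph of what affects what. The sketch below uses a hypothetical, heavily simplified structure (the node names and links are illustrative, not the airline’s actual map) and searches it for feedback loops.

```python
# A toy system map: nodes are resources, flows, and outcomes; an edge A -> B
# means "A affects B". The structure is a hypothetical simplification, not the
# airline's actual map.
system_map = {
    "inbound delay":          ["aircraft availability", "crew availability"],
    "aircraft availability":  ["departure delay"],
    "crew availability":      ["departure delay"],
    "departure delay":        ["inbound delay", "buffer usage"],   # closes the loop
    "buffer usage":           ["operating cost"],
    "operating cost":         [],
}

def feedback_loops(graph):
    """Return the sets of nodes that form feedback loops, found by depth-first search."""
    loops = set()
    def walk(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:                      # we have come back to an earlier node
                loops.add(frozenset(path[path.index(nxt):]))
            else:
                walk(nxt, path + [nxt])
    for start in graph:
        walk(start, [start])
    return loops

for loop in feedback_loops(system_map):
    print("feedback loop:", " / ".join(sorted(loop)))
```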

Use the map to create a model and see if you can re-create symptomatic behaviors, qualitatively and quantitatively. In our example, scheduling different critical resources independently required expensive buffers or resulted in cascading network-wide delays, and the model that was built replicated these outcomes.
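
A stylized version of such a model can be built in a few dozen lines. In the sketch below, every parameter is an illustrative assumption rather than the airline’s figures: each flight needs an aircraft, a pilot team, and a cabin crew; each resource type rotates through the schedule on its own fixed cycle; and a flight departs only once all three resources are ready. Scheduling the three types independently, even with more pilots and crews in the pool, lets one late resource couple otherwise unrelated rotations and qualitatively reproduces the cascading delays.

```python
import random

BLOCK, TURN, GAP, N_FLIGHTS = 90, 30, 45, 60   # minutes; illustrative hub schedule

def total_departure_delay(pool_sizes, disruptions):
    """Delay accumulated when flight f is served by one unit of each resource type,
    chosen by cycling through that type's pool. A flight departs only when every
    one of its resources has arrived from its previous flight and turned around."""
    ready = {}                                   # (resource_type, unit) -> time ready
    delay = 0
    for f in range(N_FLIGHTS):
        scheduled = f * GAP
        units = [(kind, f % size) for kind, size in enumerate(pool_sizes)]
        departs = max([scheduled] + [ready.get(u, 0) for u in units])
        delay += departs - scheduled
        arrives = departs + BLOCK + disruptions[f]
        for u in units:
            ready[u] = arrives + TURN
    return delay

rng = random.Random(7)
disruptions = [rng.randint(20, 90) if rng.random() < 0.35 else 0
               for _ in range(N_FLIGHTS)]       # same exogenous disruptions for both runs

# Independent rotations: 3 aircraft, 4 pilot teams, 5 cabin crews, each cycling on
# its own rhythm, so one late resource couples otherwise unrelated rotations.
print("independent rotations:", total_departure_delay([3, 4, 5], disruptions), "delay min")
# Ensembles: 3 aircraft, 3 pilot teams, 3 cabin crews that stay together all day.
print("kept as ensembles:    ", total_departure_delay([3, 3, 3], disruptions), "delay min")
```

In the independent case each flight waits on resources from three different rotation chains, so a single disruption can touch far more of the network than in the ensemble case, where it stays within one chain.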

Use the model to formulate intervention strategies to modify undesirable behaviors or create new, more desirable ones. In simple, linear systems, interventions can be as straightforward as specifying a desirable profitability level and adjusting inputs (and/or intermediate operational KPIs) to attain the goal using financial tautologies. Things are not so simple in complex, nonlinear systems, as with our airline example. Often, direct action will have unintended consequences, so counterintuitive solutions—like increasing “planned use” buffers to add flexibility and increase stability—may be necessary. Indirect interventions—such as changing the goals of the agents in the system, aligning beliefs, shaping incentives, or streamlining decision processes—may be more effective than directly manipulating each component. In our example, the key insight was this: adding some flexibility to the system by “suboptimizing” the plan actually decreased the overall “as operated” costs.
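
The economics of that insight can be sketched with a simple sweep over planned slack. The unit costs and the disruption distribution below are assumptions chosen purely for illustration: planned cost rises with every buffer minute, while the expected as-operated cost, which also counts the propagated delays, is typically lowest at some nonzero level of slack.

```python
import random

def day_delay(slack, disruptions, block=90, turnaround=35):
    """Cumulative departure delay for one aircraft's day, given `slack` extra
    minutes planned into every turnaround."""
    ready, delay = 0, 0
    for i, d in enumerate(disruptions):
        scheduled = i * (block + turnaround + slack)
        departs = max(scheduled, ready)
        delay += departs - scheduled
        ready = departs + block + d + turnaround
    return delay

rng = random.Random(3)
days = [[rng.randint(0, 50) if rng.random() < 0.4 else 0 for _ in range(12)]
        for _ in range(200)]                     # 200 simulated days of disruptions

SLACK_COST, DELAY_COST = 40, 90                  # $ per minute: illustrative assumptions
for slack in range(0, 50, 5):
    planned = SLACK_COST * slack * 12            # the visible cost of "wasted" buffer time
    expected_delay = sum(day_delay(slack, d) for d in days) / len(days)
    operated = planned + DELAY_COST * expected_delay
    print(f"planned slack {slack:2d} min: planned cost ${planned:7,.0f}, "
          f"as-operated cost ${operated:9,.0f}")
```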

Avoid the trap of incremental solutions, which are often either insufficient or hard to design in a complex system. A clean-sheet redesign of the system will often be necessary. This can be achieved by rebuilding the system from the bottom up, cognizant of the behaviors to be acquired or avoided. In our example, maintaining or increasing buffers was financially unacceptable, so a more fundamental redesign was required. The pivotal insight from modeling was that fluctuations could be reduced by keeping planes, pilots, and flight attendants together and making many simultaneous changes to operating rules. The network and the operating rules were redesigned bottom-up around this principle and modeled, resulting in fewer cascading delays. (See Exhibit 2.) 

Test solutions experimentally before deploying them system-wide. While system mapping and modeling will provide some guidance on suitable interventions, the resultant model may not capture the full complexity of the system, especially human behaviors fine-tuned by years of operating in one paradigm. Experimentation will be necessary to test solutions. This is critical because dynamic systems are not susceptible to deductive analysis. Any model is an approximation, and moving directly to implementation could be risky and expensive. In our example, the proposed solution was tested on a subset of the network and, after promising results, rolled out to the whole network.  
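
Evaluating such a trial need not be elaborate. The sketch below compares hypothetical per-flight delay figures from a trial subnetwork against a control group and uses a simple permutation test to ask how likely the observed improvement would be by chance; all the numbers are invented for illustration.

```python
import random
from statistics import mean

# Hypothetical average delay minutes per flight, by day: control stations versus
# the trial subnetwork where the redesign was tested (numbers invented for illustration).
control = [14.2, 15.1, 13.8, 16.0, 14.9, 15.6, 13.5, 15.8, 14.4, 15.2]
trial   = [11.9, 12.4, 13.0, 11.5, 12.8, 12.1, 13.3, 11.8, 12.6, 12.2]

observed = mean(control) - mean(trial)

# Permutation test: how often would an improvement this large appear if the labels
# "trial" and "control" were assigned at random?
rng, pooled, n = random.Random(0), control + trial, len(control)
at_least_as_large = 0
for _ in range(10_000):
    rng.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        at_least_as_large += 1

print(f"observed improvement: {observed:.1f} delay min per flight")
print(f"permutation p-value:  {at_least_as_large / 10_000:.4f}")
```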

Measure and manage for dynamic factors. Once installed, the new system should not repeat the approach of the past by measuring and managing only period averages and static efficiency. It should also monitor dynamic variables like resilience, complexity, and fluctuation, in order to continuously improve. It is easy in hindsight to dismiss as myopic the failure to measure or track several “key variables,” but entities do what is necessary to compete in a given context, and new paradigms and ideas are required when that context changes. Necessity is the mother of invention, but it can take a long time to snap an organization out of the wrong mental paradigm. The changes required are as much mental as physical.
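
In practice this can start with a handful of simple dynamic indicators tracked alongside the usual averages. The sketch below, run on hypothetical post-redesign data, computes a rolling coefficient of variation as a fluctuation measure and counts how many periods it takes to return below a “normal” level after each excursion as a crude resilience proxy; both the metrics and the threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def rolling_cv(series, window=7):
    """Rolling coefficient of variation: a simple fluctuation measure to track
    alongside the usual period averages."""
    return [stdev(series[i - window:i]) / mean(series[i - window:i])
            for i in range(window, len(series) + 1)]

def recovery_times(series, normal=15):
    """A crude resilience proxy: how many periods the system needs to fall back
    below a 'normal' level after each excursion above it."""
    times, run = [], 0
    for value in series:
        if value > normal:
            run += 1
        elif run:
            times.append(run)
            run = 0
    return times

# Hypothetical daily average delay minutes after the redesign went live.
daily_delay = [12, 14, 11, 28, 22, 16, 13, 12, 35, 24, 18, 14, 12, 11, 13]
print("fluctuation (rolling CV):", [round(x, 2) for x in rolling_cv(daily_delay)])
print("recovery time after each excursion (days):", recovery_times(daily_delay))
```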

Don’t settle for generic solutions. Sometimes installing off-the-shelf operating systems, like agile, lean, Six Sigma, or Total Quality Management, can address system dysfunctions, but no system architecture is a panacea: there is no general solution for all dynamic systems. Managers should be suspicious of general solutions to specific challenges. Depending on the context, reducing variance can reduce learning, increasing efficiency can increase instability, or fast iteration can cause complexity and failure. There is no shortcut to looking at the specific details of each situation.



Sometimes the right approach to redesigning an enterprise is a simple, static one. But often it isn’t, and in those cases a systems approach is needed to reach a solution that addresses dynamic factors like resilience. Such situations, we predict, will arise more and more often as enterprises embrace digital technology and build fast connections with other enterprises. Managers would be well served by mastering the art of applied systems thinking.


The BCG Henderson Institute is Boston Consulting Group’s strategy think tank, dedicated to exploring and developing valuable new insights from business, technology, and science by embracing the powerful technology of ideas. The Institute engages leaders in provocative discussion and experimentation to expand the boundaries of business theory and practice and to translate innovative ideas from within and beyond business.
