Cybersecurity: A Human Problem Masquerading as a Technical Problem

By Walter Bohmayr, Daniel Dobrygowski, David Mkrtchian, and Stefan A. Deutscher

Last month, Uber disclosed that a massive data breach had compromised personal information on 57 million customers and drivers. The breach was clearly a problem for a company with significant trust issues to begin with, but Uber’s initial response was even more troubling: the company reportedly paid the hackers $100,000 and disguised the payment as a reward under the ride service’s “bug bounty” program in order to conceal the incident.

In Uber’s case, the breach seems to have originated with “two individuals outside the company [who] had inappropriately accessed user data stored on a third-party cloud-based service,” the company said. In other words, it was fundamentally a human problem: access to such data could have been limited and monitored using readily available cybersecurity technology known as cloud access security brokers.
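
To make the idea concrete, the kind of rule such a broker enforces can be sketched in a few lines of Python. Everything here (the identities, dataset names, and policy) is hypothetical and purely illustrative, not a description of Uber’s environment or of any specific product:

```python
# Toy sketch of a CASB-style access rule: sensitive cloud datasets may be
# read only by known corporate identities, and every attempt is logged.
# All names below are hypothetical.
ALLOWED_IDENTITIES = {"alice@corp.example", "bob@corp.example"}
SENSITIVE_DATASETS = {"rider-pii", "driver-pii"}

def check_access(identity: str, dataset: str) -> bool:
    # Non-sensitive data is open; sensitive data requires an allowed identity.
    permitted = dataset not in SENSITIVE_DATASETS or identity in ALLOWED_IDENTITIES
    # Audit trail: every access attempt is recorded, allowed or not.
    print(f"audit: {identity} -> {dataset}: {'ALLOW' if permitted else 'DENY'}")
    return permitted

# An outside identity probing a sensitive dataset is denied and flagged;
# a corporate identity is allowed but still logged.
check_access("unknown@external.example", "driver-pii")  # DENY
check_access("alice@corp.example", "driver-pii")        # ALLOW
```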

Unfortunately, dealing with this type of human problem seems to be harder than one might expect.

Consider October’s congressional testimony on the Equifax data breach, which revealed a number of previously undisclosed facts about that massive security failure. Although the technical problem was an uncorrected vulnerability in an Apache Struts web application, the company’s failure to perform the simple fix of patching the vulnerability was attributable to human error. “The individual who’s responsible for communicating in the organization to apply the patch, did not,” according to former Equifax CEO Richard Smith. Worse yet, the process that Equifax had developed to act as a compensating control—a double-check procedure designed for use in highly critical contexts to avoid reliance on single points of failure—proved vulnerable to failure itself.

Security requires robust technology. Often, though, multiple technological solutions already exist for the cybersecurity issues that make headlines when a major breach occurs; indeed, some large enterprises deploy more than 100 security solutions. But security is principally a human problem, and for that reason it requires human fixes and tradeoffs: nudges to change the behavior of the people who interact with increasingly vulnerable and ubiquitous technology.

The perennial quandary of the security world is how to handle patch deployment. Typically, IT departments deploy patches after a time-consuming approval process designed to avoid end-user discomfort. Because that arrangement entails an opt-in step and an associated time lag, however, it carries a magnified security risk. In many cases involving commercial software, at least, patches could be deployed automatically—a possibility that Rep. Greg Walden (R-Oregon) noted during the Equifax hearings. Adopting this automated approach would involve accepting some potential end-user discomfort, but it would reduce the risk of a security lapse for the entire company. Of course, in some highly regulated contexts (industrial control systems, nuclear facilities, and avionics, for example), automatic implementation of patches may not be an option and may even be prohibited under statutory or administrative regulations.
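
To illustrate the tradeoff, here is a minimal sketch of what an “automatic by default, with explicit exceptions” patching policy might look like, assuming a Debian-based fleet where apt-get is available; the host names and the exclusion list are hypothetical:

```python
# Minimal sketch of an opt-out (rather than opt-in) patching policy.
# Hosts on an explicit exclusion list (e.g., regulated industrial control
# systems) are skipped; everything else receives updates automatically.
import socket
import subprocess

# Hypothetical exclusion list; in practice this would come from an asset inventory.
EXCLUDED_HOSTS = {"ics-controller-01", "avionics-test-rig"}

def apply_security_updates() -> None:
    hostname = socket.gethostname()
    if hostname in EXCLUDED_HOSTS:
        print(f"{hostname}: exempt from automatic patching; manual review required")
        return
    # Refresh package metadata, then install pending upgrades non-interactively.
    subprocess.run(["apt-get", "update"], check=True)
    subprocess.run(["apt-get", "-y", "upgrade"], check=True)

if __name__ == "__main__":
    apply_security_updates()
```

The design choice is the inversion: patching happens unless a system is explicitly exempted, rather than waiting on a human to approve it.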

Given the common human propensity to do nothing when doing nothing is an option, nudges must occasionally be forceful if they are to change behavior. The higher the stakes, the more crucial the need to eliminate lax behavior.

This is true of everything from utilities, where security breaches can lead to loss of life, to political and cultural institutions, where breaches can erode the trust on which society rests. Regardless of context, decision-makers have a choice: they can embrace and drive behavioral change in security, or they can forgo the technological innovation that puts sensitive data to work for businesses and individuals.

Behavioral change starts at the top and requires strong leadership. A recent study by Cybersecurity Ventures estimates that the global cost of cybercrime could exceed $6 trillion annually by 2021. To grasp the magnitude of that figure, note that the IMF has estimated the total costs associated with the 2007–2008 global financial crisis at about $12 trillion. In other words, every two years cybercrime will cost the global economy an amount equal to what was lost in a financial crisis that many compare with the Great Depression. Yet cybersecurity commands nowhere near the level of awareness and urgency that the financial crisis did.

Unlike physical security, where the state provides the first line of defense, security in digital spaces typically falls first to the private sector, and in particular to businesses with increasingly valuable digital assets. According to publicly available LinkedIn member data, less than 3% of senior executives at large US businesses have cybersecurity backgrounds, whereas 24% have backgrounds in finance.

Despite their knowledge of finance, many business executives were surprised by the financial crisis, and their companies had to struggle to survive. Imagine how ill prepared those executives may be for an equivalent cybersecurity meltdown.

During the Equifax hearing, Representative Walden memorably asked, “How does this happen when so much is at stake? I don’t think we can pass a law that, excuse me for saying this, fixes stupid. I can’t fix stupid.”

Congressman Walden is right: it is difficult to “fix stupid.” But addressing behavior and the human element of cybersecurity is a good place to start.