It’s no secret that big data offers enormous potential for businesses. Every C-suite on the planet understands the promise. Less understood—much less put into practice—are the steps that companies must take in order to realize that potential. For all their justifiable enthusiasm about big data, too many businesses risk leaving its vast potential on the table—or, worse, ceding it to competitors.
Big data has brought game-changing shifts to the way data is acquired, analyzed, stored, and used. Solutions can be more flexible, more scalable, and more cost-effective than ever before. Instead of building one-off systems designed to address specific problems for specific business units, companies can create a common platform leveraged in different ways by different parts of the business. And all kinds of data—structured and unstructured, internal and external—can be incorporated.
Yet big data also requires a great deal of change. Businesses will have to rethink how they access and safeguard information, how they interact with consumers holding vital data, and how they leverage new skills and technologies. They’ll have to embrace new partnerships, new organization structures, and even new mind-sets. For many companies, the challenge of big data will seem as outsized as the payoff. But it doesn’t have to be.
In engagements with clients of The Boston Consulting Group, we’ve found it helpful to break down big data into three core components: data usage, the data engine, and the data ecosystem. For each of these areas, two key capabilities have proved essential. (See Exhibit 1.) By developing the resulting six capabilities, today’s businesses can put in place a solid framework for enabling—and succeeding with—big data.
In a world where information moves fast, businesses that are quick to see, and pursue, the new ways to work with data are the ones that will get ahead and stay ahead.
Big data will drive value in a variety of ways. (See “Opportunity Unlocked: Big Data’s Five Routes to Value,” BCG article, September 2013.) But the most innovative—and potentially most lucrative—opportunities will likely not be readily apparent. Businesses need to create an environment in which novel applications—ideas that truly differentiate a company from its competitors—can be quickly identified and developed. A culture where experimentation and outside-the-box solutions are encouraged is crucial. So, too, is a wide range of talents, from data science skills to business expertise. While it may seem a formidable challenge, creating an effective data-driven ideation process is not quite as difficult as companies may think.
The exploration of new data applications should be encouraged at all levels of the organization, with employees given time and resources to pursue their ideas. Experimentation should not be boundless: it needs to start with, and center on, a business problem. At one large automobile manufacturer, for example, a special group was established to develop innovative uses for the data now routinely collected and transmitted by in-car sensors. Such an initiative sends a clear message to employees that new, creative solutions aren’t just welcome, they are a company priority.
The wide range of expertise needed to identify and develop applications—in data science and analytics, new technologies, and business—will rarely be possessed by a single individual. Indeed, efforts will often require the skills of many individuals, located across the company. This makes it vital to create strong links between professionals who likely have very different backgrounds and very little experience working with one another. Frequent dialogue and ongoing collaboration will help these interdisciplinary teams zero in on and prioritize the most relevant business problems and opportunities. Formal processes can spur this kind of collaboration, as can a more informal “push from the top.”
Speed and agility are crucial in creating big-data applications. Short cycles, iterative development, and frequent pilots should be the rule. Risk taking should be encouraged; mistakes, accepted. Big data is still largely uncharted ground and even disappointment—or at least, carefully analyzed disappointment—can be a good teacher.
Access to information—much of it personal in nature—is essential to extracting value from big-data applications. Yet individuals are increasingly concerned about how, exactly, their information will be used. As part of its 2013 Global Consumer Sentiment Survey, BCG polled nearly 10,000 consumers, from both developed and developing countries, on trust. Just 7 percent of respondents said they were comfortable with their data being used beyond the purpose for which it was gathered.
By using data responsibly, and being clear and transparent about those uses, businesses can go a long way toward reducing consumer worries and skepticism. And they can gain an important competitive edge. The companies that do the best job instilling trust will have the most success acquiring and using sensitive data. They’ll get the access that less open and less forthcoming companies won’t. BCG calls this the “trust advantage” and estimates that businesses that manage trust well will be rewarded with five to ten times more access in most countries. (See The Trust Advantage: How to Win with Big Data, BCG Focus, November 2013.)
Don’t get bogged down in boilerplate. The language explaining how personal data is used should be clear and concise, easy to follow, and even lively in tone. It should be visible, too—prominently placed, not buried at the bottom of a Web page. It is also important to articulate what will not be done with the data (such as sharing it with partners or social media sites).
Avoid a one-size-fits-all approach to permissions. Instead of a broad opt-in choice that allows all uses or a broad opt-out choice that prohibits everything, let individuals choose the specific uses they will allow or prohibit. This gives them greater control over how their data is used—which can tip the scales when they are deciding whether or not to share information.
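As a rough sketch of this idea, a permission model can track consent per use rather than as a single switch. The use categories and names below are illustrative assumptions, not taken from the article:

```python
# Illustrative sketch of granular, per-use permissions rather than a single
# opt-in/opt-out switch. Use categories here are hypothetical examples.
from dataclasses import dataclass, field

USE_CATEGORIES = {"personalization", "product_improvement",
                  "advertising", "third_party_sharing"}

@dataclass
class ConsentProfile:
    user_id: str
    # Every use starts out denied; the individual opts in to specific uses.
    allowed_uses: set = field(default_factory=set)

    def grant(self, use: str) -> None:
        if use not in USE_CATEGORIES:
            raise ValueError(f"unknown use category: {use}")
        self.allowed_uses.add(use)

    def revoke(self, use: str) -> None:
        self.allowed_uses.discard(use)

    def permits(self, use: str) -> bool:
        return use in self.allowed_uses

# Usage: a customer allows personalization but not sharing with third parties.
profile = ConsentProfile("user-123")
profile.grant("personalization")
assert profile.permits("personalization")
assert not profile.permits("third_party_sharing")
```

The design choice is that denial is the default: the business records what each person has explicitly allowed, rather than assuming broad consent and carving out exceptions.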
The success of sites like Facebook and Google demonstrates that users will often share personal data if they receive something valuable in return. By articulating what there is to gain—enhanced features, improved products, useful advertising, and so on—businesses make it clear that this is a two-way street. By sharing their information, individuals will reap compelling benefits.
Businesses and data go way back—but that history can often work against companies. Their experience tells them that the IT infrastructure must be massive, rigid, and expensive; made up of complex systems customized for a particular task; and fueled by painstakingly cleansed data. Yet big data is, in fact, a very different experience, with different technologies, requirements, and possibilities. If businesses are to fully exploit the opportunities, quickly and cost-effectively, they need to understand how IT has changed. And they need to develop their own data platforms accordingly.
The traditional data infrastructure, which relies on centralized warehouses of highly structured data, is no longer the only option. (See Exhibit 2.) Many of the new tools (such as those based on Apache Hadoop, an open-source framework that lets applications leverage distributed data on commodity hardware) are more flexible and far less expensive. Analytical IT can now often be quickly implemented, too. In client engagements, BCG has helped deploy technologies ranging from Hadoop to Amazon Web Services to SAP HANA in less than eight weeks.
These new data tools hold extraordinary potential, but they also raise questions: What happens to existing investments? How are the insights gleaned through cutting-edge data analysis put into operation? And perhaps the most important question of all: How can the technical foundation that companies lay today support the data applications of tomorrow? Flexibility will be crucial, not just for speed and efficiency but also for competitive advantage. To gain and keep an edge, businesses will need to rapidly deploy new data uses without rapidly running up costs.
In our case work, we’ve found the following guidance helpful for building the optimal platform for big data.
Implementing an enterprise-wide platform helps avoid the “data anarchy” problem, where different business units rely on duplicated or conflicting data sources. When everyone leverages a single “reference source,” data consistency is maintained. This platform should be built from easily scalable technologies, which will make it easier to implement future applications. Here, distributed data tools like Hadoop have an edge over more traditional SQL-based tools, because they can work with information in its natural, unstructured form, wherever it may reside.
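The ability of such tools to "work with information in its natural form" is often described as schema-on-read: heterogeneous records are stored as-is, and structure is imposed only when an application reads them, unlike a warehouse that enforces a fixed schema on load. A minimal sketch, with illustrative field names, not a depiction of any particular product:

```python
# Sketch of "schema-on-read": raw, heterogeneous records are stored untouched
# and interpreted only at read time. Field names are illustrative assumptions.
import json

raw_records = [
    '{"sensor": "s1", "speed_kmh": 82}',
    '{"sensor": "s2", "temp_c": 21.5}',   # different fields -- still stored
    'not valid json at all',              # malformed input is kept, skipped on read
]

def read_speeds(records):
    """Apply a schema at read time, ignoring records that don't fit it."""
    for line in records:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # no cleansing step rejected this record up front
        if "speed_kmh" in rec:
            yield rec["sensor"], rec["speed_kmh"]

print(list(read_speeds(raw_records)))  # [('s1', 82)]
```

A different application could read temperature fields from the same store without any schema migration, which is the flexibility argument for this style of platform.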
What sometimes gets lost in the discussion of big data is the fact that the technical foundation has two parts: the technology that supports the analytics and the technology that puts the results to use. That second part is crucial: although big data can return all manner of valuable insights, those insights won’t mean much if they’re not leveraged in a timely fashion—increasingly, in real time or near real time. For example, an online retailer might come up with the optimal individualized offer for a customer visiting its website, but to make the most of that insight, it needs to convey the offer while that customer is still on the site. For many companies, “operationalizing” big data will mean implementing new and unfamiliar technologies. But the companies that can create the necessary processes will be the ones that put their analytics to the best and most profitable use.
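The retailer example can be sketched as two layers: one that computes the insight and one that acts on it while it still matters. Everything below, the function names, the discount rule, the time-to-live, is a hypothetical illustration of the analytics/operations split, not an actual retail system:

```python
# Sketch of "operationalizing" an insight in near real time: an offer computed
# by the analytics layer is only useful while the customer's visit is live.
# All names, thresholds, and rules here are illustrative assumptions.
import time

OFFER_TTL_SECONDS = 300  # assume an offer stays relevant for one visit

def personalized_offer(customer_profile: dict) -> dict:
    """Stand-in for the analytics layer: derive an offer from profile features."""
    discount = 15 if customer_profile.get("cart_value", 0) > 100 else 5
    return {"discount_pct": discount, "computed_at": time.time()}

def serve_offer(offer: dict, session_active: bool) -> str:
    """Stand-in for the operational layer: deliver only while it still matters."""
    fresh = time.time() - offer["computed_at"] < OFFER_TTL_SECONDS
    if session_active and fresh:
        return f"Show banner: {offer['discount_pct']}% off"
    return "Discard offer"  # too late -- the insight expired with the session

offer = personalized_offer({"cart_value": 120})
print(serve_offer(offer, session_active=True))  # Show banner: 15% off
```

The point of the sketch is the second function: without an operational path that runs inside the customer's session, the first function's output is wasted.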
The most successful big-data platforms will leverage not only new technologies but also new organization structures. Centralizing key resources (data scientists and analysts, for example) in a stand-alone unit will help businesses attract and retain the talent they need, develop and manage applications efficiently, and spur innovation but not duplication. (See “Two-Speed IT: A Linchpin for Success in a Digitized World,” BCG article, August 2012.) Yet at the same time, companies need to avoid ivory towers. New data-science and data-mining capabilities must be linked back to, and aligned with, existing businesses. That keeps the focus on valuable, real-world use cases—not flights of fancy.
Of course, business units need to feel comfortable with these new dynamics. One approach we’ve found effective is to focus initially on specific “pain points.” By targeting a key problem and teaming up to resolve it, data specialists and business experts not only learn to work together effectively but develop the links and trust necessary to create the most relevant applications. It makes big data “real” for the business unit, and it gets their attention and their buy-in.
As companies develop their new data-centric organization, they should prioritize three essential activities.
Businesses are likely to find that the skills required for big-data projects—from designing the analytics algorithms to running the technical platform—are in short supply. A center of excellence enables expertise to be built up quickly, as a core of talent is exposed to a variety of problems and solutions. Just as important, it promotes the cross-fertilization of ideas. Best practices spread within the organization. Successful approaches are replicated by other parts of the company. The risk of duplicative efforts—and the data anarchy that too often comes with them—is greatly reduced.
Big data needs a champion, a dynamic senior executive with a reputation for getting things done. Whether this is a newly appointed position (perhaps a chief data officer) or is simply the CIO taking the lead, the role is the same: to demonstrate a clear, visible commitment to making big data work and ensuring that all the capabilities and accountabilities are in place. This individual will also work to ensure proper data governance and management. Champions within individual business units are important as well, because they strengthen the link back to the business. This is another reason we recommend starting with top-of-mind pain points. Doing so helps to gain the confidence and support of a unit’s leadership.
New skill sets will likely be required, and the professionals possessing them may be used to working in nontraditional environments. It’s not just an issue of wearing suits or jeans. They may have completely different expectations about how the job gets done. Technology experts coming from small, entrepreneurial start-ups, for instance, may be used to rapid development cycles and working with great autonomy. Transplanting them into a more bureaucratic, process-driven environment, where things move more slowly and there are layers of oversight, can quickly decimate their morale and effectiveness.
Avoiding this cultural mismatch isn’t easy: you don’t want to ignore it, but at the same time, you don’t want to give your new employees privileges your veterans don’t get (something that can create hard feelings and hinder collaboration). A good starting point is to have an ongoing dialogue with the experts you bring in, making sure they are given challenging problems and working with them to provide the tools they need to solve them. This helps not only to meet expectations but also to manage them.
Big data is transforming not just how companies do business but also the types of people and organizations with which they do it. (See “The Age of Digital Ecosystems: Thriving in a World of Big Data,” BCG article, July 2013.) New data applications will often blur industry boundaries, creating a need for partnerships. Some companies, meanwhile, will possess information of great value to others, spurring new commerce and new revenue streams. Technology providers will play an increasingly visible and influential role, too, given that they will create and control the technical standards. All of these trends make alliances more a given than an option.
Yet while going it alone may mean leaving opportunities—and value—on the table, partnering with others raises more questions that need to be answered: Should a company be a data giver or a data taker? How can it add value to nascent data applications in other industries? What external assets and expertise does it need in order to develop its own applications? Identifying where a business fits within a data ecosystem is rarely straightforward. But we’ve found that by taking three steps, companies will position themselves to home in on and successfully leverage the right data alliances.
Take a careful look at existing products and services: What data do they generate? What additional data could enhance them? How can they drive new or improved offerings in other sectors? Insurers, for example, have found that information collected by automobile manufacturers, through in-car devices and sensors, lets them link premiums to actual driving habits. The result: a new model for calculating rates, one that many drivers (at least the good ones) will prefer, given that safe driving will lower insurance costs. Businesses need to think broadly and identify where in the “stack” they might add value.
In a successful data alliance, partners provide complementary resources and expertise: the data, capabilities, and assets that, combined, make it possible to exploit new business opportunities. Beyond the buyers and sellers of data are analytics services providers, which can comb a company’s data for insight, and “data enablers,” which are companies that provide guidance and solutions to help a business get its big-data initiatives off the ground. Companies need to examine their own goals and requirements and identify the players that can help to meet them.
Whether a company is working alone or with partners, an iterative, exploratory approach to big data beats a detailed three-year strategy. Take small, quick steps to test demand, then learn from results—and mistakes—to adapt offerings. When something works, rapidly accelerate its deployment.
The partnerships that big data sparks must be managed and maintained. Contract terms should be constructed so that everyone can prosper—and has an incentive to exchange complementary information. Technical platforms should allow partners’ data to be quickly incorporated and leveraged. The goal isn’t just success but ongoing success, continually improving and expanding upon joint efforts. The following steps can help ensure that relationships stay the course.
Most organizations are used to creating things on their own and enjoying full control of their initiatives. Data ecosystems change that, with multiple companies working together to bring new products and services to customers. This requires much stronger partnership-management skills, and it means that incentives must be aligned among partners.
Impose restrictive contract terms on partners and some valuable allies may walk. But give up too much—to gain a foothold, perhaps, in a new market—and risk needlessly shrinking the potential profit. Understanding the economic opportunities, and where each partner adds value, can keep contracts fair and all sides satisfied. Performance KPIs can then be used to track which partners are, and are not, carrying their weight.
Ecosystem partners will need to share data quickly and easily. A company, then, must often enable third-party access to its data platforms. To reduce the technical challenges of providing these links—and the time that is needed to resolve those challenges—interfaces should be easy to change and test. To allay concerns about security and confidentiality, access should be tailored to the need, providing neither more nor less than what is necessary.
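Tailoring access to the need is essentially a least-privilege filter: each partner sees exactly the fields its role requires, nothing more. A minimal sketch, in which the partner names, scopes, and record fields are all hypothetical:

```python
# Sketch of need-tailored partner access: each partner's scope grants exactly
# the data fields its role requires. Partner names, scopes, and fields are
# hypothetical assumptions, not drawn from any real platform.
PARTNER_SCOPES = {
    "insurer-a":   {"driving_events", "vehicle_model"},
    "analytics-b": {"driving_events"},
}

CUSTOMER_RECORD = {
    "driving_events": ["hard_brake", "speeding"],
    "vehicle_model":  "sedan",
    "home_address":   "PRIVATE",  # in no partner's scope, so never exposed
}

def partner_view(partner_id: str, record: dict) -> dict:
    """Return only the fields the partner's scope allows; unknown partners get nothing."""
    scope = PARTNER_SCOPES.get(partner_id, set())
    return {k: v for k, v in record.items() if k in scope}

print(partner_view("analytics-b", CUSTOMER_RECORD))
# {'driving_events': ['hard_brake', 'speeding']}
```

Because the scope table is data rather than code, an interface like this is also easy to change and test as partnerships evolve, which is the flexibility the text calls for.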
To master big data, businesses will have to put aside much of what they know about working with data. They’ll have to adopt new mind-sets, new technologies, and new capabilities. And they’ll have to do so quickly, because big data doesn’t just present opportunities. It also presents risks. Traditional companies may quickly find themselves vulnerable to new players and market entrants that excel at these capabilities. Many of the changes companies must invest in will be unfamiliar—they may, in fact, be radical departures from how companies are accustomed to operating. It’s a tall order, to be sure. But by building the six capabilities outlined above, companies can realize the full potential of big data—faster than they might think, and faster than the competition.
The authors would like to thank Astrid Blumstengel, Julia Booth, David Ritter, and John Rose for their contributions. They also thank Katherine Andrews, Mickey Butts, Gary Callahan, Alan Cohen, Catherine Cuddihee, Kim Friedman, Abby Garland, Jessica Melanson, and Sara Strassenreiter for their writing, editing, and production support.