

Responsible AI Is About More Than Avoiding Risk

By Steven Mills, Sean Singer, Abhishek Gupta, Franz Gravenhorst, François Candelon, and Tom Porter

As artificial intelligence (AI) becomes more powerful, companies must ensure that they use the technology appropriately. Enter responsible AI (RAI)—a way to avoid AI failures that put individuals and communities at risk. But RAI is hardly just a preventive strategy. A new study by BCG and MIT Sloan Management Review finds that it can also advance key business goals. Some key points from the study:
  • Companies that lead on RAI look beyond risk mitigation, treating the approach as an enabler that can bring tangible benefits to the business and support long-term goals such as corporate social responsibility.
  • Half of RAI leaders—companies that possess fully mature RAI programs—report that they have developed better products and services as a result of their RAI efforts (only 19% of nonleaders say the same). And 43% of RAI leaders cite accelerated innovation (versus just 17% of nonleaders).
  • Timing counts. Companies that scale their RAI program before scaling their AI capabilities encounter nearly 30% fewer AI failures. And the failures that they do experience tend to reveal themselves sooner.
By taking a broader, more strategic approach to RAI, companies in the vanguard aren’t merely reducing risk. They’re also generating value—for the organization and the world around it.

Artificial intelligence has made great strides in recent years—but sometimes it makes headlines for the wrong reasons. Biased hiring, unfair lending practices, incorrect medical diagnoses: AI failures happen, and in some instances they’ve had a significant, even life-altering impact on individuals and communities. Little wonder, then, that businesses see responsible AI (RAI) as a way to mitigate the risks of AI. But is this the best or most beneficial way to view RAI? Our global survey of more than 1,000 executives reveals that it’s not.

As AI becomes more powerful and more prevalent, companies must ensure that they use the technology appropriately. But those that lead in responsible AI, the survey finds, look beyond risk mitigation. They treat RAI as an enabler that can bring tangible benefits to the business, support broader long-term goals such as corporate social responsibility (CSR), and help derive even more value from AI investments.

Today, leaders in responsible AI are few and far between. Indeed, although most companies recognize the importance of RAI, the survey finds a large gap between aspirations and action. An overwhelming majority of respondents—84%—say that RAI should be a top management priority. Yet just 16% of companies have fully mature RAI programs.

These responsible AI leaders see clear business benefits from RAI, in addition to the societal value of minimizing risk for individuals and communities. Their experiences provide key insights and a high-level roadmap for the many organizations that are still just dipping their toes into RAI—or standing to the side of the pool.

Responsible AI Isn’t Just About Reducing Risk

Risk mitigation has long been RAI’s raison d’être. The thinking: a technology like AI—with so much potential influence on operations, customer interactions, and product functionality—requires special precautions. And there’s plenty to mitigate. Nearly a quarter of participants in the survey report AI failures, ranging from technical glitches to bias in decision making and actions that raise privacy or safety concerns. Given that so many organizations have yet to adopt RAI (and may not even know whether they’ve had an AI failure), the actual number of such incidents is likely even higher. But responsible AI leaders have demonstrated that they can reduce risk and gain other important benefits by approaching RAI in a more strategic way.

Leaders share certain characteristics. They prioritize RAI, include a wide array of stakeholders (within and outside the organization) in its implementation, and make a firm material and philosophical commitment to RAI. Moreover, responsible AI is itself a source of business value for these organizations.

What kind of business value? Half of responsible AI leaders report having developed better products and services as a result of their RAI efforts (only 19% of nonleaders say the same). Nearly as many—48%—say that RAI has enhanced brand differentiation (compared to 14% of nonleaders). And 43% of leaders cite accelerated innovation (versus just 17% of nonleaders). Overall, leaders are twice as likely as nonleaders to realize business benefits from their responsible AI efforts.

RAI Comes First

But the survey also makes clear that timing—specifically, the sequence in which companies increase their RAI maturity and their AI maturity—counts, too. For many companies, AI maturity comes first: 42% of respondents say that AI is a top strategic priority, but among this group, only 19% say that their organization has a fully implemented responsible AI program. This, it turns out, is the wrong order.

As AI maturity grows, a company will deploy more AI applications, and those applications will be more complex, increasing the risk that something may go wrong. Companies that prioritize scaling their RAI program over scaling their AI capabilities experience nearly 30% fewer AI failures. And the failures they do have tend to reveal themselves sooner and have significantly less impact on the business and the communities it serves. Focusing first on responsible AI enables companies to create and leverage a powerful synergy between AI and RAI.

Connecting RAI and CSR

Most companies in the vanguard of RAI view it as part of their broader CSR efforts: 73% connect RAI and CSR, compared to 35% of nonleaders. For responsible AI leaders, the alignment is natural. Both efforts have many goals in common—including transparency, fairness, and bias prevention—so they can support and empower each other. Indeed, the survey finds that as their responsible AI maturity increases, organizations become more interested in aligning their AI with their values and take a broad, societal view of the impacts of AI on stakeholders. Once again, responsible AI is not just about reducing risk. It’s also about creating value, for company and community alike.

Now’s the Time to Get Started

Responsible AI leaders don’t limit themselves to thinking about risks and regulations. They also consider how RAI can advance their business goals and principles and how it can affect a broad array of stakeholders. Crucially, they’ve already started charting a path that others can follow.

Taking that path is important. Many companies are investing heavily in AI without committing corresponding resources or efforts to RAI. That leaves risk on the horizon and value on the table. By embracing a strategic approach to responsible AI, companies can more easily and less perilously scale their AI efforts—realizing important benefits and doing bigger, better things for their business and the world around it.


