" "

Responsible AI 

When done right, responsible AI doesn’t just reduce risk. It also improves the performance of artificial intelligence systems, fostering trust and adoption while generating value. We help companies develop and operationalize a custom-fit framework for responsible AI.

Emerging regulations and generative AI are casting a spotlight on AI technology, but they don’t need to cast a shadow. Our perspective is that responsible AI is more than just risk mitigation; it is also an important value creator. The same mechanisms that reduce AI errors can accelerate innovation, promote differentiation, and elevate customer trust.


What Is Responsible AI? 

Responsible AI is the process of developing and operating artificial intelligence systems that align with organizational purpose and ethical values while achieving transformative business impact. By implementing RAI strategically, companies can resolve complex ethical questions around AI deployments and investments, accelerate innovation, and realize increased value from AI itself. Responsible AI gives leaders the ability to properly manage this powerful emerging technology.

Unlocking Value Through Responsible AI Commitments

The experiences of a hypothetical company illustrate how ensuring responsible AI practices can address challenges and create new opportunities.


How We Help Companies Implement Responsible AI

So far, relatively few companies have embraced this strategic approach to responsible AI. What’s the holdup? For some organizations, the leap from responsible AI ambition to execution has proved daunting. Others are waiting to see what form regulations take. But responsible AI principles can deliver benefits now while preparing companies for new rules and for emerging AI technology.

Our battle-tested framework minimizes the time to RAI maturity while maximizing the value responsible AI can create. Built on five pillars, it is tailored to each organization’s unique starting point and culture.

Responsible AI Strategy

AI Governance

Key Processes

Technology and Tools

Culture

Our Clients’ Success in Responsible AI

BCG’s responsible AI consultants have partnered with organizations around the globe in many industry sectors, creating personalized solutions that provide AI transparency and value. Here are some examples of our work.


Our Responsible AI Recognition and Awards

As one of the leading consulting firms on AI governance, we are proud to be recognized for the excellence of our work advancing responsible AI, setting the stage for broader and more transparent use of AI technology.

  • Finalist for Leading Enterprise in the Responsible AI Institute RAISE Awards, 2022. BCG was shortlisted for the RAISE 2022 Leading Enterprise Award, which recognizes organizations leading efforts to narrow the responsible AI implementation gap and create space for critical conversations about the current state of the field.
  • Top 100 Most Influential People in Data, DataIQ, 2022. Steven Mills was named one of the Top 100 Most Influential People in Data by DataIQ for his work in responsible AI and on the unintended harms caused by biased AI systems.
  • Outstanding Achievement in the Field of AI Ethics nominee, CogX Awards, 2021. Steven Mills was nominated for the Outstanding Achievement in the Field of AI Ethics category of the 2021 CogX Awards, which recognizes a company or individual taking action to preserve human values in an ever-changing technological setting.

BCG’s Tools and Solutions for Responsible AI

Our responsible AI consultants can draw on BCG’s global network of industry and technology experts. But they can also call on powerful tools for implementing RAI.

RAI Maturity Assessment

Supported by the data collected in our latest survey with MIT SMR, this proprietary tool benchmarks companies across the five pillars of responsible AI, providing insight into strengths, gaps, and areas for focus.

GAMMA FACET

AI transparency is crucial to building trust and adoption. But it’s often elusive, as AI can be a ‘black box’ that produces results without explaining its decision-making processes. FACET opens the box by helping human operators understand advanced machine learning models.
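
For illustration, the short sketch below shows the kind of model inspection such a tool enables. It is a minimal example using scikit-learn’s permutation importance on a placeholder dataset and model, not FACET’s own API.

```python
# Minimal model-inspection sketch (illustrative; uses scikit-learn's
# permutation importance, not FACET's own API).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder dataset and model standing in for a production system.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Estimate how strongly each feature drives predictions on held-out data,
# giving operators a view into an otherwise opaque model.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in top[:5]:
    print(f"{name}: {score:.3f}")
```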


Three Things to Know Now About Responsible AI

AUGUST 10, 2023—The recent voluntary commitments secured by the White House from core US developers of advanced AI systems—including Google, OpenAI, Amazon, and Meta—are an important first step toward achieving safe, secure, and trustworthy AI. Here are three observations:

  • These voluntary commitments will help to move the AI ecosystem in the right direction. They can be a foundation for putting the Blueprint for an AI Bill of Rights into operation and bringing together more actors under a shared banner to ultimately make responsible AI the norm. The commitments also can trigger greater investment in training, capacity building, and technological solutions.
  • Supportive initiatives like the Frontier Model Forum, which includes the developers who made the voluntary commitments, will enable AI ecosystem stakeholders to exchange knowledge on best practices for responsible AI, particularly for advanced systems. Given that tech companies’ recent layoffs included some trust and safety experts, a renewed commitment to publicly released audits and analyses will help external experts fill internal capacity shortages. Rigorous documentation in the form of audit reports and disclosures will help agencies such as the Federal Trade Commission protect consumers from deceptive and unfair practices.
  • Notably missing from the commitments is a focus on mitigating the potentially significant environmental impacts of AI systems. Details on when and how these commitments will be operationalized are also needed to boost public trust, especially given the many ethical issues recently raised by AI. Bipartisan legislation would increase the impact of the voluntary commitments.

Abhishek Gupta
Senior Solution Delivery Manager, Responsible AI
Montreal

APRIL 27, 2023—Generative AI continues to evolve with extraordinary speed. Innovators are racing to address the technology’s shortcomings even as new risks continue to emerge, highlighting the need for stronger AI governance. A few developments:

  • While generative AI models are becoming more complex, there have also been significant improvements in safety and alignment. OpenAI’s GPT-4, for instance, hallucinates less and is better at avoiding hate speech and controversial topics. It also follows prompts more faithfully, making it easier to steer toward the user’s desired result. This enables use cases with a lower tolerance for errors and deviations—for example, these models perform better in convergent-thinking scenarios, in which there is usually a single correct response.
  • ChatGPT plugins—a new technological feature—can address some shortcomings of previous models. The Wolfram Alpha plugin, for example, improves mathematical reasoning. Essentially, ChatGPT now has its own computer, allowing it to provide more tailored and grounded answers. Plugins also improve the tool’s utility in certain domains—the accuracy of travel-related information, for instance, will improve with a new Expedia plugin. Hallucinations can also be reduced by using task-specific plugins rather than generalized tools (see the routing sketch after this list).
  • New risks are being discovered, though. Prompt injection attacks, for example, leverage latent weaknesses for which we don’t yet have good protections. Due to capability overhang—hidden skills that may not even be known to generative AI developers—such risks won’t disappear soon. Investments in responsible AI will be required to help us confidently explore new use cases while enhancing safety.
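
To make the task-specific routing idea concrete, here is a purely hypothetical sketch: the plugin registry, matching rule, and both handler functions are placeholders rather than any real plugin API.

```python
# Hypothetical plugin-style routing: prefer a task-specific tool when one
# matches a query, and fall back to a general-purpose model otherwise.
# All names below are placeholders, not a real plugin API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Plugin:
    name: str
    matches: Callable[[str], bool]  # decides whether this plugin should handle the query
    handle: Callable[[str], str]    # returns a grounded, task-specific answer

def solve_math(query: str) -> str:
    # Placeholder for a symbolic-math backend (a Wolfram-style tool).
    return f"[math tool] exact answer for: {query}"

def general_model(query: str) -> str:
    # Placeholder for a general-purpose LLM call.
    return f"[general model] best-effort answer for: {query}"

PLUGINS = [Plugin("math", lambda q: any(ch.isdigit() for ch in q), solve_math)]

def answer(query: str) -> str:
    for plugin in PLUGINS:
        if plugin.matches(query):
            return plugin.handle(query)  # grounded path, less prone to hallucination
    return general_model(query)          # generalized fallback

print(answer("What is 37 * 41?"))
print(answer("Summarize the benefits of responsible AI."))
```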

Abhishek Gupta
Senior Solution Delivery Manager, Responsible AI
Montreal

APRIL 11, 2023—Stanford’s Human-Centered Artificial Intelligence group has issued a timely report that underscores why responsible AI (RAI) practices should be high on CEOs’ agendas. A few of the key takeaways:

  • Misuse of AI is rapidly increasing. The number of AI incidents and controversies per year has risen 26-fold since 2012, according to the AI Incident Database, which tracks ethical misuse of AI. The report attributed this growth both to greater AI use and to greater public awareness of misuse. This means organizations are at higher risk whether they buy or build AI systems. RAI practices can help identify these risks and fix them.
  • AI is gaining attention from policymakers. The Stanford analysis of legislative records in 127 countries found that 37 bills containing “artificial intelligence” passed in 2022—compared with 1 in 2016. According to an analysis of parliamentary records in 81 countries, mentions of AI in legislative proceedings have increased more than sixfold since 2016. With regulatory pressures likely to grow, companies must actively engage with regulators and other stakeholders to help shape policies that are technologically effective and feasible.
  • Private AI investment is falling. Annual global private investment in AI dropped by 27% in 2022, to $91.9 billion. That was the first decrease in a decade. The number of AI-related funding events and newly funded AI companies also declined. Limited funding means teams must demonstrate that AI investments generate returns. By helping mitigate failures early, RAI can help boost returns across the AI lifecycle.

Abhishek Gupta
Senior Solution Delivery Manager, Responsible AI
Montreal

MARCH 23, 2023—There’s lots of talk about the risks of generative AI. What’s not getting discussed enough is how organizations are adopting it. Now that anyone can use tools like ChatGPT in their workflows, unreported “shadow AI” use may be occurring that bypasses risk and safety governance mechanisms.

Instead of banning generative AI systems, which is not a long-term solution, here are three things you can do to support experimentation while promoting safe and responsible AI:

  • Install a front door. All requests for using generative AI systems should go through an API gateway that mediates access. This front door helps ensure responsible AI by tracking how the technology is used, filtering and sanitizing sensitive information, and blocking toxic inputs or outputs. (A minimal gateway sketch follows this list.)
  • Establish enforceable policies. Develop guidelines on using generative AI and capture them in policy to make them enforceable. Communicate those policies to all staff and establish a rapid and agile review process for use cases. This will enable people experimenting with generative AI to bring cases to a committee, ask questions, and get feedback. 
  • Purchase enterprise licenses for generative AI systems. Rather than using a generic system without knowing what it does with the data, license tools and services offered by companies like OpenAI, Microsoft Azure, Cohere, and Hugging Face that enable organizations to use generative AI in a structured manner. This can protect your data and give you the ability to fine-tune the systems to meet specific needs.
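
As a rough illustration of the front-door pattern described above, here is a minimal, hypothetical gateway sketch. The redaction rule, blocklist, and call_model function are placeholders, not any particular vendor’s API.

```python
# Hypothetical "front door": a single entry point that logs usage, redacts
# sensitive patterns, and blocks disallowed content before any request
# reaches a generative AI provider.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai_gateway")

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # example of sensitive data to redact
BLOCKED_TERMS = {"internal-codename", "customer-ssn"}    # placeholder blocklist

def call_model(prompt: str) -> str:
    # Placeholder for the licensed provider call behind the gateway.
    return f"[model response to: {prompt}]"

def gateway(user: str, prompt: str) -> str:
    log.info("request user=%s chars=%d", user, len(prompt))  # usage tracking
    sanitized = EMAIL_PATTERN.sub("[REDACTED]", prompt)       # sanitize sensitive information
    if any(term in sanitized.lower() for term in BLOCKED_TERMS):
        log.warning("blocked request user=%s", user)          # block disallowed inputs
        return "Request blocked by policy."
    response = call_model(sanitized)
    log.info("response user=%s chars=%d", user, len(response))
    return response

print(gateway("analyst-1", "Summarize this note from jane.doe@example.com"))
```

In practice, this same choke point is also where the policies and use-case reviews described above can be enforced.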

Abhishek Gupta
Senior Solution Delivery Manager, Responsible AI
Montreal

Our Insights on Responsible AI 

BCG’s AI Code of Conduct

At BCG, we lead with integrity—and the responsible use of artificial intelligence is fundamental to our approach. We aim to set an ethical standard for AI in our industry, and we empower our clients to make the right economic and ethical decisions.

See how we’re fulfilling this commitment

 

Meet Our AI Ethics Consulting Team 

BCG’s responsible AI consultants are thought leaders who are also team leaders, working on the ground with clients to accelerate the responsible AI journey. Here are some of our experts on the topic.
