As companies race to unleash AI, they’re also racing to build responsible AI (RAI) programs. The trouble is, many are chasing the label, not the substance.

We’ve seen watershed events since the first joint global responsible AI survey by BCG and MIT Sloan Management Review (MIT SMR) in 2022. ChatGPT ignited the age of generative AI, bringing new potential—and new types of risk. Governments around the world have begun rolling out AI regulations, from the landmark EU AI Act to frameworks now taking effect in Brazil, South Korea, and other countries. And just in case things could ever get boring, agentic AI—AI that can autonomously perform complex tasks—has emerged.

One might think that any of these shifts would spur organizations to turn up the dial on RAI, which can help ensure that their AI systems are proficient, safe, secure, and compliant and that systemic organizational risks are managed. Yet findings from our most recent RAI survey, completed in 2025, don’t show that happening.

To be sure, more companies are doing something. The new survey found that 85% of respondents are implementing RAI—a big jump from the 52% that had a program in place in 2022. Yet the proportion of companies that have fully mature frameworks has only increased slightly, from 16% to 25%. Organizations may be embracing RAI, but most are deploying it in a surface-level manner.

The 2025 survey, which included 1,221 respondents representing 70 countries and 29 industries, found that many organizations are prioritizing the rapid development and scaling of RAI basics, such as policies and training. But they’re giving far less attention to creating a deep technical foundation for RAI. Elements like testing and evaluation are often underdeveloped—or not developed at all.

It makes sense that organizations would begin with policy and basic training. They’re logical starting points, and regulations typically require them. But they shouldn’t be ending points. Companies need to operationalize their policies and practices—to make them count, make them work—through robust mechanisms and tools. Insufficiently mature RAI programs leave organizations vulnerable to errors, biases, and missteps that increase a range of risks: financial, reputational, legal, and regulatory. And surface-level RAI also leaves much of AI’s potential on the table. Research by BCG and MIT SMR has found that RAI leaders are twice as likely as non-leaders to realize business benefits from their RAI efforts.

Speed over Substance

The survey, which looked at how organizations are defining and implementing RAI, finds that most RAI programs are relatively new, emerging only after GenAI went mainstream in late 2022. Nearly a third of the initiatives were less than a year old at the time of the 2025 survey; another 40% were between one and three years old.

GenAI and RAI are closely intertwined. GenAI systems are complex and produce nondeterministic results, meaning the same input can spark varying outputs. Moreover, GenAI relies on foundation models, which can be used for a wide range of tasks, many of them not anticipated during a model’s development. This combination of complexity, unpredictability, and versatility ratchets up the risks—and the need to boost trust, transparency, and resilience. Not surprisingly, more than two-thirds of respondents (70%) said that RAI was already a strategic priority or was regularly discussed at senior leadership meetings.

But what is surprising, given the EU AI Act and other regulations, is where the real push for RAI is coming from. The prevailing wisdom has held that comprehensive AI regulation—aimed at emerging risks and backed by substantial penalties for noncompliance—would be the primary driver of action. Yet the survey finds the strongest push is coming from closer to home: internal sources, such as corporate boards and customer feedback. When asked who is primarily holding their organization accountable for its use of AI, 87% of respondents cited internal entities, while just 7.5% pointed to external regulators. One explanation may be the especially visible nature of GenAI failures, which makes the risks unmistakably real and visceral. The AI Incident Database, which tracks harms and near harms linked to AI systems, reported a 51% year-over-year increase in incidents in 2025. And those are only the publicized examples; the true scale of failures is likely even greater.

For many organizations, the pressure to act has translated into speed at the expense of depth. Nearly a quarter of companies—what we call the “scale first” group—are prioritizing fast rollouts, focusing on basic RAI elements such as policies and training, rather than a more systematic RAI program. (See the exhibit.)

Nearly half of companies have yet to embrace the more substantive and crucial elements of responsible AI

Systematic RAI programs incorporate governance, monitoring, tools, and change management. This is the harder stuff, but it operationalizes RAI, making it essential. Nearly a third (30%) of companies seem to agree. These organizations are pursuing a depth-and-breadth-first strategy, investing in a deeper, technically rigorous foundation for RAI—even if it means moving more slowly to enterprise scale. But 30% isn’t nearly enough.

A Broader, Brawnier, Better RAI

To achieve meaningful RAI, companies must move beyond superficial measures and invest in comprehensive frameworks. Six essential elements underpin a mature and effective RAI strategy.

Inventory and Risk Assessment. Get the lay of the land. Understanding which AI systems you’re using and how they impact the organization is crucial. Start by thoroughly cataloging each AI application, detailing its purpose and usage, the people and processes it touches, the data it handles, and potential risks. With a clear map in hand, organizations are better equipped to anticipate and mitigate perils. This helps ensure that systems are aligned with current and future regulations and policies.
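
As a rough illustration of what one catalog entry might capture, the sketch below models an inventory record as a simple structured object. The field names and risk tiers are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative fields only)."""
    name: str                     # e.g., "customer-support-chatbot"
    purpose: str                  # what the system is used for
    owner: str                    # accountable business unit or team
    data_categories: list[str] = field(default_factory=list)       # e.g., ["PII", "order history"]
    affected_processes: list[str] = field(default_factory=list)    # people and processes it touches
    identified_risks: list[str] = field(default_factory=list)
    risk_level: RiskLevel = RiskLevel.MEDIUM
    applicable_regulations: list[str] = field(default_factory=list)  # e.g., ["EU AI Act"]


# Hypothetical example entry
chatbot = AISystemRecord(
    name="customer-support-chatbot",
    purpose="Answer routine customer questions",
    owner="Customer Service",
    data_categories=["PII", "order history"],
    identified_risks=["hallucinated policy details"],
    risk_level=RiskLevel.HIGH,
    applicable_regulations=["EU AI Act"],
)
```

Even a lightweight record like this gives risk, legal, and technical teams a shared reference point when regulations or policies change.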

Robust Governance. Establish clear rules of the road. Effective RAI requires structures that everyone in the organization understands and embraces. Define policies, assign responsibilities, and set up transparent reporting channels. One best practice is to identify a senior executive to lead RAI implementation, coordinating a cross-functional team that spans risk, compliance, ethics, legal, and technical functions, as well as individual business units. But bringing functions together means more than getting everyone on the same Zoom call. It also requires upskilling, so all sides share an understanding of the technology, risks, and regulatory expectations. This can help address a key obstacle to effective governance: functions such as legal and compliance often think in regulatory terms, while technical teams focus on models and algorithms. By championing these changes, leaders ensure that RAI becomes an integral part of the organizational culture rather than a peripheral concern.

Risk-Differentiated Review Processes. Make RAI efficient. RAI and risk teams are often perceived as slowing innovation, and lengthy or cumbersome oversight only reinforces that view. A risk-differentiated approach helps address this challenge, allowing low-risk uses to move quickly through streamlined review while concentrating time, expertise, and safeguards on AI systems that pose greater ethical, operational, or societal risks.
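
The tiering logic itself can be very simple. Below is a minimal sketch of a triage rule that routes a proposed use case to a review track; the tiers, criteria, and track names are illustrative assumptions, not a regulatory taxonomy.

```python
def review_track(risk_level: str, handles_personal_data: bool, is_customer_facing: bool) -> str:
    """Route a proposed AI use case to a review track.

    The criteria here are placeholders; a real program would anchor them
    to regulatory categories and the organization's own risk appetite.
    """
    if risk_level == "high" or handles_personal_data:
        return "full review"        # cross-functional committee, documented sign-off
    if risk_level == "medium" or is_customer_facing:
        return "standard review"    # checklist plus a designated expert reviewer
    return "fast track"             # self-certification against existing policy


# Example: an internal document-summarization tool with no personal data
print(review_track("low", handles_personal_data=False, is_customer_facing=False))  # fast track
```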

Responsible Software Development Life Cycle Controls. Show, don’t tell. RAI isn’t just about policies—it’s about practices embedded in every step of AI development. Implement robust controls across the software life cycle, from initial concept through deployment and beyond. Consider, too, how to build controls into the code of agentic systems, making them an integral part of the actual solution. Regularly assess data quality, conduct rigorous technical tests, ensure regulatory compliance, and provide structured, ongoing user training. Feedback loops are key, helping companies identify and promptly act upon emerging issues.
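
As one hypothetical illustration of controls embedded in the solution itself, an agentic workflow might wrap every tool call in a guardrail check before execution. The whitelist and checks below are deliberately simplified placeholders, not a recommended control set.

```python
ALLOWED_TOOLS = {"search_knowledge_base", "draft_email"}   # assumption: an approved tool whitelist


def guarded_tool_call(tool_name: str, arguments: dict, execute) -> dict:
    """Run an agent's tool call only if it passes basic controls.

    `execute` stands in for whatever callable actually performs the tool
    action; the checks are placeholders for an organization's real controls.
    """
    if tool_name not in ALLOWED_TOOLS:
        return {"status": "blocked", "reason": f"tool '{tool_name}' not approved"}
    if any("password" in str(value).lower() for value in arguments.values()):
        return {"status": "blocked", "reason": "arguments appear to contain credentials"}
    result = execute(tool_name, arguments)
    return {"status": "ok", "result": result}


# Example with a stub executor standing in for the real tool runtime
print(guarded_tool_call("draft_email", {"to": "client@example.com"}, lambda tool, args: "drafted"))
```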

Test and Evaluation (T&E). Marry machines with human judgment. GenAI’s inherent complexity and unpredictability mean that traditional testing methods won’t cut it. Organizations have an array of new tools at their disposal, such as BCG X’s GenAI Evaluator and ARTKIT, an open-source, automated red-teaming and testing toolkit that helps companies scale their T&E processes. But tools alone aren’t enough. Organizations that lead on RAI combine advanced automated tests with smartly designed human oversight. While automated systems can flag potential anomalies, human reviewers can critically evaluate the context, intentions, and subtleties of GenAI outputs.
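
Setting specific tools such as ARTKIT aside (the sketch below does not use its API), the division of labor can be expressed generically: automated checks screen every output, and anything flagged is queued for human review. The heuristics here are illustrative assumptions only.

```python
def automated_checks(prompt: str, output: str) -> list[str]:
    """Cheap automated screens; real programs would use far richer evaluators."""
    flags = []
    if not output.strip():
        flags.append("empty response")
    if any(term in output.lower() for term in ("guaranteed", "risk-free")):
        flags.append("possible overclaiming")
    if "as an ai" in output.lower():
        flags.append("meta disclaimer leaked into answer")
    return flags


def evaluate_batch(cases: list[tuple[str, str]]) -> list[dict]:
    """Run automated checks and route flagged outputs to human reviewers."""
    results = []
    for prompt, output in cases:
        flags = automated_checks(prompt, output)
        results.append({
            "prompt": prompt,
            "flags": flags,
            "needs_human_review": bool(flags),   # humans judge context, intent, and subtlety
        })
    return results
```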

Monitoring and Response Planning. Be vigilant. RAI is not a one-and-done task. Given the nondeterministic nature of GenAI output, residual risks will always remain, even with robust T&E. Continuously monitor systems—and complement that watchfulness with processes that encourage and facilitate reporting. This lets you catch and mitigate problems before they escalate. Also critical: establishing clear communication strategies for swiftly addressing any issues that arise. Effective response plans, including rapid stakeholder notification, corrective actions, and resilience measures, ensure that your business remains operational even when AI systems encounter challenges.
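
A minimal sketch of that monitoring loop, assuming incidents are already logged with a severity field, might look like the following; the thresholds and escalation wording are placeholders for an organization’s actual response plan.

```python
from collections import Counter

SEVERITY_THRESHOLDS = {"critical": 1, "major": 3, "minor": 10}   # assumed alerting thresholds


def check_incident_log(incidents: list[dict]) -> list[str]:
    """Compare recent incident counts against thresholds and return alerts.

    Each incident dict is assumed to carry a 'severity' key; in practice the
    alerts would feed an on-call rotation and a stakeholder-notification runbook.
    """
    counts = Counter(incident["severity"] for incident in incidents)
    alerts = []
    for severity, threshold in SEVERITY_THRESHOLDS.items():
        if counts.get(severity, 0) >= threshold:
            alerts.append(f"{severity}: {counts[severity]} incident(s) in window, escalate per response plan")
    return alerts


# Example over a hypothetical reporting window
recent = [{"severity": "critical"}, {"severity": "minor"}]
print(check_incident_log(recent))
```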


By scaling surface-level RAI, some companies are stressing quick wins over real resilience. But in the GenAI era, sustainable success—and enduring trust—calls for a different approach. RAI programs must evolve from principle-based beginnings into deeply embedded, practical, and mature frameworks. Organizations that make the investment will be best positioned to scale their GenAI initiatives effectively and safely. They’ll minimize risks like disruption and backlash, fuel opportunities for innovation and growth, and help GenAI live up to its billing.