Governments and educators face a huge mission: prepare youth and adult learners for the jobs of the future—a future that will unquestionably be shaped by the rapid development and adoption of emerging technologies such as AI. But the biggest constraint is not technology. A lack of institutional alignment across ministries, education providers, employers, and funders is slowing the translation of AI strategy into learning and economic outcomes at scale.
That is the central finding from BCG’s work supporting the inaugural Bett Ministerial Symposium in January 2026, where we discussed the evolution of teaching and learning in the age of AI with education ministers, policy influencers, and edtech innovators from 54 countries. Our agenda focused on a pair of foundational questions: what will drive successful digital transformation in education and workforce development, and how can we address and avoid the most common barriers to effective implementation?
The private sector has made its position on the future of AI in the workplace clear, as corporations plan to double their AI spending this year and elevate AI strategy to a top C-suite priority. With shifting labor market requirements and heightened expectations, historically slower-to-move education systems are now under pressure to act on a similar scale.
This report explores the challenges facing these systems and the solutions playing out around the world. The case study sidebars that appear throughout this report consist of information and insights drawn from our interviews, from BCG research, and from online sources. We also highlight some direct quotes in the pages that follow.
The stakes are high. Education is the primary long-term enabler of countries’ economic, social, and political development. However, technological changes are outpacing institutions’ capacity to respond.
What separates the leaders from the rest is disciplined progress on four fronts: setting a clear, industry-connected ambition; investing in people; dismantling structural barriers to adoption; and scaling innovation beyond pilots.
What We Heard at Bett—and Worldwide
Countries vary significantly in their approach to both the “what” and the “how”: what they prepare learners to do and how they alter practices around AI and tech-enabled learning.
Everyone is doing something to address teaching methods—the “how”—such as by pushing out new tools to educators, changing how teachers assess students, and funding experimentation. But these efforts vary widely in boldness and consistency. Notably, an education system’s engagement with new technologies reflects the country’s culture, governance, and attitude toward digital adoption.
We also observed that countries at the forefront of innovation in tech education are clearer than others about what they teach and why. They connect teaching principles to national ambitions, such as the aspiration to be a global leader in quantum computing or information technology.
On the basis of our conversations at Bett and our work with education systems worldwide, we have defined four broad stages of AI-in-education maturity:
- Foundational. Systems are investing in basic digital infrastructure, such as connectivity, devices, and introductory AI literacy. Policy frameworks are nascent. Experimentation is limited and largely uncoordinated.
- Experimenting. Pilots are underway across classrooms, districts, or institutions. Early results are promising but isolated. Systems are still defining governance and funding models. Scaling remains elusive.
- Scaling. National frameworks are in place. Educator capability-building is systematic. Evidence of impact is accumulating, though consistency across regions and learner groups remains a challenge.
- Systemic. Systems have embedded AI across the full learning life cycle, from early education through workforce development, and they link funding models to outcomes. Public-private alignment is strong. The system learns and adapts continuously.
Most countries represented at Bett sit in the foundational or early experimenting stages. Only a few—including Estonia, Singapore, Kazakhstan, and South Korea—are making credible progress toward scaling. The case studies that follow illustrate what meaningful progress looks like at different stages, not as a linear path, but as a set of hard-won advances that others can learn from.
What to Teach About Tech Transformation
Education must keep pace with ongoing technological change to ensure a resilient tomorrow.
As technology changes, durable human skills become more critical. A clear consensus emerged across the countries represented at the Bett Ministerial Symposium: the most future-ready learners are not those with the deepest technical skills, but those who combine foundational AI literacy with distinctly human capabilities that technology cannot replicate, such as critical thinking, resilience, communication, curiosity, and ethical judgment. The goal is to equip students with the skill sets and mindsets they will need in a human-led, AI-amplified world, where new technologies emerge constantly.
The education model should be redesigned to teach our students not only hard skills, but life skills. We need to equip them with the mindset, resilience, and adaptability required to thrive in this fast-changing world.
Education needs to focus on the application of skills. The private sector has an essential role in curriculum design, not as a downstream consumer of graduates, but as an upstream co-designer of the skills pipeline. (See “Connecting Instruction to Industry.”) Working together, stakeholders across the education ecosystem can identify the competencies that industry and society will need, create structured opportunities for learners to apply them, and incorporate mechanisms to anticipate how demand will evolve. Although education leaders are eager to activate public-private partnerships, the engagement of employers and technology partners in the design process remains uneven.
Connecting Instruction to Industry
Since 1998, the program has reached more than 30,000 students, and today it engages 70 schools in six European countries annually. Its AI-focused workshops provide clear guardrails, responsible AI workflows, structured approaches to feedback, and hands-on application formats. The experience shows that sustainable AI adoption depends not only on technology, but also on long-term partnerships and trusted collaboration between schools and industry.
How to Teach Technology at Scale
Education systems are being asked to do more than ever, with fewer teachers, shrinking budgets, and accelerating skills demand. Capacity is limited, yet scaling is critical.
AI adoption is more than a tool rollout. At the Bett UK 2026 edtech exposition, we encountered a vast market of available tools and applications: productivity ecosystems, assessment platforms, classroom workflow modules, assistive technology, and immersive learning environments. The abundance is both exciting and overwhelming. Practitioners consistently called for clearer guidance on what works, at what level of implementation risk, in their specific contexts.
Systems will have to adapt end-to-end. Pedagogy is just one dynamic component in this AI era, and systems that treat it as the only one will fall short. Assessment practices, governance frameworks, learner accessibility, educator capability, and data infrastructure all need to evolve in parallel, within a framework where safety and accessibility are non-negotiable design principles. National leaders may consider not just how planners will build top-down frameworks, but how educators who are stretched, skeptical, or under-resourced will implement them. (See “Redesigning Education End-to-End for the AI Era: South Korea.”)
Redesigning Education End-to-End for the AI Era: South Korea
To achieve this shift, South Korea approaches digital education not as a simple technology rollout but as a comprehensive system transformation. Part of this vision involves integrating AI capabilities into assessment practices, achievement analysis, administrative automation, and curriculum development.
A national ecosystem coordinates implementation. The Ministry of Education sets strategy and policy, while affiliated agencies carry out specialized functions. The Korea Education and Research Information Service (KERIS) manages digital infrastructure and education data systems; the Korea Institute for Curriculum and Evaluation (KICE) develops curriculum and assessment standards; and the Korean Educational Development Institute (KEDI) conducts research and evaluation. Together with regional education offices, KERIS, KICE, and KEDI align through joint policy planning, pilot projects, field feedback, and continuous performance reviews. Collaboration with teachers is just as important.
Putting AI in the Right National Context
Each country operates within its own cultural context, but the end goal is shared: to prepare students and the workforce for a changing world.
Where a country sits on the AI education maturity spectrum and how it progresses are inseparable from its governance model, cultural context, and digital history. There is no single right path.
The starting point matters less than the direction of travel. Countries come to the AI table with different priorities, approaches to guiding citizens, and levels of digital maturity. Some nations are investing to upgrade foundational technical infrastructure in schools; others are already at the forefront of digital innovation and platform interoperability. Some favor consistency in implementation and prescribe technologies for educators to use; others focus on offering a guiding framework and encouraging individual experimentation.
But whatever their context may be, leaders told us that inaction is not an option. Even as they navigate different cultural milieus, they encourage swift decision making to keep pace with the relentless progress of technology.
Evidence building is the compass. With such rapid change, longitudinal evidence regarding which innovations actually work is still emerging—and leaders, who have been flying blind on this crucial question, are hungry for information. Stakeholders told us that they want to invest in rapid-cycle measurement to drive decision making. In addition, they want to find new ways to fund fit-for-purpose investments and curate global communities of practice to share implementation nuances.
Different governance structures—from centralized systems to teacher-led experimentation—shape how countries pursue AI-enabled education. (See “Using Context to Drive Strategy: New Brunswick (Canada) and France.”)
Using Context to Drive Strategy: New Brunswick (Canada) and France
New Brunswick
Education in Canada is a provincial responsibility. In New Brunswick, teachers have considerable autonomy to experiment with AI tools, test what works in their classrooms, and adapt practices to their students’ needs. The province co-developed its AI framework with educators, and it allows districts to set their own standards within clear guardrails. This approach recognizes that, if government prescribes specific tools, educators may resist or comply superficially. Accordingly, the approach avoids top-down mandates that might slow adoption.
France
In contrast, France has deployed a top-down AI strategy that reflects the country’s highly centralized education system, in which inclusion is a core principle. National policies apply uniformly across all schools, enabling the education ministry to promote AI literacy for students via a prescribed platform. A national AI usage framework ensures pedagogical consistency on the topics of ethics, data privacy, and critical thinking. In this governance model, central direction prevents regional inequality and allows France to roll out its education strategy quickly at scale—although not without resistance from some groups.
The Challenges Facing Stakeholders
The opportunity presented by AI education is real, but so are the structural headwinds. Over half of all employees globally will need to reskill or upskill by 2027. Many skills expire faster than they can be replaced: the half-life of technology-related skills has shrunk from roughly five years to two and a half. Education systems worldwide face a projected shortage of 44 million teachers by 2030, even as public budgets tighten. And BCG research shows that most organizations still lag in establishing responsible AI guardrails, posing a risk that increases when the users are students and the stakes are developmental.
The leaders we spoke with describe a near-universal set of challenges in this area. (See the exhibit.) Such obstacles are not excuses for inaction; instead, they call for planners to devise bold implementation strategies to overcome specific barriers.
The Systems That Inspire Us
Both the barriers and progress against them are well documented. Across the countries represented at Bett, leaders are addressing impediments to AI-enabled education in four distinct ways: setting a clear ambition, investing in people, dismantling structural barriers to adoption, and innovating at scale. No country has perfected any of these approaches, but the systems that inspire us are moving deliberately, learning from early missteps, and building toward durable success.
Setting a Clear Ambition: The “Why” of the Work
There’s no point having a fundamental policy if only I as the minister believe in it; you need the whole system aligned and moving together.
Rolling out AI tools is a necessary but insufficient step. Countries on the path to durable progress start by asking, “What kind of future are we preparing learners for, and what does that demand of our education system today?” Answering those questions clearly, publicly, and in alignment with national economic priorities distinguishes systems that are undergoing genuine transformation from those that are still finding their footing.
Setting ambition is not an abstract exercise. It entails defining outcomes, documenting talent and skill gaps, building multiyear roadmaps, and working toward alignment across ministers, government departments, and private partners. (See “Aligning Vision with Government Priorities: Saudi Arabia and California (US).”) In the private sector, transformation hinges on C-suite commitment. In the public sector, the equivalent element is coherence across funding, procurement, implementation, and governance functions. Achieving that coherence is hard and rarely complete, but systems that actively pursue it move faster than those that do not.
Aligning Vision with Government Priorities: Saudi Arabia and California (US)
Saudi Arabia
Saudi Arabia’s transformation begins with a clearly articulated national vision. The government is working to align the nation’s learning and training ecosystem with Vision 2030, with the ultimate goal of positioning education, digital capability, and human capital development as part of the Kingdom’s broader economic ambitions.
To implement this vision, the government has advanced collaboration across multiple departments. National AI efforts are supported by the Saudi Data & AI Authority, with the Ministry of Education, the National Curriculum Center, and other government stakeholders contributing to curriculum development, teacher capability building, and broader digital skilling efforts. Recent initiatives have included the introduction of AI-related curriculum content in public education, teacher upskilling programs in AI and data literacy, and wider national efforts to build AI awareness and capabilities across the population. These include SAMAI, a national initiative that has trained more than a million citizens in AI-related competencies.
California
In the US, the state of California defined an education vision centered on three core ambitions: ensuring that 70% of working-age Californians will have earned a postsecondary degree or attained relevant certification by 2030; embedding access to higher education as a measurable systemwide priority; and aligning higher education with workforce and economic mobility goals in high-demand sectors.
To execute this strategy, the state created a framework that defined priorities and necessary progress via the Governor’s Council for Career Education and established multiyear agreements for institutions to work toward shared student success goals, with regional flexibility in implementation. The agreements also linked public funding to outcomes and performance metrics such as improved enrollment, completion, and graduation rates; greater affordability; reduction of the existing gap in access to higher education; and better workforce preparation outcomes.
Investing in People: The “Who” of the Work
The real constraint in AI transformation is not the technology; it is people’s capacity to change and the system’s ability to support them.
AI-enabled transformation is primarily a people and change management challenge. This is clear in the business world, where BCG has found that leading organizations consistently follow the 10-20-70 principle: they focus 10% of time and energy on the algorithm; 20% on data and technology; and 70% on people, processes, and cultural transformation.
The education sector is no different. AI evokes strong emotions—excitement, reluctance, caution, urgency—and both learners and teachers need to be convinced of the technology’s value to them. The 2026 Digital Education Outlook published by the Organization for Economic Cooperation and Development (OECD) notes that, despite positive data points, educators fear that AI could undermine their autonomy and compromise safe learning. And even though many students have enthusiastically embraced the technology, researchers have found notable differences in adoption across ages and geographies—as well as shared concerns about the accuracy, reliability, and ethical implications of AI.
A significant proportion of students say that they feel underequipped for an AI-enabled workplace, and a relatively small percentage of teachers feel confident about their ability to guide students. The constraint is not primarily a matter of technical training; it also reflects limits in motivation, belief, and institutional support. BCG’s experience is that educators who understand why AI matters to their students’ futures and who feel genuinely supported in experimenting are far more effective than those who receive tool training alone. (See “Prioritizing People: São Paulo (Brazil) and Singapore.”)
Prioritizing People: São Paulo (Brazil) and Singapore
São Paulo
Brazil’s most populous state, São Paulo, designed its AI strategy with a focus on augmenting teacher capacity to support rigorous instruction. São Paulo sought to elevate students’ writing capabilities by encouraging more open-ended responses and essays—in-class and as homework—and to advance critical thinking through back-and-forth discourse. The critical bottleneck is the time teachers spend grading dozens of exercises and interacting one-on-one with students.
By deploying ChatGPT in a structured way, São Paulo has enabled educators to grade written responses quickly amid a surge in volume, as students now write roughly ten essays a year. The government has also used competitive procurement of AI tools to establish cost-effective partnerships with providers such as Open English, which allows students to practice their foreign language skills with AI in a secure, closed environment. Despite lingering concerns about appropriate AI use by students, São Paulo focuses on implementation details, from enhancing tool reliability and user experience to deploying anti-cheating mechanisms, in order to build educator confidence and competence.
Singapore
Singapore’s approach to AI skills transformation demonstrates how governments can design for scale across the entire learning life cycle, involving industry in curriculum design and aligning educational funding with workforce needs. Through the Ministry of Education’s EdTech Masterplan 2030, Singapore deliberately links early education to lifelong learning. The country embeds AI-infused learning—personalization, digital literacy, and responsible AI use—in the nationwide Student Learning Space. At the same time, initiatives such as the AI Singapore Student Outreach Program progressively build students’ AI capability from primary school through pre-university levels.
This architecture extends into adulthood through the National AI Strategy 2.0 and the SkillsFuture ecosystem, which provide structured pathways for midcareer and continued learning. One example is the Rapid & Immersive Skill Enhancement (RISE) program, which equips midcareer professionals and career switchers with high-demand digital, data, and AI-related skills through hands-on training, industry projects, and career support, offering substantial subsidies to lower barriers to participation.
Dismantling Barriers to Adoption: Freeing Infrastructure, Funding, and Policy
Overcoming policy-, budget-, and infrastructure-related barriers to AI education takes ambition and deliberate action.
Infrastructure is a foundational need—but stakeholders can’t wait for it to catch up. The reality is harsh: without reliable infrastructure, efforts to teach AI can falter. Many markets struggle with connectivity, energy demands, and device availability. Locking in investments in devices, data centers, cloud services, and energy supplies will remain an ongoing agenda item for policymakers.
Waiting for perfect conditions before committing to take action is itself a strategic choice, and it can be a costly one. Ukraine is advancing its STEM-oriented AI curriculum despite being engaged in active military conflict, as it prepares its workers to rebuild the country’s physical infrastructure. Several African countries, including Rwanda, Kenya, Senegal, and Uganda, are expanding school connectivity and pairing it with digital learning platforms—some of them zero-rated, offline-capable, or mobile-accessible—to reach underserved learners. (See “Pushing Beyond Infrastructure Limits: Ghana.”)
Pushing Beyond Infrastructure Limits: Ghana
The Ministry of Education is overseeing the rollout, coordinating with key national education bodies in a partnership-led operating model. Funding and technical implementation support come from the government, development finance partners, and the private sector. Sustained progress will require ongoing interaction with political leadership, long-term investment commitments, and blended funding mechanisms—but momentum is heading in the right direction.
AI innovation needs funding innovation; old models won’t work. Many education systems still think of technology as a one-time purchase rather than as a long-term investment. But a comprehensive assessment of the full life-cycle costs of AI would layer in infrastructure, data architecture, teacher upskilling, maintenance, cybersecurity, and continuous platform upgrades on top of the core technology.
Cost transparency is necessary, but systems could also use smarter funding strategies tied to intended outcomes. At Bett, leaders described mounting fiscal pressure due to combinations of shrinking public investment in education, problematic debt servicing costs, and declining development aid. Against this backdrop, policymakers and multilateral organizations such as UNESCO are promoting exploration of new models, including results-based financing, income-contingent loans, and consortium purchasing.
Funding transformation requires more than fiscal creativity. It calls for a shift in the way systems define and measure success. (See “Why Old Ways of Financing Won’t Work.”) Leaders who can articulate clear outcomes, build data systems to track them, and structure partnerships around shared accountability will unlock financing models that input-based budgeting never could. The technical instruments exist. The limiting factor is institutional readiness to use them.
Why Old Ways of Financing Won’t Work
- Systems lack multiyear funding certainty. This shortcoming makes it difficult to sustain infrastructure and AI investments beyond pilots.
- Administrators rarely explore innovative financing models. Possibilities include results-based financing, leasing, blended finance, advance market commitments, and consortium purchasing.
- Outcomes-based financing is risky and difficult to structure. The culprits here include gaps in baseline data, unclear outcome definitions, and limited measurement capacity.
- Fragmented, slow administrative cycles hamper agile digital transformation. Rigid annual budgeting, siloed procurement processes, and misaligned ministry timelines contribute to this problem.
- Procurement without implementation support does not translate into impact. Unless the system builds teacher training, time, and system capacity into the program, utilization tends to remain low and financing logic breaks down.
- Cross-sector partnerships are poorly structured and defined. Missing frameworks for roles, risk sharing, accountability, and ownership can lead to confusion over responsibilities, faulty communication, and weak administration.
Policy ought to protect, not encumber. At Bett, we heard a call for targeted policies to address AI usage, with guardrails around data privacy, classroom integrity, and fairness. Trust is paramount. The goal is for students to engage in learning without taking intellectual shortcuts or being subjected to inappropriate oversight or profiling. When appropriately managed, access to and engagement with technology will not reinforce racial, socioeconomic, gender, or disability-based bias.
Multilateral organizations, including the OECD, UNESCO, and UNICEF, play a key role in shaping AI policy, bridging the gap between rapid technological change and slower-to-move national regulation. For example, UNESCO launched its Recommendation on the Ethics of Artificial Intelligence in 2021, and since then 193 member states have adopted it as the global standard for AI governance. The Recommendation emphasizes human rights, transparency, accountability, fairness, and data protection, while allowing innovation that upholds public values.
Meanwhile, countries and regions are taking steps to craft their own policies, with input from local stakeholders. (See “Putting Policy into Action: Estonia.”)
Putting Policy into Action: Estonia
Estonia is the first nation in the world to conduct all government services, including voting, entirely online. Students and educators see technology constantly, in daily life and at school. Estonia’s advantage lies in its cumulative capability, built over decades, encompassing infrastructure, interoperability, digital trust, and alignment between education and economic ambition.
Building on this legacy and championed by leaders all the way up to the president, Estonia’s newest AI program, LEAP, puts AI tools directly in the hands of students and teachers, along with training in how to use the technology safely and effectively. The program ensures that teachers understand how AI works, what it can and cannot do, and where related risks lie in classroom settings. Governance acts as an enabler of responsible use rather than as a brake on innovation, signaling awareness that overregulating AI could deter adoption in schools. Jointly funded by government and private partners, the program has already reached 20,000 upper secondary students and 3,000 teachers.
Innovating at Scale: From Pilots to Critical Mass
There are many successful pilots, but scaling what works is far more complex than creating it. We don’t lack examples of innovation—we lack the ability to turn them into coherent, system-wide transformation.
Pilots are easy; scaling is hard. Education systems today are peppered with islands of innovation that deliver compelling results at the classroom, district, or state level but rarely translate into systemic change. BCG has found that although over half of the organizations it surveyed report experimenting with AI in limited areas, only 8% are doing so at scale. The public sector lags even more, despite facing some of the highest perceived disruption risk from generative AI.
The countries that are making the most progress in this area share a common orientation: they treat scale not as the final stage of innovation, but as a design principle from the start. (See “Scaling Success Stories: Kazakhstan and the US.”) They identify barriers specific to their context and digital maturity—connectivity, funding, policy climate, educator readiness—and build implementation architectures that naturally accommodate growth. Critically, they do not wait for certainty. They combine rapid iteration with clear guardrails, they treat early failures as learning cycles, and they move faster than the technology itself.
Scaling Success Stories: Kazakhstan and the US
Kazakhstan
Kazakhstan has set a national ambition to become a regional research hub and leading academic destination, with AI positioned at the strategic center. To achieve this ambition, Kazakhstan has taken several steps to scale AI adoption across the whole country. Through the AI Sana initiative, Kazakhstan made AI a compulsory subject in the curriculum, reaching over 95% of students. The country also launched initiatives to extend AI upskilling to the broader workforce, while partnering with private-sector providers—including a global hyperscaler—to equip educators and students with training programs, licenses, and computing resources to build and train their own AI models and agents. Within a short period, these efforts have expanded across K-12 education, higher education, vocational training, and incumbent workforce development, reflecting a whole-of-society approach to building AI capabilities.
Kazakhstan’s leaders emphasize that the critical enabler has been the adoption of speed as a governance principle, and they cite indecision as the greatest risk. The government is experimenting with a venture-capital-style decision model, deploying policy, training, and tools in parallel rather than sequentially and treating early failures as learning cycles. The government moves quickly in forming partnerships with international institutions and private companies, recognizing that collaboration windows can close as fast as they open.
US
Several US states are beginning to embed AI-relevant skills into their workforce plans. (A case in point is Michigan’s AI and the Workforce Plan, a recent addendum to its 2024 Statewide Workforce Plan.) They are also starting to implement financial incentives to encourage employers—particularly small and medium-size enterprises—to invest in AI and technology training for their employees. Examples include Michigan’s Going PRO Talent Fund, Ohio’s TechCred program, and Indiana’s Power Up initiative. These efforts provide participants with funding awards, training credentials, and tax credits for completing skilling programs.
As states translate pilot programs into sustained, system-wide workforce transformation, philanthropy has been a welcome partner, encouraging innovation around funding and incentivizing positive workforce outcomes. One prominent example is the Lilly Endowment’s recent $500 million initiative in Indiana, which aims to support higher education institutions in expanding career-connected programs and developing initiatives that prepare students for a future workplace shaped by AI. These efforts often extend across state lines and industries, creating shared platforms and coordinated strategies to accelerate AI workforce development at scale on a national level.
Bringing It All Together
Dozens of conversations later, our perspective on what drives successful digital transformation in education and workforce development has evolved.
Ambition is not what separates education systems that are transforming from those that are just beginning their journey. Nearly every minister we spoke with has a vision. Instead, the critical difference is disciplined translation of that vision into aligned action across institutions, funding streams, and people.
The levers that we have described are interdependent, which means that progress on each reinforces the others. Systems that invest in people without introducing structural reforms produce capable educators working inside capacity-constrained institutions. Those that pursue structural reforms without adopting a scaling mindset produce pilots that never reach a broader population of beneficiaries. The levers rarely move in perfect sequence, but consistently neglecting any of them undermines the rest.
For leaders who want to determine where to focus next, the diagnostic questions are straightforward: where are you on the AI-in-education maturity curve, and at which stage does your system’s progress currently stall? The answer will point directly to the lever that demands priority attention.
The governments and education systems that are making the most progress share one further trait. They have stopped waiting for certainty before acting. There is no single model for AI-enabled education, but hesitation is a guaranteed path to failure.
Moving now is a big part of the answer to Bett’s key questions—what will drive successful digital transformation in education and workforce development, and how can we address and avoid the most common barriers to effective implementation? The window for decisive action is open. We are eager to see the global education community document progress, expand bright spots, and share emerging solutions in the months to come.
The authors thank Anna Bryant, Addie Chamberlain, Faisal Faraz, Kevin Fu, Amanpreet Grewal, and Dara Kovacheva of BCG and Ed Clark, Emily Colyer, Alexandra Constantine, and Anna Wood of Bett for their valuable contributions to this report.