Managing Director & Partner
Chief AI Ethics Officer, BCG GAMMA
Steven Mills is the Global GAMMA Chief AI Ethics Officer at Boston Consulting Group. He is a member of the BCG Center for Digital Government where he is the global lead for artificial intelligence in the public sector. He is a core member of GAMMA and the Public Sector practice.
Since joining BCG, Steve has supported a wide range of private and public sector clients in health, finance, aerospace, social impact, technology, and defense. His technical leadership includes AI product development, implementing complex machine learning use cases, and providing decision support through large-scale modeling and simulation. Steve also brings commercial best practices into government, including designing AI and analytics strategies, developing AI workforces, and planning and executing AI training programs. He is a recognized expert in Responsible AI, helping a variety of public and private sector clients develop responsible AI strategies, implementation plans, tools, and training.
Steve is an invited member of the World Economic Forum’s Global AI Council, a group composed of ministers and heads of regulatory agencies, chief executives, and leading technical and civil society experts who provide strategic guidance and shape the direction of the World Economic Forum’s Centre for the Fourth Industrial Revolution. He is also a member of the Forum’s Responsible Use of Technology Working Group and the Center for a New American Security Task Force on AI in National Security.
Prior to joining BCG, Steve spent eight years at Booz Allen Hamilton where he served as the Director of Machine Intelligence and the Director of Booz Allen Futures, a business unit focused on exploring the atypical intersections between emerging technology and socioeconomic, environmental, and geopolitical trends. Prior to that, he worked as a forest planning/operations research analyst at FORSight Resources.
The discussion about artificial general intelligence is distracting people from the real risks we face today with generative AI.
Proposals for regulating AI are picking up speed, yet organizational readiness lags behind. With a responsible approach, companies can ensure compliance and create value.
Everyone from customers to investors wants AI done right. CEOs who take the lead in implementing Responsible AI can better manage the technology’s many risks.
Get a jump on new requirements, including the upcoming European Union (EU) AI Act, by adopting BCG’s Responsible AI Leader Blueprint.
A new study by BCG and MIT Sloan Management Review finds that treating responsible AI strictly as a way to avoid AI failures is incomplete. Responsible AI leaders take a broader, more strategic approach that generates value for the organization and the world around them.
If business wants to use AI at scale, adhering to the technical guidelines for responsible AI development isn’t enough. It must obtain society’s explicit approval to deploy the technology.
AI can help governments deliver smarter policies, enhance services, and operate more efficiently. To make implementation succeed, policymakers need to learn from the leaders.
AI is not without risk, but governments ignore it at their peril. “Responsible AI” can help officials make thoughtful policies that improve lives.
To earn the public’s support, government use of advanced analytics must include stakeholder input, proper controls, regular reviews, and contingency plans for lapses.
Writing as part of the World Economic Forum’s Pioneers of Change Summit, Steven Mills provides five tips for organizations joining the conversation about responsible AI.
Principles abound for socially responsible artificial intelligence. Here’s how to put them into action.