Senior Solution Delivery Manager, Responsible AI
Abhishek Gupta is a senior responsible AI leader and expert at Boston Consulting Group (BCG), where he works with BCG's chief AI ethics officer to advise clients and build end-to-end responsible AI programs. He is also the Founder and Principal Researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy. As Chair of the Standards Working Group at the Green Software Foundation, he leads the development of a software carbon intensity standard aimed at comparable and interoperable measurement of the environmental impacts of AI systems. Abhishek also holds the prestigious BCG Henderson Institute Fellowship on Augmented Collective Intelligence, where his research focuses on drawing out complementary strengths from hybrid collectives of human and machine actors to enable broader, faster, and deeper exploration and exploitation of problem and solution spaces.
Abhishek’s work focuses on applied technical, policy, and organizational measures for building ethical, safe, and inclusive AI systems and organizations—especially the operationalization and deployment of responsible AI within organizations, and the assessment and mitigation of the environmental impact of these systems. He has advised national governments, multilateral organizations, academic institutions, and corporations around the globe. His community-building work has been recognized by governments in North America, Europe, Asia, and Oceania. He is a highly sought-after speaker who has given talks at the United Nations, the European Parliament, the G7 AI Summit, TEDx, Harvard Business School, and the Kellogg School of Management, among others. His writing on responsible AI has been featured in the Wall Street Journal, Forbes, MIT Technology Review, Protocol, Fortune, and VentureBeat, among others.
Abhishek is an alumnus of the US State Department International Visitor Leadership Program, representing Canada, and received the Gradient Writing Prize 2021 for his work on The Imperative for Sustainable AI Systems. His research has been published in leading AI journals and presented at top-tier ML conferences such as NeurIPS, ICML, and IJCAI. He is the author of the widely read State of AI Ethics Report and The AI Ethics Brief. He previously worked at Microsoft as a Machine Learning Engineer in Commercial Software Engineering (CSE), where his team helped solve the toughest technical challenges faced by Microsoft's biggest customers. He also served on the CSE Responsible AI Board at Microsoft.
Virtual assistants powered by large language models are about to come between traditional companies and their customers, forcing executives to make tough choices sooner than expected.
Proposals for regulating AI are picking up speed, yet organizational readiness lags behind. With a responsible approach, companies can ensure compliance—and create value.
Everyone from customers to investors wants AI done right. CEOs who take the lead in implementing Responsible AI can better manage the technology’s many risks.
This powerful technology has the potential to disrupt nearly every industry, promising both competitive advantage and creative destruction. Here’s how to strategize for that future.
Your digital infrastructure probably generates more carbon emissions than you think—and AI may make it worse. It’s time for sustainable software.
Get a jump on new requirements, including the upcoming European Union (EU) AI Act, by adopting BCG’s Responsible AI Leader Blueprint.
A new study by BCG and MIT Sloan Management Review finds that treating responsible AI strictly as a way to avoid AI failures is incomplete. Responsible AI leaders take a broader, more strategic approach that generates value for the organization and the world around them.