People Strategy in the Age of Generative AI

The global leader of BCG’s Behavioral Science Lab talks about why companies should focus on tasks more than on jobs, how they can train their managers to manage AI, the importance of ethical thinking and responsible action, and more.

BCG: Technology has been displacing jobs and creating new ones since the Industrial Revolution. Does generative artificial intelligence (AI) represent a continuation of that trend or a different type of change?

Julia Dhar: It will surely boost the productivity we achieve from humans and machines working together. Whether those gains are a step function or exponential remains to be seen. Instead of focusing on jobs, I urge organizations to focus on the tasks that AI may have the proficiency to perform. AI will completely replace some tasks and augment others, such as basic research and preliminary analysis. For example, a machine can perform a task that a human then validates, or a machine can refine and challenge a human’s creative thinking.

In the generative AI era, why should companies focus on tasks and not just jobs?

It’s seductive to ask what jobs are being replaced. I’m not convinced that it’s helpful to ask that question. It means that we are trying to identify future opportunities given today’s job constraints—such as the way people are performing those jobs and the collaborations that people have with other human beings or technologies. It is more useful to ask what tasks are being performed and whether they are susceptible to replacement or augmentation.

First, you get a much more precise answer about the nature of future opportunities. Can recruitment, for example, be replaced by AI? You might say, “No. A human wants to get hired by a human.” But can the process of application intake—the sorting of applicants and screening for biases to ensure a diverse candidate pool—be performed by generative AI? Surely, yes. That would free up talent acquisition colleagues to do the human-to-human interactions that help candidates understand and accept a new job.
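To make the application-intake example concrete, here is a minimal sketch of how generative AI might triage applications while screening out direct demographic signals. Everything in it is hypothetical: the complete() helper stands in for whatever LLM client an organization has approved, and the redacted fields are illustrative, not a complete bias-mitigation strategy.

```python
import json

# Hypothetical helper: wire this to whatever LLM client your organization
# has approved. It should return the model's raw text reply for a prompt.
def complete(prompt: str) -> str:
    raise NotImplementedError("connect to your LLM provider here")

# Fields stripped before scoring so the model never sees direct demographic
# signals (an illustrative list, not a full bias-mitigation strategy).
REDACTED_FIELDS = {"name", "address", "email", "birth_year", "photo_url"}

def redact(application: dict) -> dict:
    # Drop any field on the redaction list before the model sees the text.
    return {k: v for k, v in application.items() if k not in REDACTED_FIELDS}

def triage(application: dict, requirements: list[str]) -> dict:
    """Score a redacted application against the role's stated requirements
    and return structured JSON for a human recruiter to review."""
    prompt = (
        "Score this job application against each requirement from 0 to 5.\n"
        'Reply with JSON only: {"scores": {requirement: int}, "summary": str}\n'
        f"Requirements: {requirements}\n"
        f"Application: {json.dumps(redact(application))}"
    )
    return json.loads(complete(prompt))
```

The key design choice in a sketch like this is that the model returns structured scores for a human recruiter to review; it never makes the hiring decision itself, which preserves the human-to-human interactions described above.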

Second, focusing on tasks psychologically frees up space in an organization and lets people participate in the process of defining how they want to work with AI. It is threatening for people to imagine parts of their job or all of it being replaced by generative AI. Even if change opens up new and exciting opportunities, most humans fear change. When we begin to focus on tasks, we reduce the threat to a person’s identity as a professional and, instead, can begin to talk about the work itself.

Finally, we can have real conversations about work that is low value, not very enjoyable, and not very safe (such as certain jobs in manufacturing). One benefit of generative AI is that it can perform work that people don’t like to do and that might not be safe for them to do.

GPT-4 performs far better than most humans on standardized tests, such as the Graduate Record Examination and the Law School Admission Test. How should companies think about harnessing this book-smart intelligence to help humans solve messy real-world problems?

There are many problems in our world that have analytically correct answers. Those are the answers that generative AI, including GPT-4, produces, and that is how it outperforms most humans on standardized tests. Having this capability is good news because it allows us to assimilate information, find the analytically correct answer, and generate new questions we may not have imagined previously, all much more rapidly.

That’s exactly how companies should think about using generative AI—as the first step in an analytical process. But companies should also invite a much more wide-ranging conversation about creative solutions to problems. We should also remember that data is nothing without a story. Humans are narrative beings, and executives have an opportunity to create a story that connects the data to people and the organization’s purpose.

How do you allow employees to experiment with generative AI and make sure they operate within responsible AI guidelines without getting bogged down by bureaucracy?

As we think about the use of generative AI in companies, organizations, and society, human dignity and equity must be at the forefront. Those must be our overriding criteria, and they are being translated into responsible AI codes of conduct and other policy mechanisms inside organizations.

How, then, do we allow human creativity to play into that? We need to educate people about the potential uses and pitfalls of AI. That is the responsibility of HR and learning and development organizations, the leadership team, and technology leaders. We also need to provide employees with guardrails—specific parameters of acceptable and unacceptable use—and encourage creativity within them. These ground rules should be more specific than a code of conduct. They should cite productive uses of generative AI, make the black box more transparent, and show people, in part, how an answer was created.
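One way to make such guardrails concrete is to encode the ground rules as an explicit, inspectable policy rather than burying them in process documents. The Python sketch below is purely illustrative; the task names, content tags, and check_request function are all hypothetical, not any organization’s actual code of conduct.

```python
from dataclasses import dataclass, field

# Illustrative guardrail policy: every task name and content tag here is
# a made-up example standing in for an organization's real ground rules.
@dataclass
class UsagePolicy:
    allowed_tasks: set[str] = field(default_factory=lambda: {
        "summarize_public_research", "draft_internal_memo", "brainstorm_ideas",
    })
    blocked_content: set[str] = field(default_factory=lambda: {
        "client_pii", "unreleased_financials",
    })

def check_request(task: str, content_tags: set[str],
                  policy: UsagePolicy) -> tuple[bool, str]:
    """Return (allowed, reason) so every decision is explainable
    to the employee who made the request."""
    if task not in policy.allowed_tasks:
        return False, f"task '{task}' is not on the approved list"
    blocked = content_tags & policy.blocked_content
    if blocked:
        return False, f"request touches blocked content: {sorted(blocked)}"
    return True, "request is within the guardrails"

# Example: a prompt-routing tool checks the request before forwarding it
# to the model and logs the reason either way.
ok, reason = check_request("draft_internal_memo", {"public_data"}, UsagePolicy())
```

Returning a reason alongside every decision is what keeps the guardrails transparent: employees can see not just that a request was blocked, but why, which leaves room for creativity inside the boundaries.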

Finally, organizations have a huge opportunity to collaborate not only with researchers who can provide external oversight and accountability but also with foundations and civil society to push toward adopting healthy parameters of creativity. At BCG, we would call that the art of the possible.

Are ethical skills and awareness more important in a generative AI world?

Yes. Some answers, analytical processes, and recommendations will become less transparent and less readily auditable by humans. That’s unavoidable. That is partly the nature of machine learning. It is a feature, not a bug, of large language models and generative AI. The ability to think ethically and critically about the behavior of a machine that you can never fully understand is essential. Organizations need to teach employees how to think ethically and critically, how to create mechanisms for external oversight and scrutiny, and how to safely raise concerns and have those concerns addressed.

How do companies go about training managers to manage AI?

Managers need to get to know AI like a colleague. They need to appreciate what generative AI can do but recognize that it comes with complexities and limitations, like all of us. That’s the attitude and mindset that we would encourage for leaders. Managers need to get clear about the early tasks that generative AI can help with in their organization. And they need to think carefully about the human element as well. What might people find threatening and not so threatening?

We would invite executives to think about productivity. Some of the productivity gains from technology improvements in the US over the past 50 years, including email, laptops, cellphones, and smartphones, have been somewhat disappointing. We have not realized the labor productivity dividends we anticipated. Part of the reason is a lack of attention to the true sources of productivity gains, and a lack of clarity about the respective value of human intervention and technology intervention.

When you talk to business leaders about generative AI, what one or two questions do they ask the most?

Is it safe? Will it boost productivity?
