An open letter with prominent signatories calling for a temporary halt to the development of tools like ChatGPT has created a stir in the AI community and drawn the attention of policymakers. BCG Chief AI Ethics Officer Steven Mills explains some of the key implications for business:
  • Regardless of whether one thinks some of these tools went to market too quickly, to most people in the industry there’s no going back. The technology is out, and the race is on to build it. There are also major commercial and national competitiveness issues at stake.
  • For users of generative AI, continued real-world experimentation is important. The uses and hidden capabilities of generative AI models are not yet fully understood. Without this knowledge, it will be difficult to regulate these models and mitigate risk.
  • The private sector must engage in dialogue with policymakers to inform them of how safeguards can be implemented in ways that allow for both international and commercial competitiveness. It’s also critical for companies to enact responsible AI practices.
The overarching goal is to strike the right balance between AI experimentation and risk, so organizations can gain the full power of these technologies while avoiding the potential hazards.



The Growing Scrutiny of Generative AI

The Future of Life Institute created a stir in the artificial intelligence (AI) community on March 22 by releasing an open letter calling for a six-month halt in the development of generative AI models to allow for a more thorough study of the risks. Among the more than 20,000 signers, as of mid-April, are Tesla CEO Elon Musk and Apple cofounder Steve Wozniak. We discussed the significance and implications of the letter with BCG Chief AI Ethics Officer Steven Mills.

BCG: Why has the Future of Life Institute letter gotten so much attention?

Steven Mills: OpenAI’s recent release of ChatGPT, which can create original content in response to user questions or prompts, has generated incredible hype around generative AI. ChatGPT’s human-like interactions had already led some people to argue that we should pause and slow development. When this letter came out, with some noteworthy signatories, the topic grabbed even more attention.

Alarms over AI have been raised for years. Why did ChatGPT inspire such a strong reaction?

Part of it, I think, has to do with mistaken identity. Concerns over AI have traditionally been about artificial general intelligence, or AGI, systems that could someday have the general cognitive ability of humans—and may eventually far exceed them. Many people have conflated generative AI with AGI. There is a lot of skepticism that generative AI models like ChatGPT are putting us on a near-term path to AGI. Fundamentally, these are pattern recognition models—they are not learning facts and concepts that they then use to answer questions or carry out tasks. It turns out, however, that pattern recognition is an incredibly powerful ability. It enables models to perform countless tasks amazingly well. But this isn’t artificial general intelligence.
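
To make the “pattern recognition” point concrete, here is a toy sketch: a bigram model that predicts the next word purely from co-occurrence counts in its training text. This is a deliberately simplified illustration of next-token prediction, not a reflection of how ChatGPT is actually implemented:

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it predicts the next word purely from
# patterns observed in its training text, with no store of facts or concepts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<no pattern seen>"

print(predict_next("the"))   # "cat" -- the dominant observed pattern
print(predict_next("mat"))   # "the" -- pattern matching, not understanding
print(predict_next("fish"))  # "<no pattern seen>" -- no fallback reasoning
```

Scaled up by many orders of magnitude and implemented with neural networks rather than counts, this same predict-the-next-token idea is what gives generative AI its surprising breadth.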

In fact, the discussion about AGI is distracting people from the real risks we face today with generative AI, like the disclosure of sensitive information and questions about intellectual property ownership.

What do business leaders you speak with think of the call to halt generative AI development for six months?

The letter is making people pause a little and try to understand the context better. But it’s interesting to note that the list of signers is heavily skewed toward academics and policymakers, rather than people at the tech companies developing the foundational generative AI models. A jaded view could be that few tech companies signed because they want to make money from generative AI. But I think there’s a more pragmatic reason. To people in tech, the cat is out of the bag. Regardless of whether you think some of these tools went to market too quickly, the technology is out there, everyone’s going to build it, and there’s no going back at this point.

There’s also a very real national competitiveness issue. Every major national AI power is active, and no country wants to risk being left behind. They fear that if they pause while others refuse, they’ll be disadvantaged. A halt would work only if every country agreed to it, which I believe is unlikely. And even if all did, private actors, whether companies or individuals, can still build models. Granted, it takes a lot of money, but not national-level sums. In fact, one research group recently used readily available technology and data to create a surprisingly advanced generative AI model for a few hundred dollars. This shows that even if companies have a real competitive advantage now, it can be fleeting. So everyone is strongly motivated to keep racing ahead.

How about companies that use generative AI models? Should they pause?

I don’t think the answer is to stop all use of generative AI. It’s an exciting and valuable technology that can accelerate innovation and revolutionize work. And there are use cases that can be commercialized today with the technology that is already available.

Another major reason is that real-world experimentation is important. We don’t fully understand how we can use generative AI and all the hidden capabilities these models possess. How can you regulate generative AI and mitigate risk if you don’t know how these tools can be used and what they can do?

In just the past few months, we’ve seen many examples of risks from generative AI. Some are as simple as an employee pasting meeting notes that contain sensitive proprietary data into a generative AI service to create a summary. One news service that used generative AI to write articles discovered that the articles contained factual errors and plagiarized text. None of this was malicious; people just weren’t thinking about the risks or limitations of these tools.
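
As a simple illustration of how companies can reduce the first kind of risk, here is a minimal sketch of a pre-submission check that flags potentially sensitive content before an employee pastes it into an external generative AI service. The patterns below are hypothetical placeholders; a real deployment would rely on dedicated data-loss-prevention tooling:

```python
import re

# Illustrative patterns for content a company might deem sensitive.
# These are hypothetical examples, not an exhaustive or recommended set.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.\w{2,}\b"),
    "confidentiality marker": re.compile(r"confidential|internal only|proprietary", re.IGNORECASE),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sensitivity_warnings(text: str) -> list[str]:
    """Return warnings for text that looks too sensitive to send externally."""
    return [
        f"possible {label} detected; redact before submitting"
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

meeting_notes = "INTERNAL ONLY: Q3 roadmap draft. Questions to jane.doe@example.com."
for warning in sensitivity_warnings(meeting_notes):
    print(warning)
```

A check like this won’t catch everything, but it turns an invisible risk into a visible prompt for the employee to think before sharing.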

I’ll give you a more extreme example of unforeseen risk. There is a case where somebody prompted ChatGPT to create a virtual Linux server, complete with a connection to the internet and the ability to write and run source code. Imagine the cybersecurity implications. If this ran on your network, you might have no idea whether the virtual server could let malicious actors extract data or cause other harm. Who in the world would have thought generative AI tools had that capability? If people don’t experiment with these tools, we won’t discover these kinds of risks.

The bottom line is that companies should absolutely be experimenting with this incredibly powerful technology; they just need to do so in a thoughtful way and be aware of the potential risks.

Will the letter influence policymakers?

The letter has definitely drawn their attention. Italy placed a national moratorium on access to ChatGPT on the grounds that it violates the EU’s General Data Protection Regulation and doesn’t adequately limit access by minors. Several other European countries have started investigating, as has Canada, and I wouldn’t be surprised if others follow suit. Chinese regulators have proposed, among other things, a requirement that companies wishing to provide generative AI services to the public first obtain a “security assessment.” In the US, the Biden Administration announced it is examining the risks of generative AI. So I expect we’ll see a rather uncertain regulatory environment over the coming months. That makes it particularly important for businesses to invest in responsible AI practices alongside their investments in AI and generative AI.

On a positive note, it seems the letter has stimulated a lot of discussion of generative AI’s risks. That’s good, right?

Yes. Launching an open, balanced public dialogue on generative AI is quite important. There are real risks that need to be addressed as we commercialize this technology and harness its huge potential. An informed discussion will provide more confidence in the use cases we pursue while identifying those we should delay until we know how to mitigate the risks.

We need a dialogue between the private sector and policymakers on generative AI. By engaging in this discussion, business leaders can help policymakers understand how rules can be implemented in ways that allow for both international and commercial competitiveness while establishing the appropriate safeguards.

What should companies be doing to ensure they use generative AI safely and responsibly?

We have a framework for responsible AI that encompasses strategy, processes, governance, and tools. But with generative AI’s arrival, the near-term emphasis shifts toward culture and vendor management. Culture is now more important than ever. Previously, you could focus just on your AI developers. But generative AI has democratized AI. Anyone can be a developer. So you need to instill AI risk awareness and responsible AI practices among everyone in the organization. In addition, the cost of developing generative AI models means most companies will access them through outside vendors. Ensuring responsible AI considerations are integrated into vendor management is vital. Both of these take time, of course.

In the meantime, we urge companies to clearly lay out the generative AI uses they are and aren’t comfortable with. Then convey these guardrails to all employees. Finally, set up a process through which people can get their questions answered and receive guidance on how to pursue use cases safely.
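
One lightweight way to operationalize such guardrails is an internal check that classifies each proposed use case as approved, prohibited, or needing review. The policy entries and names below are purely hypothetical, sketched to show the shape of the process rather than any specific company’s rules:

```python
from dataclasses import dataclass

# Hypothetical guardrail policy. Unlisted use cases default to review,
# which routes the question to the responsible AI team.
POLICY = {
    "marketing-copy-draft": "approved",
    "code-autocomplete": "approved",
    "customer-data-summarization": "restricted",   # requires privacy review
    "legal-contract-generation": "prohibited",
}

@dataclass
class Decision:
    allowed: bool
    reason: str

def evaluate_use_case(use_case: str) -> Decision:
    """Check a proposed generative AI use case against company guardrails."""
    status = POLICY.get(use_case, "restricted")
    if status == "approved":
        return Decision(True, "approved use case")
    if status == "prohibited":
        return Decision(False, "prohibited by policy")
    return Decision(False, "needs review by the responsible AI team")

print(evaluate_use_case("marketing-copy-draft"))
print(evaluate_use_case("employee-performance-review"))  # unknown -> review
```

The specifics will differ by company; what matters is that employees have a clear, fast way to learn whether a given use is in bounds.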

The overarching goal is to strike the right balance between AI experimentation and risk, so organizations can gain the full power of these technologies while avoiding the potential hazards.