Confessions of an AI Optimist: An Interview with MIT’s Andrew McAfee

By Massimo Russo


Andrew McAfee and coauthor Erik Brynjolfsson made names for themselves by popularizing and animating technology concepts for the professional class in the 2014 bestseller The Second Machine Age and now in Machine, Platform, Crowd: Harnessing Our Digital Future. This latest book is, according to The Economist, “an astute romp through important digital trends.”

Andrew McAfee

At a Glance

Year born: 1967

Education

1999, DBA in technology and operations management, Harvard Business School

1990, MS in mechanical engineering and management, Massachusetts Institute of Technology

1989, BS in mechanical engineering and French, Massachusetts Institute of Technology

Career Highlights

2009–present, cofounder and codirector of the MIT Initiative on the Digital Economy, Massachusetts Institute of Technology

2009–2010, Fellow, Berkman Klein Center for Internet & Society at Harvard University

1999–2009, associate professor, Harvard Business School

In this interview with BCG, McAfee focuses on “machine,” the rise of artificial intelligence. McAfee is a big booster of most things digital, but he’s also a realist. He cautions, for example, that an AI engine is only as good as the data fed into it. Machines are still a long way from mastering many human tasks, and the biggest impediment to machine learning and other AI tools may be the imagination of business leaders. But he’s not worried about tech giants cornering the AI market, and he’s relatively sanguine about an automated economy in which many forms of work have disappeared.

Excerpts of the conversation between Massimo Russo, a BCG senior partner and managing director, and McAfee follow.

Andy, thank you for taking the time today to talk about a book that you cowrote with Erik Brynjolfsson, Machine, Platform, Crowd. Many discussions about artificial intelligence focus on the input and then the output of data through the training of the algorithms. How are companies going to avoid the garbage-in, garbage-out risk in artificial intelligence?

We’ve been dealing with garbage-in, garbage-out as long as we’ve been envisioning calculating machines. That issue does not go away in the era of artificial intelligence. It becomes more profound in the era of artificial intelligence because the approaches in AI that are succeeding today are not about really clever programmers codifying knowledge and putting it into a system; they’re about building systems that can learn on their own. And the way they learn is by seeing lots and lots of examples.

If the data that they’re learning from is bad, inappropriate, skewed, or not representative—has any of these problems that we know exist in data—you are going to get a poorly configured system. It’s just that simple.

Do you think that AI is destined to be owned by a few dozen companies globally?

I don’t have any trouble foreseeing a world where there is a set of companies that provide publicly accessible, cloud-based, API-based AI engines to other companies. That’s really different than saying that that same handful of companies will own all of the applications of AI. I don’t believe for a second that Google, Facebook, Amazon, Microsoft, and Apple are going to control 40% of the US economy because they’ve got the AI talent cornered. I don’t believe that for a second.

In your book, you also talk about social skills. You use the example of an algorithm identifying a medical diagnosis, but the doctor or nurse then needing the social skills to deliver the bad news. How do you see that progressing, and when will robots or computers have more social skills?

I’m convinced that, in most disciplines within medicine, if the world’s best diagnostician is not a piece of technology today, it will be very quickly. The fact that human beings are front and center and essential for all kinds of medical diagnosis now, I just don’t think that’s going to be the case in the future.

However, I think health care and medicine will still be very human-centric activities for a long time. Never say never, but I haven’t seen a technology that can establish an actual compassionate, empathetic social bond with human beings and bring them along like a coach, therapist, good manager, family member, friend, or loved one. These are all incredibly deep human social bonds. We’re hardwired by evolution to seek those out and respond to them. Cracking that problem, I believe, is a long, long way off.

In your new book, you talk about the age of intelligent electrification about a century ago. What does that mean, truly, and how does it apply to our economy today?

One thing we know from studying business history is that the companies that are on top at the beginning of a big technology transition are usually not the companies on top at the end of that technology transition. At the start of the era of electrification, factories had great big steam turbines in their basements, and they powered every machine in the factory.

Unintelligent electrification was saying: All right. Now we’ve got a big old electric engine. We’re going to swap out that steam engine, put in one big electric engine in the basement. It’s more efficient. It’ll save on costs, and then we’ve got a more efficient factory.

The intelligent way to do it, which not many people got at first, was to say: Wait a minute. This lets us actually rethink what a factory is. Instead of just having one big machine in the basement, powered by steam, electricity lets us envision all kinds of really radical alternatives like putting a motor on every machine in the factory, which was crazy talk at the time.

The winners are the ones who realized that even if it didn’t make sense right at that particular time, someday every machine in the factory was going to have its own motor on it. And they headed toward that vision, as opposed to remaining mentally stuck in the past, stuck in the era of steam.

So what Erik and I wrote in the first chapter of Machine, Platform, Crowd is that it’s the mindset of the people running companies that is the biggest impediment to realizing the potential of new technologies. It’s not the expense. It’s not the skills that you need to have. It’s being able and willing to reenvision a business model in the face of really powerful new technologies.

You are an IT and AI optimist. What if the pessimists are right? What if we are headed toward an era of massive job disruption, displacement, and unemployment, as these new technologies proliferate and government spending and other measures to improve employment can’t keep up?

What you’re describing is an economy that is incredibly wealthy, without needing human work in the way that we came to think about it during the industrial era. OK, that’s a challenge. If we can’t handle the amazing prosperity that will be brought by an automated economy, if we can’t manage that crazy prosperity, shame on us. I don’t think it’s a trivial problem, but if we can’t address that problem, we've got no one—we literally have no one—to blame but ourselves.