Tim O’Reilly on Managing in the Age of Algorithms

An Interview with the Founder, Chairman, and CEO of O’Reilly Media

By Massimo Portincaso


Tim O’Reilly, the noted technology publisher, entrepreneur, and activist, has been both a cheerleader for and critic of the digital era. He helped popularize open-source software and Web 2.0 but has been skeptical of overly broad software patents and other overreaches by tech companies.

The title of his recent book, WTF: What’s the Future and Why It’s Up to Us, reflects this dueling view, with “WTF” serving as a declaration of both wonder and chagrin. While the overall tone of the book is positive, O’Reilly recognizes that algorithms can go wrong and technological advances can contribute to wealth inequality and job loss. Ultimately, WTF is a call for business leaders to act responsibly in the digital era by embracing a purpose that is greater than the bottom line.

O’Reilly recently sat down with Massimo Portincaso, a partner in the Berlin office of The Boston Consulting Group and the leader of the firm’s marketing and communication activities. Excerpts follow.


WTF: What’s the Future and Why It’s Up to Us. Cool title for a book. In it, you write about seeing the future in the present and that there is not just one future. Can you tell us a little bit more about this?

We have to think things that are unthinkable. We have to actually understand that perhaps the current structure of our entire economy—the current structure of our companies—is being challenged. We have to think new things.

In your book, you write about Lyft and Uber from a different angle, which I liked a lot. Why are these two companies particularly important for you?

Lyft and Uber teach us a story about the Internet and distributed sensors coming into the real world. All of these things that we thought were just digital are now affecting physical businesses. They have so many lessons to teach us. The reason why Uber and Lyft can deliver much higher availability is this swarming marketplace model of drivers who come on when there’s more demand and go away when there’s less demand. You’ve also got this amazing algorithmic dispatch system that’s connecting drivers and passengers. You’ve got this marketplace. You’re also convincing people to change their behavior. You look at all of those things, and you start to see that these are lessons that apply to every company in today’s economy.
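To make the dispatch idea concrete, here is a toy sketch, not from the interview and far simpler than anything Uber or Lyft actually run: match each waiting rider to the nearest available driver. The driver and rider data are invented for illustration.

```python
# Toy sketch of algorithmic dispatch: match each waiting rider to the
# nearest available driver. Real systems weigh many more signals
# (ETAs, traffic, pricing, driver supply), but matching is the core step.
import math

drivers = {"d1": (0.0, 0.0), "d2": (2.0, 1.0), "d3": (5.0, 5.0)}  # id -> (x, y)
riders = {"r1": (1.0, 1.0), "r2": (4.5, 5.5)}                     # id -> (x, y)


def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def dispatch(drivers, riders):
    """Greedy nearest-driver matching; each driver serves at most one rider."""
    available = dict(drivers)
    assignments = {}
    for rider_id, rider_pos in riders.items():
        if not available:
            break
        best = min(available, key=lambda d: distance(available[d], rider_pos))
        assignments[rider_id] = best
        del available[best]
    return assignments


print(dispatch(drivers, riders))  # -> {'r1': 'd2', 'r2': 'd3'}
```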

You write about thinking in promises. You have the example of the press release that Amazon issues before it even starts a project. Can you tell us more about that, and why are promises so important?

The idea of thinking in promises is you don’t tell someone how you’re going to do something. You simply promise them what you’re going to deliver. That fundamental design principle, for example, of Internet software—the fundamental design of Unix, the operating system that in some ways gave birth to the Internet—was simply: You can rely on these inputs. You can rely on these outputs. You don’t need to know what’s inside because it’s basically about cooperating components. That cooperating-components theory really now has to extend to the organization. Companies like Amazon teach us how to do that.
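To illustrate what a promise-style contract between cooperating components can look like, here is a minimal sketch; the PaymentService contract and the two gateways are hypothetical names, not anything from the interview. Callers rely only on the declared inputs and outputs, never on what is inside each component.

```python
# Minimal sketch of "thinking in promises": a component promises its
# inputs and outputs, not how the work gets done inside.
from typing import Protocol


class PaymentService(Protocol):
    """The promise: give me an amount in cents, get back a confirmation ID."""

    def charge(self, amount_cents: int) -> str: ...


class StripeGateway:
    """One implementation; callers never need to know the details."""

    def charge(self, amount_cents: int) -> str:
        # ...call out to the external provider here...
        return f"stripe-confirmation-{amount_cents}"


class InMemoryGateway:
    """A drop-in replacement that keeps the same promise (useful in tests)."""

    def charge(self, amount_cents: int) -> str:
        return f"test-confirmation-{amount_cents}"


def checkout(service: PaymentService, amount_cents: int) -> str:
    # The caller depends only on the promised inputs and outputs.
    return service.charge(amount_cents)


if __name__ == "__main__":
    print(checkout(StripeGateway(), 1999))
    print(checkout(InMemoryGateway(), 1999))
```

Because both gateways keep the same promise, either can be swapped in without the rest of the system changing, which is the cooperating-components point O'Reilly is making.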

Why are algorithms so important in this context?

We now have these systems with incredible speed and scale that allow us to manage data we could never manage with humans alone. The actual workers at a company like Google or Facebook are not all those programmers. They're the programs. Who is serving up your product offerings when you visit an Amazon webpage? It's not a person. It's a program.

That’s where algorithms can also go wrong. They’re a little bit like the genies of Arabian mythology. You give them your wish, and they do exactly what you ask. If you don’t think through carefully what to ask for, you get unexpected results. That’s always what happens in those stories. And that’s what Mark Zuckerberg and his team are dealing with right now. They didn’t anticipate that Russia would recognize that Facebook was fertile ground for disinformation and cyberwarfare.

Part of the job of managing an algorithmic system is realizing that there are bad actors. They will always try to game the system.

You always put a lot of emphasis on the fact that the future is up to us. What are the key things we need to do to make sure that the way we shape the future is good for us?

We say we want an economy of opportunity, but are we measuring whether we have it? Well, yes, we are. Raj Chetty’s work at Stanford is showing us that opportunity is going down. But we’re not actually saying, “Well, what does that teach us about what we should do differently?” That comes back to what platforms and technology teach us: you have to keep changing the rules if you’re not achieving your objective.

If you had the chance to give a piece of advice to an average CEO, what would this be? If you could do the same with government, what would that be?

To every CEO I would give two profound pieces of advice. The first is to remember that your goal is to create value—not just for yourself but for your customers, for society, and for your employees. The second lesson is that you win by doing more—not by doing the same thing more cheaply.

For government, I think there are many, many lessons. Probably the first one is to take this experimental approach that the leading companies are always taking—of constantly being in a learning dialogue with the market. We need bold thinking that says, “We can do something that was previously impossible.”