Jürgen Schmidhuber, the scientific director of SUPSI’s Dalle Molle Institute for Artificial Intelligence (IDSIA), in Switzerland, has been a leading pioneer of artificial intelligence (AI) for three decades. His work with colleagues on recurrent neural networks, including long short-term memory (LSTM), and on other mathematical models and algorithms for solving AI problems has revolutionized several fields, including machine learning, handwriting and speech recognition, machine translation, and image captioning. These methods are now being applied in a wide variety of smart devices, including billions of smartphones, as well as in robotics. Teams led by Schmidhuber have been publishing research on AI applications for fields as diverse as art, medicine, and music—while he continues the quest he began in the 1980s to develop general problem solvers. BCG senior partner and managing director Philipp Gerbert recently sat down with Schmidhuber to discuss his views on the present and future of AI.
Jürgen, it’s a privilege to have you here as one of the pioneers of artificial intelligence and, more specifically, deep learning—its hottest field right now. Before we go into all of those fields, we would like to understand the person Jürgen Schmidhuber better. Perhaps you can tell us a few things that you’re particularly proud of in your career.
One of the things I’m proud of: I think I understand what it means to be curious and how to implement curiosity, which I think is essential to building agents that learn from experience through their own self-generated experiments—agents that are motivated to invent, in a directed way, action sequences or experiments that lead to data that tell them something about how the world works that they didn’t know yet. If you Google “artificial curiosity,” you will end up on our pages and learn all about that.
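The core of that idea—rewarding an agent for the *improvement* of its own world model, so that it seeks out experiments it cannot yet predict but can learn to predict—can be sketched in a few lines of Python. This is a toy illustration under my own assumptions, not Schmidhuber’s actual algorithm; the environment, learning rate, and action-selection proxy are all made up for the example.

```python
# Toy sketch of "artificial curiosity" (illustrative only, not Schmidhuber's
# actual method): the intrinsic reward for an action is how much the agent's
# world model improved after observing that action's outcome.

# Hidden world: each action deterministically yields an outcome the agent
# must learn to predict. Action 0's outcome (0.0) is already predicted.
world = {0: 0.0, 1: 0.7, 2: 0.3, 3: 0.9}

model = {a: 0.0 for a in world}   # the agent's current predictions
counts = {a: 0 for a in world}    # how often each action was tried

def intrinsic_reward(action):
    """Curiosity reward = reduction in prediction error (learning progress)."""
    outcome = world[action]
    error_before = abs(model[action] - outcome)
    model[action] += 0.5 * (outcome - model[action])  # learn a little
    error_after = abs(model[action] - outcome)
    return error_before - error_after

# Greedy curious agent: repeatedly pick the action promising the most
# learning progress (here, crudely proxied by current prediction error).
for step in range(20):
    action = max(world, key=lambda a: abs(model[a] - world[a]))
    intrinsic_reward(action)
    counts[action] += 1

print(counts[0])  # the already-predictable action is never chosen
```

The point of the sketch: the fully predictable action yields no learning progress, so curiosity steers the agent toward the parts of the world it doesn’t understand yet.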
Another question that often reveals what people stand for: What is something you believe in that you think 95% of humanity would disagree with you about?
Since the ’70s and ’80s, I have believed that intelligence is a simple thing and that, in the end, all of the essence of intelligence can be condensed into a short code—ten lines of pseudocode or something—which includes everything that you need to build a continually self-improving system. My first publication on that with concrete algorithms dates back to 1987, to my diploma thesis. In the past 30 years, I have kept working on this grand problem of AI, and I think we are rather close to the final solution.
You say intelligence might be a simpler concept than most people think. But many struggle with what artificial intelligence really means. Could you explain it to us?
All of natural intelligence and artificial intelligence is about problem solving. In AI, we are trying to build general problem solvers that can solve not only one little problem here and one little problem there but many, many different problems that are practically relevant in this initially unknown environment we are living in. We want to build machines and robots and agents that learn to deal with basically arbitrary, initially unknown environments and then learn to solve pretty much arbitrary problems within these environments.
There’s a lot of hype right now. What do you feel is really exaggerated, and what might still be underappreciated about AI in the current environment?
I don’t think that there are too many exaggerations right now. At the moment, we are still experiencing this trend that basically says that every five years, computing gets ten times cheaper. That trend has held since 1941, when Konrad Zuse built the first working program-controlled computer. At the moment, we still have rather small neural networks compared with the human cortex.
Your cortex has about 100,000 times more connections than one of these little artificial networks. But a factor of 100,000 corresponds to just 25 years of that trend, which means that by 2041 we should be able to get, for the same price, large LSTM networks that have as many connections as a human cortex—and these will be much faster than the wet connections I have in here, because they will be electronic connections. So even if there are no further algorithmic breakthroughs, we will still see lots of superhuman performance results just by scaling the existing things up through the faster hardware.
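The arithmetic behind that estimate can be checked in a few lines of Python. The numbers are the interview’s (a 100,000x connection gap, computing getting 10x cheaper every five years); the framing of the calculation is mine.

```python
# Cost-scaling arithmetic from the interview: if computing gets 10x cheaper
# every 5 years, how long until a 100,000x gap closes at constant price?

gap = 100_000      # cortex connections vs. a small artificial network
periods = 0
remaining = gap
while remaining > 1:
    remaining /= 10  # one 5-year period: computing is 10x cheaper
    periods += 1

years = periods * 5
print(years)  # 25 -> counting from the mid-2010s, that lands around 2041
```

In other words, 100,000 = 10^5, so five doublings of the five-year period—25 years—close the gap.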
Right now, speech and text recognition is being solved, so a lot of human knowledge is becoming accessible to machines. At the same time, vision allows computers to navigate the real world. That obviously leads to lots of fears about the ability of humans to adapt to such fast change, since timescales have shrunk. Any view that you might have on this subject?
Predictions of job losses through robotics are old. Many decades ago, people predicted that robots were going to take over all kinds of jobs. But what actually happened is that the countries with the most robots per capita all have low unemployment rates. Countries such as Japan, Germany, Korea, and Switzerland have many robots per capita by international standards but rather low unemployment rates. Back in the ’80s I already said that it’s easy to predict which jobs are going to disappear, but it’s hard to predict which new jobs will be created.
If you look a bit further in the future and say there might be real superintelligence, is it potentially dangerous? And can we or should we slow our efforts down to develop this?
In the long run, AIs are going to be much smarter than humans. Should we be afraid of them? I don’t think so, because most beings are mostly interested in those who are similar to themselves. Look at yourself—you are mostly interested in other humans like yourself because with those you can either collaborate to achieve goals or you can compete. You share goals, and that’s the reason why you are interested in these potential competitors or collaborators. That’s the reason why most politicians are interested in other politicians, and most reporters are interested in other reporters, and most frogs are interested in other frogs. And those superintelligent AIs of the future will be mostly interested in other superintelligent AIs of the future. And not so much in frogs and humans and ants, just like you are not so interested in all of these ants out there. Just because you are smarter than the ants, you are not going to kill them. No. The weight of all of the ants on this planet is still comparable to the weight of all humans, and there are still many, many more ants out there than humans.
Jürgen, thank you very much for this very interesting interview. The good news is we definitely continue to live in interesting times, and I would enjoy continuing the discussion. Thank you very much.
Jürgen Schmidhuber is the scientific director of SUPSI’s Dalle Molle Institute for Artificial Intelligence (IDSIA), in Switzerland.