Episode 49: Pedro Domingos

The Quest for the Master Algorithm and the Ultimate Learning Machine

Machines have long been inseparable from our lives. The algorithms behind Google, Netflix, Amazon, Xbox, and Tinder quietly shape your choices, digesting the data you willingly share with them. Artificial intelligence has also left its mark on healthcare, from vaccine development to the search for a cure for cancer. Machine learning is transforming every aspect of our lives, but what is its ultimate foundation?

Author and AI expert Pedro Domingos describes machine learning's five tribes in his book The Master Algorithm. In this episode, Pedro explains how a single, ultimate algorithm could derive knowledge about the past, the present, and the future from data. Listen as he and Greg discuss why such an algorithm should exist at all, drawing on arguments from neuroscience, evolution, physics, statistics, and computer science.

Episode Quotes:

Are computer scientists the new age philosophers?

“I don't think computer scientists have supplanted the psychologists and philosophers, and so on. I do think, however, that computer science, and machine learning in particular, changes the way we do everything in a very profound way. If you look at science, more than anything else, its progress is determined by the tools that are available. Galileo was Galileo because he had the telescope. No telescope, no Galileo, and the examples go on. And the thing is that computers are the most extraordinary tool, for science among other things, but for science in particular, that we have ever created. They magnify our ability to do things in a way that was, I think, hard to imagine even 50 years ago.”

Is machine learning just a bunch of different tools, all trying different approaches to solve the same problems?

“At the end of the day, the best algorithm is almost never any existing one. What a machine learning algorithm does isn't magic: it's incorporating knowledge, and knowledge will be different in different domains. There are broad classes of domains where the same knowledge is relevant, and indeed different paradigms tend to do well on different problems. So deep learning does very well at perceptual problems because, again, these things were inspired by the neurology of the visual system, et cetera.”

Does the evolutionary model align with what's happening in AI, and what obstacles stand in the way of this line of thinking?

“There's more to be discovered about how evolution learns, and by the way, more to be discovered both for the purposes of AI and for the purposes of understanding evolution. I actually think that if someone really had a supercomputer that could simulate evolution over a billion years with the model of evolution that we have today, it would fail. It wouldn't get there. There are some mechanisms that also evolved. But again, there's this interesting series of stages, right? Even within evolution, there are levels of how evolution works. And I think there's a lot of that that we still don't understand. But we will at some point, and I think that will be beneficial both for biology and for AI.”

Time Code Guide:

00:03:06: How AI is revolutionizing the way we think

00:04:31: Tycho Brahe stage

00:06:44: Is the unified field theory of machine learning the same as the general approach to learning?

00:09:11: Computers represent the fourth stage in the learning and transmission of knowledge; is this a discontinuity from the first three stages, which all seem to be natural phenomena?

00:10:21: The emergence of AI, life, evolution of the nervous system, and cultures

00:12:01: The speed at which computers communicate and facilitate the transfer of knowledge

00:13:10: Possibilities and ways you can play with the computer's processing capacity

00:14:29: How did we leap from the AI winter to the AI boom that we have today?

00:17:25: Learning machines and self-driving cars

00:18:48: AI and Linguistics

00:19:33: Does each AI ‘tribe’ pursue its own approach without acknowledging the limitations it may run into later?

00:24:54: One paradigm in AI and Master Algorithm

00:27:13: The Rise of the Connectionists

00:28:00: What’s next for AI?

00:33:37: Is it possible to automate the trial-and-error process and have an algorithm that learns how to learn?

00:37:49: Is the evolutionary model doing anything for AI, and what are the obstacles to this line of thinking?

00:41:53: How do we know whether a school of ideas is dead or simply dormant?

00:43:01: How do you advance interdisciplinary learning across the different schools of thought in AI?

00:44:24: Thoughts on Geoff Hinton's work and backpropagation

00:46:22: Is there a guidebook to creating a unified theory?

00:48:11: AGI, AI and humans

00:51:01: Automating the Scientific Process

00:52:26: Thoughts on the Future of AI
