Episode 487: Gary Marcus


Challenging AI’s Capabilities

In the last five years, artificial intelligence has exploded, but there are still major gaps in our understanding of how it works, what it is and is not capable of, and what a realistic future for AI looks like.

Gary Marcus is an emeritus professor of psychology and neural science at NYU and an expert in AI. His books, including *Taming Silicon Valley: How We Can Ensure That AI Works for Us* and *Rebooting AI: Building Artificial Intelligence We Can Trust*, explore the limitations and challenges of contemporary AI.

Gary and Greg discuss misconceptions about AI’s current capabilities, including the “gullibility gap” in which people overestimate AI’s abilities; the societal impacts of AI, such as misinformation and discrimination; and why AI might need regulatory oversight akin to the FDA’s.

*unSILOed Podcast is produced by University FM.*

Episode Quotes:

Is Gary pessimistic about AI's future?

30:28: [With AI] I think the last five years have been a kind of digression, a detour from the work that we actually need to do. But I think we will get there. People are already realizing that the economics are not there, the reliability is not there. At some point, there will be an appetite to do something different. It's very difficult right now to do anything different because so many resources go into this one approach that makes it hard to start a startup to do anything else. Expectations are too high because people want magical AI that can answer any question, and we don't actually know how to do that with reliability right now. There are all kinds of sociological problems, but they will be solved. Not only that, but I'm somebody who wants AI to succeed.

Why AI hallucinations can't be fixed as long as the system is running

21:02: Any given hallucination is created by the same mechanism as any given truth that comes out of these systems. So, it's all built by the same thing. With your less-than, greater-than bug, you can work on it selectively in a modular system; you fix it. But the only way you can kill hallucinations is to not run the system. As long as you run the system, you're going to get it sometimes because that's how it works.

Should we help people cultivate their uniquely human common sense?

43:01: In general, critical thinking skills are always useful. It's not just common sense; a lot of it's scientific method and reasoning. I think the most important thing that people learn in psychology grad school is whenever you've done an experiment and you think your hypothesis works, someone clever can come up with another hypothesis and point out a control group that you haven't done. That's a really valuable lesson. That breaks some of the confirmation bias and really raises one's level of sophistication. That's beyond common sense. It's part of scientific reasoning; those things are incredibly useful. I think they'll still be useful in 20 years.

