Episode 497: Arvind Narayanan
Spotting The Difference Between AI Innovation and AI Snake Oil
Where is the line between fact and fiction in the capabilities of AI? Which predictions or promises about the future of AI are reasonable, and which are hype manufactured for the benefit of the industry or of companies making outsized claims?
Arvind Narayanan is a professor of computer science at Princeton University, the director of the Center for Information Technology Policy, and an author. His latest book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.
Greg and Arvind discuss common misconceptions about AI, emphasizing how its capabilities are often overestimated and why the distinction between predictive and generative AI matters. Arvind points out the ethical and practical problems of deploying AI in fields like criminal justice and HR. The two also explore the challenges of regulation, the historical context of technological hype, and the role academia can play in shaping AI's future. Arvind reflects on his earlier work on Bitcoin and cryptocurrency and shares insights into the complexities and future of both AI and blockchain.
*unSILOed Podcast is produced by University FM.*
Episode Quotes:
What can the AI community learn from medicine about testing?
28:51: Let's talk about what we can learn from medicine and what maybe we shouldn't take from them. I think that the community [in medicine] internalized a long time ago that the hard part of innovation is not the building, but the testing. And the AI community needs to learn that. Traditionally, in machine learning, the building was the hard part, and everybody would evaluate on the same few sets of benchmarks. And that was okay because they were mostly solving toy problems as they were building up the complexities of these technologies. Now, we're building AI systems that need to do things in the real world. And the building, especially with foundation models, you build once and apply it to a lot of different things. Right? That has gotten a lot easier—not necessarily easier in terms of technical skills, but in terms of the relative amount of investment you need to put into that, as opposed to the testing—because now you have to test foundation models in a legal setting, medical setting, [and] hundreds of other settings. So that, I think, is one big lesson.
Replacing broken systems with AI can escalate the problem
08:36: Just because one system is broken doesn't mean that we should replace it with another broken system instead of trying to do the hard work of thinking about how to fix the system. And fixing it with AI is not even working because, in the hiring scenario, what's happening is that candidates are now turning to AI to apply to hundreds of positions at once. And it's clearly not solving the problem; it's only escalating the arms race. And it might be true that human decision-makers are biased; they're not very accurate. But at least, when you have a human in the loop, you're forced to confront this shittiness of the situation, right? You can't put this moral distance between yourself and what's going on, and I think that's one way in which AI could make it worse because it's got this veneer of objectivity and accuracy.
Foundation models lower costs and could shift AI research back to academia
27:22: The rise of foundation models has meant that they've kind of now become a layer on top of which you can build other things, and that is much, much less expensive than building foundation models themselves. Especially if it's going to be the case that scaling is going to run out, we don't need to look for AI advances by building $1 billion models and $10 billion models; we can take the existing foundation models for granted and build on top of them. Then, I would expect that a lot of research might move back to academia, especially the kind of research that might involve offbeat ideas.