Episode 493: Alex ‘Sandy’ Pentland
What Human-Centered AI Looks Like
How is artificial intelligence reshaping social dynamics, knowledge sharing, and the workplace in the digital age?
Alex “Sandy” Pentland is a fellow at Stanford University’s Human-Centered AI Institute and helped create the MIT Media Lab. He’s the author of numerous books, including *Honest Signals: How They Shape Our World*, *Social Physics: How Social Networks Can Make Us Smarter*, and, most recently, *The Digitalist Papers: Artificial Intelligence and Democracy in America*.
Sandy and Greg discuss the evolution of social physics and computational social science, the importance of knowledge sharing in the age of AI, and AI’s implications for connectivity and curiosity.
*unSILOed Podcast is produced by University FM.*
Episode Quotes:
What impedes the transmission of good ideas from one part of an organization to another?
13:34: Organizations are always, today, siloed, and they're siloed for a variety of reasons. One is that we're not so smart. We can only understand a certain amount really well, and when it gets more complex than that, we go create another silo. We break it apart and create another silo, and people don't like to share things out of their silo because that means they're not valuable anymore. You like to control the stuff and believe that your expertise is valuable. And if someone comes and says, "Well, give me all your data," that's like, "Hey, we're going to fire you in a month," right? People don't like to do that. And the data that they have is not contextualized, right? It means something to them, but somebody else looking at it will say, "Well, I don't know, it's a bunch of ones and zeros. What does that mean?" Right? You need to know what it's measuring and why. The intention is usually not in the data; it's in the context around it.
Does the rise of new AI tools stimulate curiosity or potentially dampen it?
59:34: The phenomenon of people not meeting other humans because they're all on social media is real and disturbing, and AI can make it worse. That's not quite the same thing as being curious. They are curious, just about things that are not actually in their physical environment or likely to affect them in that sort of immediate way.
Why smaller teams move faster and learn better
44:52: The smaller teams are necessary to move fast. If you have a big team, it's really hard to get everybody educated and on the same page. But with software and the sort of things that they do, you can do it with small teams. And so, you can have small teams, which means you can get consensus about what to do pretty fast. And, of course, the connections to everybody else mean you can look for opportunities and learn from other people much better than other people can. So, it's a win-win thing. You get people that are able to put things out there quicker; they're able to learn from all the other people that have done it better. And it works pretty well.
Show Links:
Guest Profile:
Faculty Profile at MIT
Professional Profile on LinkedIn
Human-Centered AI Institute at Stanford University