Episode 523: Ethan Mollick

Listen to Episode on:

Watch the Unabridged Interview:


Order Books


AI as a Colleague, Not a Replacement

It’s official: AI has arrived and, from here on out, will be a part of our world. So how do we begin to learn how to coexist with our new artificial coworkers? 

Ethan Mollick is an associate professor at the University of Pennsylvania’s Wharton School and the author of Co-Intelligence: Living and Working with AI. The book serves as a guide for readers navigating this new landscape and explores how we might work alongside AI.

He and Greg discuss the benefits of anthropomorphizing AI, the real impact the technology could have on employment, and how we can learn to co-work and co-learn with AI.

*unSILOed Podcast is produced by University FM.*

Episode Quotes:

The results of an experiment measuring the impact of generative AI

07:35 We went to the Boston Consulting Group, one of the elite consulting companies, and we gave them 18 realistic business tasks we created with them, and these were judged to be very realistic. They were used to do actual evaluations of people in interviews and so on. And we got about 8 percent of the global workforce of BCG, which is a significant investment. And we had them do these tasks first on their own without AI, and then we had them do a second set of tasks either with or without AI. So, random selection to those two. The people who got access to AI — and by the way, this is just plain vanilla GPT-4 as of last April. No special fine-tuning, no extra details, no special interface, no RAG, nothing else — had a 40 percent improvement in the quality of their outputs on every measure that we had. We got work done about 25 percent faster, about 12.5 percent more work done in the same time period. Pretty big results in a pretty small period of time.

Is AI taking over our jobs? 

20:30 The ultimate question is: How good does AI get, and how long does it take to get that good? And I think if we knew the answer to that question, which we don't, that would teach us a lot about what jobs to think about and worry about.

Will there be a new data war in which LLM and generative AI providers chase proprietary data?

11:17 I don't know whether this becomes like a data fight in that way, because the open internet has tons of data on it, and people don't seem to be paying for permission to train on it. I think we'll see more specialized training data potentially in the future, but things like conversations, YouTube videos, and podcasts are also useful data sources. So the whole idea of LLMs is that they use unsupervised learning. You throw all this data at them; they figure out the patterns.

Could public data be polluted by junk and bad actors?

16:39 Data quality is obviously going to be an issue for these systems. There are lots of ways of deceiving them, of hacking them, of working like a bad actor. I don't necessarily think it's going to be by poisoning the datasets themselves because the datasets are the Internet, Project Gutenberg, and Wikipedia. They're pretty resistant to that kind of mass poisoning, but I think data quality is an issue we should be concerned about.

Show Links:

Recommended Resources:

Guest Profile:

His Work:
