Podcast: The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Episode: The Benefit of Bottlenecks in Evolving Artificial Intelligence with David Ha
Description: Today we’re joined by David Ha, a research scientist at Google. In nature, there are many examples of “bottlenecks”, or constraints, that have shaped our development as a species. Building on this idea, David posits that similar evolutionary bottlenecks could also benefit the training of neural network models. In our conversation with David, we cover a TON of ground, including the aforementioned biological inspiration for his work, before digging deeper into the different types of constraints he’s applied to ML systems. We explore abstract generative models and how advanced training agents inside of generative models ha...