Podcast: Arxiv Papers
Episode: [QA] Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation