Podcast: Arxiv Papers
Episode: [QA] Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models