Podcast: Machine Learning Street Talk (MLST)
Episode: Neel Nanda - Mechanistic Interpretability (Sparse Autoencoders)
Description: Neel Nanda, a senior research scientist at Google DeepMind, leads their mechanistic interpretability team. In this extensive interview, he discusses his work trying to understand how neural networks function internally. At just 25 years old, Nanda has quickly become a prominent voice in AI research after completing his pure mathematics degree at Cambridge in 2020.
Nanda argues that machine learning is unique in that we create neural networks capable of impressive tasks (such as complex reasoning and software engineering) without understanding how they work internally. He compares this to having computer programs that can do things no...