Podcast: Arxiv Papers
Episode: RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression
Description: RocketKV is a training-free KV cache compression strategy that reduces memory demands during decoding, achieving up to 3x speedup and 31% memory reduction with minimal accuracy loss.

Paper: https://arxiv.org/abs/2502.14051
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers