Description: Kyle and Linhda discuss attention and the transformer, an encoder/decoder architecture that extends the basic idea of static vector embeddings like word2vec into contextual representations.
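The episode's core idea, that attention produces context-dependent representations rather than a fixed per-word vector, can be sketched with scaled dot-product self-attention. This is a minimal illustration, not the episode's own code; the function name and the random toy embeddings are assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a context-dependent mixture of the value
    vectors, weighted by softmaxed query-key similarity scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V

# Three toy tokens with 4-dim embeddings. Unlike a static word2vec
# lookup, each output vector depends on all other tokens in the sequence.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)             # self-attention
print(out.shape)  # (3, 4)
```

Because the attention weights in each row sum to one, every output vector is a convex combination of the input value vectors, which is what makes the representation "contextual."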