Given an annotated utterance (x, y), we encode x with an encoder (an LSTM or Transformer) and cache similar latent representations generated during training in an external memory. Storage and retrieval are differentiable, implemented as attention over memory entries, and extend the encoder's capacity.
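A minimal sketch of such a differentiable external memory, assuming PyTorch. The class name `ExternalMemory`, the `read`/`write` signatures, and all sizes are illustrative assumptions, not this repository's actual API; the point is that both retrieval and storage are soft attention operations, so gradients flow through them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalMemory(nn.Module):
    """Hypothetical attention-based memory; names/sizes are illustrative."""
    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        # Memory slots holding cached latent representations (num_slots x dim).
        self.register_buffer("memory", torch.zeros(num_slots, dim))

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Retrieval: softmax attention over slots, so the read is
        # differentiable w.r.t. both the query and the memory contents.
        scores = query @ self.memory.t()            # (batch, num_slots)
        weights = F.softmax(scores, dim=-1)         # attention distribution
        return weights @ self.memory                # (batch, dim) weighted sum

    def write(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Storage: each slot is blended toward the new value in proportion
        # to its attention weight, keeping the update differentiable.
        scores = key @ self.memory.t()              # (batch, num_slots)
        weights = F.softmax(scores, dim=-1).unsqueeze(-1)  # (batch, slots, 1)
        update = weights * value.unsqueeze(1)       # broadcast value per slot
        erase = (1 - weights).mean(0)               # (slots, 1) retain factor
        self.memory = erase * self.memory + update.mean(0)

# Usage sketch: encode an utterance, read a memory-augmented context,
# then write the new latent back into memory.
encoder = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
memory = ExternalMemory(num_slots=32, dim=128)
x = torch.randn(4, 10, 64)                 # batch of embedded utterances
_, (h, _) = encoder(x)
latent = h[-1]                             # (4, 128) final hidden states
context = memory.read(latent)              # differentiable retrieval
memory.write(latent, latent)               # differentiable storage
augmented = torch.cat([latent, context], dim=-1)  # extended representation
```

Because the read is a convex combination of memory entries, the retrieved context acts as extra capacity for the encoder without hard, non-differentiable lookups.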