awesome-fast-attention

A curated list of efficient attention modules (last update: Wed, 10 Mar 2021 23:52:22 +0000)

Table of Contents

Efficient Attention
Articles/Surveys/Benchmarks

Efficient Attention

Each entry lists the paper (with its citation count), a reference implementation, whether the method supports autoregressive (causal) decoding (✔️), and its main idea.
Generating Wikipedia by Summarizing Long Sequences (282) | memory-compressed-attention | ✔️
compresses keys and values + blocked attention

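For readers who want the gist in code, here is a minimal PyTorch sketch of the compression step (my own simplification, not the memory-compressed-attention reference code): keys and values are shortened with a strided 1-D convolution before ordinary softmax attention; the blocked/causal part of the paper is left out.

```python
import torch
from torch import nn

class MemoryCompressedAttention(nn.Module):
    """Compress K and V along the sequence axis with a strided conv,
    then run ordinary softmax attention against the shorter memory."""
    def __init__(self, dim, compression=3):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3)
        # strided 1-D convs shorten the key/value sequence by `compression`x
        self.compress_k = nn.Conv1d(dim, dim, compression, stride=compression)
        self.compress_v = nn.Conv1d(dim, dim, compression, stride=compression)
        self.scale = dim ** -0.5

    def forward(self, x):                                        # x: (b, n, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        k = self.compress_k(k.transpose(1, 2)).transpose(1, 2)   # (b, n/c, dim)
        v = self.compress_v(v.transpose(1, 2)).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v                                          # (b, n, dim)

x = torch.randn(2, 12, 64)
print(MemoryCompressedAttention(64)(x).shape)  # torch.Size([2, 12, 64])
```
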
CBAM: Convolutional Block Attention Module (999+) | attention-module
combines SE (channel) attention with a per-pixel (local) weight

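A compact sketch of the CBAM idea (an independent re-implementation, not the attention-module repo code): an SE-style channel gate from average- and max-pooled descriptors, followed by a per-pixel spatial gate from a 7×7 convolution.

```python
import torch
from torch import nn

class CBAM(nn.Module):
    """Channel attention (SE-style, avg+max pooled) followed by a
    per-pixel spatial attention map from a 7x7 conv."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                              # x: (b, c, h, w)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))             # pooled channel descriptors
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))      # per-pixel weight

x = torch.randn(2, 32, 8, 8)
print(CBAM(32)(x).shape)  # torch.Size([2, 32, 8, 8])
```
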
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (16) | set_transformer
uses K relay nodes

CCNet: Criss-Cross Attention for Semantic Segmentation (296) | CCNet
each pixel attends to its row and column simultaneously

Efficient Attention: Attention with Linear Complexities (16) | efficient-attention
Softmax(Q) * (Softmax(K^T) * V)

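The factorization in that one-liner can be written out directly; a minimal sketch (assuming, as in the paper, softmax over the feature axis for Q and over the sequence axis for K):

```python
import torch

def efficient_attention(q, k, v):
    """softmax(Q) (softmax(K)^T V): normalise Q over features and K over
    positions, then multiply K^T V first, so the cost is linear in sequence
    length and no n x n attention matrix is ever formed."""
    q = torch.softmax(q, dim=-1)            # (b, n, d): normalise each query
    k = torch.softmax(k, dim=1)             # (b, n, d): normalise keys over positions
    context = k.transpose(1, 2) @ v         # (b, d, d_v) global context
    return q @ context                      # (b, n, d_v)

b, n, d = 2, 1000, 64
q, k, v = (torch.randn(b, n, d) for _ in range(3))
print(efficient_attention(q, k, v).shape)   # torch.Size([2, 1000, 64])
```
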
Star-Transformer (40) | fastNLP
uses a relay (global) node and attends to/from that node

GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond (199) | GCNet
squeeze-and-excitation with attention pooling (instead of a GAP)

Generating Long Sequences with Sparse Transformers (257) | DeepSpeed | ✔️
sparse block-based attention

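A toy illustration of the strided sparse pattern (the mask is built densely here just to show the pattern; the paper and DeepSpeed rely on block-sparse kernels for the actual speed-up):

```python
import torch

def strided_sparse_mask(n, local=4, stride=4):
    """Boolean (n, n) causal mask for the 'strided' pattern: token i sees
    itself, the most recent `local` tokens, and every `stride`-th earlier token."""
    i = torch.arange(n).unsqueeze(1)
    j = torch.arange(n).unsqueeze(0)
    causal = j <= i
    local_band = (i - j) < local
    strided = (i - j) % stride == 0
    return causal & (local_band | strided)

def sparse_attention(q, k, v, mask):
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float('-inf'))
    return torch.softmax(scores, dim=-1) @ v

n, d = 16, 32
q, k, v = (torch.randn(1, n, d) for _ in range(3))
print(sparse_attention(q, k, v, strided_sparse_mask(n)).shape)  # torch.Size([1, 16, 32])
```
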
SCRAM: Spatially Coherent Randomized Attention Maps (1) | - | ✔️
uses PatchMatch to find close keys

Interlaced Sparse Self-Attention for Semantic Segmentation (24) | IN_PAPER | ✔️
combination of a short-range and then a long-range (dilated) attention

Permutohedral Attention Module for Efficient Non-Local Neural Networks (3) | Permutohedral_attention_module
uses a permutohedral lattice approximation algorithm to approximate the attention output

Large Memory Layers with Product Keys (43) | XLM | ✔️
searches for nearest-neighbor keys

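A simplified, single-head product-key lookup (a hypothetical helper, not the XLM code; the paper adds multi-head queries, a query network with batch norm, and learned value embeddings):

```python
import torch

def product_key_lookup(query, keys1, keys2, values, topk=4):
    """Product-key memory: split the query in two halves, score each half
    against a small sub-key set, and combine the two top-k lists so that
    |keys1| * |keys2| memory slots are searched at square-root cost."""
    q1, q2 = query.chunk(2, dim=-1)                 # (b, d/2) each
    s1, i1 = (q1 @ keys1.t()).topk(topk, dim=-1)    # scores over sub-keys
    s2, i2 = (q2 @ keys2.t()).topk(topk, dim=-1)
    # all topk*topk candidate combinations and their summed scores
    scores = s1.unsqueeze(-1) + s2.unsqueeze(-2)                # (b, topk, topk)
    idx = i1.unsqueeze(-1) * keys2.shape[0] + i2.unsqueeze(-2)  # slot indices
    scores, flat = scores.flatten(1), idx.flatten(1)
    best, pos = scores.topk(topk, dim=-1)
    chosen = flat.gather(1, pos)                                # best memory slots
    weights = torch.softmax(best, dim=-1)                       # (b, topk)
    return (weights.unsqueeze(-1) * values[chosen]).sum(1)      # weighted memory read

d, half_keys, v_dim = 64, 32, 64
keys1, keys2 = torch.randn(half_keys, d // 2), torch.randn(half_keys, d // 2)
values = torch.randn(half_keys * half_keys, v_dim)              # 1024 memory slots
print(product_key_lookup(torch.randn(2, d), keys1, keys2, values).shape)  # (2, 64)
```
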
Expectation-Maximization Attention Networks for Semantic Segmentation (79) | EMANet
applies expectation maximization to cluster keys into k clusters

BP-Transformer: Modelling Long-Range Context via Binary Partitioning (15) | BPT | ✔️
attends to distant tokens coarsely and to close tokens in a more fine-grained manner

Compressive Transformers for Long-Range Sequence Modelling (48) | compressive-transformer-pytorch | ✔️
compresses distant tokens instead of just stop_grad()-ing them; a more efficient version of Transformer-XL

Axial Attention in Multidimensional Transformers (36) | axial-attention | ✔️
applies attention on each axis separately

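A minimal sketch of axial attention over a 2-D feature map (projections, heads and positional embeddings omitted):

```python
import torch

def attend(q, k, v):
    """Plain softmax attention over the second-to-last (sequence) axis."""
    w = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

def axial_attention(x):
    """Attend along each axis of a (b, h, w, d) tensor separately:
    rows first, then columns, instead of full (h*w)^2 attention."""
    x = attend(x, x, x)                  # along w: (b, h) act as batch dims
    x = x.transpose(1, 2)                # (b, w, h, d)
    x = attend(x, x, x)                  # along h
    return x.transpose(1, 2)             # back to (b, h, w, d)

x = torch.randn(2, 8, 8, 32)
print(axial_attention(x).shape)   # torch.Size([2, 8, 8, 32])
```
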
Reformer: The Efficient Transformer (216) | trax | ✔️
uses LSH to find close keys

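A deliberately naive sketch of the LSH bucketing idea (loop-based, without the sorting/chunking, multi-round hashing, or causal masking that the real Reformer uses):

```python
import torch

def lsh_attention(x, n_buckets=4):
    """Toy LSH attention: hash the shared query/keys with a random projection,
    then run full attention only inside each hash bucket."""
    b, n, d = x.shape
    proj = torch.randn(d, n_buckets // 2)
    h = x @ proj
    buckets = torch.cat([h, -h], dim=-1).argmax(dim=-1)     # (b, n) bucket ids
    out = torch.zeros_like(x)
    for bi in range(b):
        for bucket in buckets[bi].unique():
            idx = (buckets[bi] == bucket).nonzero(as_tuple=True)[0]
            q = k = v = x[bi, idx]                           # shared-QK attention
            w = torch.softmax(q @ k.t() / d ** 0.5, dim=-1)
            out[bi, idx] = w @ v
    return out

x = torch.randn(2, 16, 32)
print(lsh_attention(x).shape)   # torch.Size([2, 16, 32])
```
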
Sparse Sinkhorn Attention (16) | sinkhorn-transformer | ✔️
uses a cost matrix to limit attention between buckets

Transformer on a Diet (2) | transformer-on-diet | ✔️
dilated transformer, like WaveNet

Time-aware Large Kernel Convolutions (9) | TaLKConvolutions | ✔️
calculates the mean over a dynamic subsequence around each token with the help of a summed-area table

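In 1-D the summed-area table is just a prefix sum, so each window mean costs two lookups; a sketch with hand-supplied integer offsets standing in for the offsets the model would predict:

```python
import torch

def talk_window_mean(x, left, right):
    """Mean of x over a per-token window [i - left_i, i + right_i], computed
    from a prefix sum so each output is O(1) regardless of window size."""
    b, n, d = x.shape
    s = torch.cat([torch.zeros(b, 1, d), x.cumsum(dim=1)], dim=1)  # s[:, i] = sum of x[:, :i]
    pos = torch.arange(n)
    lo = (pos - left).clamp(min=0)                                 # window start (inclusive)
    hi = (pos + right).clamp(max=n - 1)                            # window end (inclusive)
    window_sum = s[:, hi + 1] - s[:, lo]                           # (b, n, d)
    return window_sum / (hi - lo + 1).view(1, n, 1)

x = torch.randn(2, 10, 4)
left = torch.randint(0, 3, (10,))     # stand-ins for the learned offsets
right = torch.randint(0, 3, (10,))
print(talk_window_mean(x, left, right).shape)   # torch.Size([2, 10, 4])
```
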
SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection (2) | - | ✔️
learns the q-k connections, i.e. dynamically creates a sparse attention matrix

Efficient Content-Based Sparse Attention with Routing Transformers (38) | routing-transformer | ✔️
computes attention with same-cluster tokens (computed by online k-means)

Neural Architecture Search for Lightweight Non-Local Networks (11) | AutoNL
computes Q(KV) and also downsamples q, k, v in both the spatial and channel dimensions

Longformer: The Long-Document Transformer (159) | longformer | ✔️
global + blocked attention

ETC: Encoding Long and Structured Inputs in Transformers (16) | -
combines global attention (Star-Transformer with multiple global tokens) with local attention

Multi-scale Transformer Language Models (2) | IN_PAPER | ✔️
UNet-like + retina attention, something close to BP-Transformer

Synthesizer: Rethinking Self-Attention in Transformer Models (26) | Synthesizer-Rethinking-Self-Attention-Transformer-Models | ✔️
does not compute pairwise interactions

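A sketch of the dense Synthesizer variant (single head, fixed maximum length), where the attention logits come from a per-token MLP rather than query-key dot products:

```python
import torch
from torch import nn

class DenseSynthesizer(nn.Module):
    """Dense Synthesizer head: the (n x n) attention weights are produced by an
    MLP applied to each token on its own -- no query/key dot products."""
    def __init__(self, dim, max_len, hidden=64):
        super().__init__()
        self.to_logits = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, max_len))          # one logit per attended position
        self.to_v = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (b, n, dim), n at most max_len
        n = x.shape[1]
        logits = self.to_logits(x)[:, :, :n]     # (b, n, n) synthesised scores
        attn = torch.softmax(logits, dim=-1)
        return attn @ self.to_v(x)

x = torch.randn(2, 16, 32)
print(DenseSynthesizer(32, max_len=64)(x).shape)   # torch.Size([2, 16, 32])
```
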
Jukebox: A Generative Model for Music (45) | jukebox | ✔️
better attention patterns from Sparse Transformer

Input-independent Attention Weights Are Expressive Enough: A Study of Attention in Self-supervised Audio Transformers (0) | - | ✔️
does not compute pairwise interactions and uses fixed mask patterns

GMAT: Global Memory Augmentation for Transformers (2) | gmat
adds global tokens

Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (45) | fast-transformers | ✔️
uses phi(q)(phi(k)v) and also improves the sequential sampling step

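The phi(q)(phi(k)v) factorization with phi(x) = elu(x) + 1, in the non-causal form (the causal/RNN form keeps running sums instead):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """phi(Q) (phi(K)^T V) with phi(x) = elu(x) + 1, normalised by
    phi(Q) (phi(K)^T 1).  Associativity lets us build the small (d x d_v)
    summary phi(K)^T V once, so the cost is linear in sequence length."""
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-2, -1) @ v                               # (b, d, d_v)
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1)      # (b, n, 1) normaliser
    return (q @ kv) / (z + eps)

b, n, d = 2, 1000, 64
q, k, v = (torch.randn(b, n, d) for _ in range(3))
print(linear_attention(q, k, v).shape)   # torch.Size([2, 1000, 64])
```
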
Linformer: Self-Attention with Linear Complexity (47) | linformer-pytorch
projects keys and values from n×d down to k×d

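A minimal Linformer-style layer (assuming a fixed sequence length; the shared-projection and multi-head details of the paper are omitted):

```python
import torch
from torch import nn

class LinformerSelfAttention(nn.Module):
    """Project the length-n keys and values down to k rows with learned
    (k x n) matrices E and F, so attention is (n x k) instead of (n x n)."""
    def __init__(self, dim, seq_len, k=64):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3)
        self.E = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)
        self.F = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)
        self.scale = dim ** -0.5

    def forward(self, x):                         # x: (b, n, dim), n == seq_len
        q, keys, vals = self.to_qkv(x).chunk(3, dim=-1)
        keys = self.E @ keys                      # (b, k, dim): compressed keys
        vals = self.F @ vals                      # (b, k, dim): compressed values
        attn = torch.softmax(q @ keys.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ vals

x = torch.randn(2, 256, 64)
print(LinformerSelfAttention(64, seq_len=256, k=32)(x).shape)  # torch.Size([2, 256, 64])
```
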
Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers (8) | google-research | ✔️
calculates an unbiased stochastic approximation of the attention matrix

Kronecker Attention Networks (1) | kronecker-attention-pytorch
uses horizontal and lateral average matrices

Real-time Semantic Segmentation with Fast Attention (5) | -
l2_norm(q) * (l2_norm(k) * v)

Fast Transformers with Clustered Attention (6) | fast-transformers
groups queries together with LSH

Big Bird: Transformers for Longer Sequences (60) | DeepSpeed
ETC with random connections

Tensor Low-Rank Reconstruction for Semantic Segmentation (3) | -
decomposes the full attention tensor into rank-one tensors (CP decomposition)

Looking for change? Roll the Dice and demand Attention (0) | IN_PAPER
uses the fractal Tanimoto similarity to compare queries with keys inside the attention module

Rethinking Attention with Performers (30) | google-research | ✔️
unbiased approximation of the attention matrix with a softmax kernel

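A sketch of the positive-random-feature estimator behind Performer (plain Gaussian features; the paper's FAVOR+ additionally uses orthogonal features and numerical stabilisation):

```python
import torch

def positive_random_features(x, proj):
    """FAVOR+-style positive features: phi(x) = exp(w^T x - |x|^2 / 2) / sqrt(m),
    which gives an unbiased estimate of the softmax kernel exp(q . k)."""
    m = proj.shape[0]
    return torch.exp(x @ proj.t() - (x ** 2).sum(-1, keepdim=True) / 2) / m ** 0.5

def performer_attention(q, k, v, n_features=128):
    d = q.shape[-1]
    proj = torch.randn(n_features, d)              # random Gaussian projections
    q, k = q * d ** -0.25, k * d ** -0.25          # fold in the 1/sqrt(d) scaling
    q_prime = positive_random_features(q, proj)
    k_prime = positive_random_features(k, proj)
    kv = k_prime.transpose(-2, -1) @ v             # (b, m, d_v) summary
    z = q_prime @ k_prime.sum(dim=-2, keepdim=True).transpose(-2, -1)   # normaliser
    return (q_prime @ kv) / z                      # approx softmax(QK^T/sqrt(d)) V

b, n, d = 2, 1000, 64
q, k, v = (torch.randn(b, n, d) for _ in range(3))
print(performer_attention(q, k, v).shape)   # torch.Size([2, 1000, 64])
```
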
Memformer: The Memory-Augmented Transformer (0) | memformer | ✔️
attends to memory slots + Memory-Replay Backpropagation

SMYRF: Efficient Attention using Asymmetric Clustering (1) | smyrf
LSH with balanced clusters

Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting (0) | Informer2020 | ✔️
sparse attention + funnel-like encoder

Sub-Linear Memory: How to Make Performers SLiM (0) | google-research | ✔️
Performer, but with sublinear memory usage

Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (0) | Nystromformer
uses the Nyström method to approximate the attention matrix

Linear Transformers Are Secretly Fast Weight Memory Systems (0) | fast-weight-transformers | ✔️
shows that linear transformers are basically fast-weight networks + proposes a new kernel function to linearize attention, balancing simplicity and effectiveness

LambdaNetworks: Modeling Long-Range Interactions Without Attention (6) | lambda-networks | ✔️
generates a linear layer based on context + decouples position/context

Random Feature Attention (2) | - | ✔️
kernel approximation, and also transformers-are-RNNs

Articles/Surveys/Benchmarks