A curated list of efficient attention modules (last update: Fri, 11 Sep 2020 13:26:27 +0000)
Paper (citations) | Implementation | Complexity | AutoRegressive | Main Idea |
---|---|---|---|---|
Generating Wikipedia by Summarizing Long Sequences (212) | memory-compressed-attention | | ❌ | compresses keys and values + blocked attention |
CBAM: Convolutional Block Attention Module (738) | attention-module | | ❌ | combines SE attention with a per-pixel (local) weight |
CCNet: Criss-Cross Attention for Semantic Segmentation (166) | CCNet | | ❌ | each pixel attends to its row and column simultaneously |
Efficient Attention: Attention with Linear Complexities (2) | efficient-attention | | ❌ | Softmax(Q)*(Softmax(K^T)*V) (see the sketch below the table) |
Star-Transformer (27) | fastNLP | | ❌ | uses a relay (global) node and attends to/from that node |
GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond (112) | GCNet | | ❌ | squeeze-and-excitation with attention pooling (instead of GAP) |
Generating Long Sequences with Sparse Transformers (154) | DeepSpeed | | ✔️ | sparse block-based attention |
SCRAM: Spatially Coherent Randomized Attention Maps (1) | - | | ✔️ | uses PatchMatch to find close keys |
Interlaced Sparse Self-Attention for Semantic Segmentation (15) | IN_PAPER | | ✔️ | combination of short-range and then long-range (dilated) attention |
Permutohedral Attention Module for Efficient Non-Local Neural Networks (2) | Permutohedral_attention_module | | ❌ | uses the permutohedral lattice approximation algorithm to approximate the attention output |
Large Memory Layers with Product Keys (31) | XLM | | ✔️ | searches for nearest-neighbor keys |
Expectation-Maximization Attention Networks for Semantic Segmentation (42) | EMANet | | ❌ | applies expectation maximization to cluster keys into k clusters |
Compressive Transformers for Long-Range Sequence Modelling (21) | compressive-transformer-pytorch | | ✔️ | compresses distant tokens instead of just stop_grad()-ing them; a more efficient version of Transformer-XL |
BP-Transformer: Modelling Long-Range Context via Binary Partitioning (9) | BPT | | ✔️ | attends to distant tokens coarsely and to close tokens in a more fine-grained manner |
Axial Attention in Multidimensional Transformers (5) | axial-attention | | ✔️ | applies attention on each axis separately (see the sketch below the table) |
Reformer: The Efficient Transformer (79) | trax | | ✔️ | uses LSH to find close keys |
Transformer on a Diet (2) | transformer-on-diet | | ✔️ | dilated transformer, like WaveNet |
Sparse Sinkhorn Attention (4) | sinkhorn-transformer | | ✔️ | uses a cost matrix to limit attention between buckets |
SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection (1) | - | | ✔️ | learns the q, k connections, i.e. dynamically creates a sparse attention matrix |
Efficient Content-Based Sparse Attention with Routing Transformers (12) | routing-transformer | | ✔️ | computes attention with same-cluster tokens (clusters computed by online k-means) |
ETC: Encoding Long and Structured Data in Transformers (2) | - | | ❌ | combines global attention (Star-Transformer with multiple global tokens) with local attention |
Longformer: The Long-Document Transformer (21) | longformer | | ✔️ | global + blocked attention |
Neural Architecture Search for Lightweight Non-Local Networks (4) | AutoNL | | ❌ | computes Q(KV) and also downsamples q, k, v in both the spatial and channel dimensions |
Multi-scale Transformer Language Models (1) | IN_PAPER | | ✔️ | UNet-like + retina attention; close to BP-Transformer |
Jukebox: A Generative Model for Music (13) | jukebox | | ✔️ | better attention patterns from Sparse Transformer |
Synthesizer: Rethinking Self-Attention in Transformer Models (9) | - | | ✔️ | does not compute pairwise interactions |
GMAT: Global Memory Augmentation for Transformers (0) | gmat | | ❌ | adds global tokens |
Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer (0) | - | | ✔️ | does not compute pairwise interactions and uses fixed mask patterns |
Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers (1) | google-research | | ✔️ | calculates an unbiased stochastic approximation of the attention matrix (see the sketch below the table) |
Linformer: Self-Attention with Linear Complexity (3) | linformer-pytorch | | ❌ | projects keys and values from n×d down to k×d (see the sketch below the table) |
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (3) | fast-transformers | | ✔️ | uses phi(q)(phi(k)^T v) and also improves the sequential sampling step (see the sketch below the table) |
Real-time Semantic Segmentation with Fast Attention (0) | - | | ❌ | l2_norm(q)*(l2_norm(k)*v) |
Fast Transformers with Clustered Attention (0) | fast-transformers | | ❌ | groups queries together with LSH |
Kronecker Attention Networks (0) | kronecker-attention-pytorch | | ❌ | uses horizontal and lateral average matrices |
Big Bird: Transformers for Longer Sequences (2) | - | | ❌ | ETC with random connections |
Tensor Low-Rank Reconstruction for Semantic Segmentation (1) | - | | ❌ | decomposes the full attention tensor into rank-one tensors (CP decomposition) |
Looking for change? Roll the Dice and demand Attention (0) | IN_PAPER | | ❌ | uses the fractal Tanimoto similarity to compare queries with keys inside the attention module |
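
A few of the entries above are compact enough to sketch in code. For Efficient Attention, the whole idea is the Softmax(Q)*(Softmax(K^T)*V) regrouping; the following minimal PyTorch sketch (single head, no learned projections; the function name and shapes are assumptions, not the efficient-attention repo's API) shows it:

```python
import torch

def efficient_attention(q, k, v):
    """Softmax(Q) @ (Softmax(K)^T @ V): the two softmaxes are taken over
    different axes, so a small (d_k x d_v) context matrix is built first
    and the n x n attention map is never materialised.
    Shapes: q, k: (batch, n, d_k), v: (batch, n, d_v)."""
    q = q.softmax(dim=-1)              # normalise each query over its d_k features
    k = k.softmax(dim=1)               # normalise keys over the n sequence positions
    context = k.transpose(1, 2) @ v    # (batch, d_k, d_v), costs O(n * d_k * d_v)
    return q @ context                 # (batch, n, d_v)

# toy usage
q, k, v = (torch.randn(2, 1024, 64) for _ in range(3))
out = efficient_attention(q, k, v)     # (2, 1024, 64)
```

The point shared with the other linear-attention entries is that the small context matrix is formed before touching the queries, so the cost is linear in sequence length.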
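For Axial Attention in Multidimensional Transformers, here is a minimal sketch of the "attend along each axis separately" idea on a 2-D feature map; the helper names and the choice to use the input itself as q, k and v are assumptions for brevity, not the axial-attention package's interface:

```python
import torch

def attend(q, k, v):
    """Plain softmax attention over the second-to-last axis."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v

def axial_attention_2d(x, h, w):
    """Attend along the width axis, then along the height axis, so the cost
    is O(h*w*(h + w)) instead of O((h*w)^2) for a flattened (batch, h*w, d) map."""
    b, n, d = x.shape
    x = x.view(b, h, w, d)
    x = attend(x, x, x)                # each row of w pixels attends within itself
    x = x.transpose(1, 2)              # (b, w, h, d)
    x = attend(x, x, x)                # each column of h pixels attends within itself
    return x.transpose(1, 2).reshape(b, n, d)

out = axial_attention_2d(torch.randn(2, 32 * 32, 64), h=32, w=32)   # (2, 1024, 64)
```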
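For the protein-LM entry (the Performer/FAVOR+ line of work), here is a rough sketch of the unbiased softmax-kernel approximation with positive random features; it uses plain Gaussian features and skips the orthogonalisation and numerical-stability tricks of the real method, so treat it as an illustration only:

```python
import torch

def performer_attention(q, k, v, m=256):
    """Approximate softmax attention with positive random features:
    E_w[exp(w.q - |q|^2/2) * exp(w.k - |k|^2/2)] = exp(q.k) for w ~ N(0, I),
    so phi(Q) @ (phi(K)^T @ V) estimates softmax attention in O(n*m*d)."""
    b, n, d = q.shape
    q, k = q * d ** -0.25, k * d ** -0.25          # fold in the usual 1/sqrt(d) scaling
    w = torch.randn(m, d)                          # plain Gaussian projections (not orthogonal)

    def phi(x):                                    # positive random features, shape (b, n, m)
        return torch.exp(x @ w.t() - (x ** 2).sum(-1, keepdim=True) / 2) / m ** 0.5

    q, k = phi(q), phi(k)
    kv = k.transpose(1, 2) @ v                             # (b, m, d_v)
    z = q @ k.sum(dim=1, keepdim=True).transpose(1, 2)     # (b, n, 1) row normaliser
    return (q @ kv) / (z + 1e-6)
```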
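For Linformer, here is a minimal single-head sketch of projecting keys and values from length n down to k before the usual attention; the class name, fixed sequence length and initialisation are assumptions, not the linformer-pytorch interface:

```python
import torch
import torch.nn as nn

class LinformerAttention(nn.Module):
    """Single-head sketch: keys and values are projected from sequence
    length n down to k, so the attention map is n x k instead of n x n."""
    def __init__(self, n, d, k=256):
        super().__init__()
        self.scale = d ** -0.5
        self.E = nn.Parameter(torch.randn(k, n) / n ** 0.5)   # key projection, (k, n)
        self.F = nn.Parameter(torch.randn(k, n) / n ** 0.5)   # value projection, (k, n)

    def forward(self, q, k, v):                # each (batch, n, d)
        k = self.E @ k                         # (batch, k, d)
        v = self.F @ v                         # (batch, k, d)
        attn = (q @ k.transpose(1, 2)) * self.scale   # (batch, n, k)
        return attn.softmax(dim=-1) @ v               # (batch, n, d)

attn = LinformerAttention(n=4096, d=64, k=256)
q = k = v = torch.randn(2, 4096, 64)
out = attn(q, k, v)                            # (2, 4096, 64)
```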
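For Transformers are RNNs, here is a minimal non-causal sketch of the phi(q)(phi(k)^T v) linearisation with the paper's feature map phi(x) = elu(x) + 1; the causal/generation speed-up (running the same sums as a recurrence) is not shown:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """phi(Q) @ (phi(K)^T @ V) with phi(x) = elu(x) + 1, divided by the
    matching normaliser phi(Q) @ sum(phi(K)); linear in sequence length.
    Shapes: (batch, n, d)."""
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(1, 2) @ v                             # (batch, d, d_v)
    z = q @ k.sum(dim=1, keepdim=True).transpose(1, 2)     # (batch, n, 1)
    return (q @ kv) / (z + eps)
```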