
Fast and memory-efficient exact attention

FlashAttention

This repository provides the official implementation of FlashAttention from the following paper.

FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré
Paper: https://arxiv.org/abs/2205.14135
IEEE Spectrum article about our submission to the MLPerf 2.0 benchmark using FlashAttention.

[Figure: FlashAttention]

Usage

We've been very happy to see FlashAttention being widely adopted in such a short time after its release. This page contains a partial list of places where FlashAttention is being used.

Full model code and training script

We have released the full GPT model implementation. We also provide optimized implementations of other layers (e.g., MLP, LayerNorm, cross-entropy loss, rotary embedding). Overall this speeds up training by 3-5x compared to the baseline implementation from Hugging Face, reaching up to 189 TFLOPs/sec per A100, equivalent to 60.6% model FLOPs utilization (we don't need any activation checkpointing).
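For reference, assuming the A100's peak of roughly 312 TFLOPs/sec for dense fp16/bf16 tensor-core matmul, 189 / 312 ≈ 0.606, which is where the 60.6% model FLOPs utilization figure comes from.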

We also include a training script to train GPT-2 on OpenWebText and GPT-3 on The Pile.

Triton implementation of FlashAttention

Phil Tillet (OpenAI) has an experimental implementation of FlashAttention in Triton: https://github.com/openai/triton/blob/master/python/tutorials/06-fused-attention.py

As Triton is a higher-level language than CUDA, it might be easier to understand and experiment with. The notations in the Triton implementation are also closer to what's used in our paper.

We also have an experimental implementation in Triton that supports attention bias (e.g., ALiBi): https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/flash_attn_triton.py

Installation and features

Requirements:

  • CUDA 11.4 and above.
  • PyTorch 1.12 and above.

We recommend the PyTorch container from NVIDIA, which has all the required tools to install FlashAttention.

To install:

pip install flash-attn

Alternatively you can compile from source:

python setup.py install

Interface: src/flash_attention.py

To run the benchmark against PyTorch standard attention:

PYTHONPATH=$PWD python benchmarks/benchmark_flash_attention.py

FlashAttention currently supports the following (a rough capability check based on these constraints is sketched after the list):

  1. Turing, Ampere, Ada, or Hopper GPUs (e.g., H100, A100, RTX 3090, T4, RTX 2080).
  2. fp16 and bf16 (bf16 requires Ampere, Ada, or Hopper GPUs).
  3. Head dimensions that are multiples of 8, up to 128 (e.g., 8, 16, 24, ..., 128). Head dim > 64 backward requires A100 or H100.
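As a convenience, here is a minimal sketch of checking these constraints before calling FlashAttention. The helper flash_attn_supported is not part of the library; it is our own illustration based only on the list above.

import torch

def flash_attn_supported(headdim: int, dtype: torch.dtype, device: str = "cuda") -> bool:
    # Illustrative helper (not part of flash-attn): checks the constraints
    # listed above, i.e. GPU generation, dtype, and head dimension.
    major, minor = torch.cuda.get_device_capability(device)
    sm = 10 * major + minor
    if sm < 75:                                  # needs Turing (SM75) or newer
        return False
    if dtype == torch.bfloat16 and sm < 80:      # bf16 needs Ampere, Ada, or Hopper
        return False
    if dtype not in (torch.float16, torch.bfloat16):
        return False
    return headdim % 8 == 0 and 0 < headdim <= 128

print(flash_attn_supported(64, torch.float16))   # e.g., True on an A100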

Our tentative roadmap:

  1. [Jun 2022] Make package pip-installable [Done, thanks to lucidrains].
  2. [Jun 2022] Support SM86 GPUs (e.g., RTX 3080, 3090) [Done].
  3. [Jun 2022] Support SM75 GPUs (e.g., T4) [Done].
  4. [Jun 2022] Support bf16 [Done].
  5. [Jul 2022] Implement cross-attention [Done].
  6. [Jul 2022] Support head dimension 128 [Done].
  7. [Aug 2022] Fuse rotary embedding [Done].
  8. [Mar 2023] Support SM90 GPUs (H100) [Done].

How to use FlashAttention

Here's a simple example:

import torch
from flash_attn.flash_attention import FlashMHA

# Replace this with your correct GPU device
device = "cuda:0"

# Create attention layer. This is similar to torch.nn.MultiheadAttention,
# and it includes the input and output linear layers
flash_mha = FlashMHA(
    embed_dim=128, # total channels (= num_heads * head_dim)
    num_heads=8, # number of heads
    device=device,
    dtype=torch.float16,
)

# Run forward pass with dummy data
x = torch.randn(
    (64, 256, 128), # (batch, seqlen, embed_dim)
    device=device,
    dtype=torch.float16
)

output = flash_mha(x)[0]

Alternatively, you can import the inner attention layer only (so that the input and output linear layers are not included):

from flash_attn.flash_attention import FlashAttention

# Create the nn.Module
flash_attention = FlashAttention()
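A rough usage sketch for the inner module follows. It assumes the forward pass consumes packed qkv of shape (batch, seqlen, 3, nheads, headdim) in fp16/bf16 on the GPU and returns a tuple of (output, attention weights); check flash_attn/flash_attention.py for the exact signature.

# Continuing the example above: dummy packed qkv of shape (batch, seqlen, 3, nheads, headdim)
qkv = torch.randn(2, 256, 3, 8, 64, device="cuda:0", dtype=torch.float16)

# output has shape (batch, seqlen, nheads, headdim); the second return value
# (attention weights) is None unless explicitly requested
output, _ = flash_attention(qkv, causal=True)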

Or, if you need more fine-grained control, you can import one of the lower-level functions (this is more similar to the torch.nn.functional style):

from flash_attn.flash_attn_interface import flash_attn_unpadded_func

# or

from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_split_func

# etc.
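As a rough illustration of the unpadded interface (argument names and shapes may differ between versions, so treat this as a sketch rather than a reference): it assumes q, k, and v are concatenated over all sequences into shape (total_tokens, nheads, headdim), with sequence boundaries given by int32 cumulative sequence lengths.

import torch
from flash_attn.flash_attn_interface import flash_attn_unpadded_func

device, dtype = "cuda:0", torch.float16
batch, seqlen, nheads, headdim = 4, 512, 8, 64
total = batch * seqlen  # all sequences concatenated along one dimension

q = torch.randn(total, nheads, headdim, device=device, dtype=dtype)
k = torch.randn(total, nheads, headdim, device=device, dtype=dtype)
v = torch.randn(total, nheads, headdim, device=device, dtype=dtype)

# Cumulative sequence lengths: [0, 512, 1024, 1536, 2048] for 4 equal-length sequences
cu_seqlens = torch.arange(0, (batch + 1) * seqlen, seqlen, device=device, dtype=torch.int32)

out = flash_attn_unpadded_func(
    q, k, v,
    cu_seqlens, cu_seqlens,  # cu_seqlens_q, cu_seqlens_k
    seqlen, seqlen,          # max_seqlen_q, max_seqlen_k
    dropout_p=0.0,
    causal=True,
)
# out: (total, nheads, headdim)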

There are also separate Python files with various FlashAttention extensions:

# Import the triton implementation (torch.nn.functional version only)
from flash_attn.flash_attn_triton import flash_attn_func

# Import block sparse attention (nn.Module version)
from flash_attn.flash_blocksparse_attention import FlashBlocksparseMHA, FlashBlocksparseAttention

# Import block sparse attention (torch.nn.functional version)
from flash_attn.flash_blocksparse_attn_interface import flash_blocksparse_attn_func

Speedup and Memory Savings

We present expected speedup (combined forward + backward pass) and memory savings from using FlashAttention against PyTorch standard attention, depending on sequence length, on different GPUs (speedup depends on memory bandwidth - we see more speedup on slower GPU memory).

We currently have benchmarks for these GPUs:

  • A100
  • RTX 3090
  • T4

A100

We display FlashAttention speedup using these parameters (similar to BERT-base):

  • Batch size 8
  • Head dimension 64
  • 12 attention heads

Our graphs show sequence lengths between 128 and 4096 (when standard attention runs out of memory on an A100), but FlashAttention can scale up to sequence length 64K.

Speedup

[Figure: FlashAttention speedup, A100]

We generally see 2-4X speedup at sequence lengths between 128 and 4K, and we see more speedup when using dropout and masking, since we fuse the kernels. At sequence lengths that are popular with language models like 512 and 1K, we see speedups up to 4X when using dropout and masking.

Memory

[Figure: FlashAttention memory savings, A100]

We show memory savings in this graph (note that the memory footprint is the same regardless of whether you use dropout or masking). Memory savings are proportional to sequence length, since standard attention uses memory quadratic in sequence length, whereas FlashAttention uses memory linear in sequence length. We see 10X memory savings at sequence length 2K, and 20X at 4K. As a result, FlashAttention can scale to much longer sequence lengths.

Head Dimension 128

[Figure: FlashAttention speedup, head dimension 128, A100]

We show speedup with head dimension 128, using batch size 16 with 12 heads. Speedup is less than with the smaller head sizes, since we have to make the block size smaller in the tiling. But speedup is still significant, especially with a causal mask.

RTX 3090

For the RTX 3090, we use batch size 12 with 12 attention heads. Memory savings are the same as on an A100, so we'll only show speedup here.

[Figure: FlashAttention speedup, RTX 3090]

We see slightly higher speedups (between 2.5-4.5x) on the RTX 3090, since the memory bandwidth of its GDDR6X is lower than that of the A100's HBM (~900 GB/s vs. ~1.5 TB/s).

T4

We again use batch size 12 with 12 attention heads.

[Figure: FlashAttention speedup, T4]

T4 SRAM is smaller than that of the newer GPUs (64 KB), so we see less speedup (we need to make the block sizes smaller, so we end up doing more reads/writes). This matches the IO complexity analysis in Section 3.2 of our paper.

T4 GPUs are commonly used for inference, so we also measure speedup on the forward pass only (note that these are not directly comparable to the graphs above):

[Figure: FlashAttention speedup, T4, forward pass only]

We see speedups between 2.5x-4.5x on the forward pass.

Tests

We test that FlashAttention produces the same output and gradient as a reference implementation, up to some numerical tolerance. In particular, we check that the maximum numerical error of FlashAttention is at most twice the numerical error of a baseline implementation in PyTorch (across different head dimensions, input dtypes, sequence lengths, and causal / non-causal settings).
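Conceptually, the check looks roughly like the following sketch, with hypothetical tensor names: out_flash is the FlashAttention output, out_pt the fp16/bf16 PyTorch baseline, and out_ref an fp32 reference. The actual tests live in tests/test_flash_attn.py.

def check_against_baseline(out_flash, out_pt, out_ref):
    # Illustrative helper (not taken from the test suite): FlashAttention's max error
    # vs. an fp32 reference must be at most twice the max error of the
    # fp16/bf16 PyTorch baseline vs. the same reference.
    err_flash = (out_flash.float() - out_ref).abs().max().item()
    err_pt = (out_pt.float() - out_ref).abs().max().item()
    assert err_flash <= 2 * err_pt, (err_flash, err_pt)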

To run the tests:

pytest -q -s tests/test_flash_attn.py

When you encounter issues

This alpha release of FlashAttention contains code written for a research project to validate ideas on speeding up attention. We have tested it on several models (BERT, GPT-2, ViT). However, there might still be bugs in the implementation that we hope to iron out in the next few months.

If you encounter any bugs, please open a GitHub issue!

Acknowledgments

Our implementation uses Apex's FMHA code as a starting point.

We thank Young-Jun Ko for the in-depth explanation of his FMHA implementation and for his thoughtful answers to our questions about CUDA.

Citation

If you use this codebase, or otherwise find our work valuable, please cite:

@inproceedings{dao2022flashattention,
  title={Flash{A}ttention: Fast and Memory-Efficient Exact Attention with {IO}-Awareness},
  author={Dao, Tri and Fu, Daniel Y. and Ermon, Stefano and Rudra, Atri and R{\'e}, Christopher},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}

More Repositories

Other repositories from HazyResearch (name: description, language, stars):

  • deepdive: DeepDive (Shell, 1,944 stars)
  • state-spaces: Sequence Modeling with Structured State Spaces (Jupyter Notebook, 1,372 stars)
  • data-centric-ai: Resources for Data Centric AI (TeX, 1,070 stars)
  • safari: Convolutions for Sequence Modeling (Assembly, 817 stars)
  • meerkat: Creative interactive views of any dataset. (Python, 807 stars)
  • hgcn: Hyperbolic Graph Convolutional Networks in PyTorch. (Python, 537 stars)
  • ama_prompting: Ask Me Anything language model prompting (Python, 524 stars)
  • m2: Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (Assembly, 499 stars)
  • H3: Language Modeling with the H3 State Space Model (Assembly, 477 stars)
  • hyena-dna: Official implementation for HyenaDNA, a long-range genomic foundation model built with Hyena (Assembly, 474 stars)
  • evaporate: Data and code for the paper "Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes" (Python, 456 stars)
  • manifest: Prompt programming with FMs. (Python, 428 stars)
  • metal: Snorkel MeTaL: A framework for training models with multi-task weak supervision (Python, 420 stars)
  • fonduer: A knowledge base construction engine for richly formatted data (Python, 401 stars)
  • pdftotree: 🌲 A tool for converting PDF into hOCR with text, tables, and figures being recognized and preserved. (Python, 396 stars)
  • hyperbolics: Hyperbolic Embeddings (Python, 362 stars)
  • flyingsquid: More interactive weak supervision with FlyingSquid (Python, 309 stars)
  • legalbench: An open science effort to benchmark legal reasoning in foundation models (Python, 278 stars)
  • KGEmb: Hyperbolic Knowledge Graph embeddings. (Python, 234 stars)
  • flash-fft-conv: FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores (C++, 211 stars)
  • bootleg: Self-Supervision for Named Entity Disambiguation at the Tail (Python, 210 stars)
  • aisys-building-blocks: Building blocks for foundation models. (202 stars)
  • HypHC: Hyperbolic Hierarchical Clustering. (Python, 181 stars)
  • TART: TART: A plug-and-play Transformer module for task-agnostic reasoning (Python, 175 stars)
  • tanda: Learning to Compose Domain-Specific Transformations for Data Augmentation (Python, 169 stars)
  • fly (Python, 162 stars)
  • butterfly: Butterfly matrix multiplication in PyTorch (Python, 153 stars)
  • spacetime: Code for SpaceTime 🌌⏱️, proposed in Effectively Modeling Time Series with Simple Discrete State Spaces, ICLR 2023. (Python, 146 stars)
  • babble: A system for generating training labels via natural language explanations (Python, 144 stars)
  • EmptyHeaded: Your worst case is our best case. (C++, 136 stars)
  • domino (Python, 132 stars)
  • zoology: Understand and test language model architectures on synthetic tasks. (Python, 131 stars)
  • blocking-tutorial (C++, 126 stars)
  • based: Code for exploring Based models (Python, 117 stars)
  • hippo-code (Python, 116 stars)
  • mindbender: Tools for iterative knowledge base development with DeepDive (CoffeeScript, 116 stars)
  • reef: Automatically labeling training data (Jupyter Notebook, 103 stars)
  • fonduer-tutorials: A collection of simple tutorials for using Fonduer (Jupyter Notebook, 100 stars)
  • TreeStructure: Table Extraction Tool (Jupyter Notebook, 90 stars)
  • fm_data_tasks: Foundation Models for Data Tasks (Python, 88 stars)
  • epoxy: Interactive Model Iteration with Weak Supervision and Pre-Trained Embeddings (Python, 76 stars)
  • CaffeConTroll (C++, 75 stars)
  • structured-nets: Structured matrices for compressing neural networks (Python, 64 stars)
  • HoroPCA: Hyperbolic PCA via Horospherical Projections (Python, 63 stars)
  • hidden-stratification: Combating hidden stratification with GEORGE (Jupyter Notebook, 56 stars)
  • numbskull: Numba-based version of DimmWitted Gibbs sampler (Python, 45 stars)
  • model-patching: Model Patching: Closing the Subgroup Performance Gap with Data Augmentation (Python, 42 stars)
  • cs145-notebooks-2016: Public materials for the Fall 2016 offering of CS145 (Jupyter Notebook, 35 stars)
  • mandoline: (ICML 2021) Mandoline: Model Evaluation under Distribution Shift (Python, 30 stars)
  • mongoose: A Learnable LSH Framework for Efficient NN Training (Python, 28 stars)
  • skill-it: Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models (Jupyter Notebook, 27 stars)
  • thanos-code: Code release for the paper Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning (Python, 24 stars)
  • tuffy: Tuffy, a Markov Logic Network solver (Java, 23 stars)
  • snorkel-superglue: Applying Snorkel to SuperGLUE (Jupyter Notebook, 23 stars)
  • ukb-cardiac-mri: Weakly Supervised MRI Series Classification for the UK Biobank (Python, 22 stars)
  • ludwig-benchmarking-toolkit: Ludwig benchmark (Python, 19 stars)
  • ddlog: Compiler for writing DeepDive applications in a Datalog-like language (⚠️ repo moved to deepdive) (Scala, 19 stars)
  • augmentation_code: Reproducible code for Augmentation paper (Python, 18 stars)
  • correct-n-contrast: Official code repository for Correct-N-Contrast (Python, 18 stars)
  • smallfry (Python, 18 stars)
  • tabi: Code release for Type-Aware Bi-Encoders for Open-Domain Entity Retrieval (Python, 17 stars)
  • lp_rffs: Low precision random Fourier features for kernel approximation (Python, 17 stars)
  • sampler: DimmWitted Gibbs Sampler in C++ (⚠️ repo moved to deepdive) (C++, 17 stars)
  • random_embedding (Python, 16 stars)
  • snorkel-biocorpus (Python, 16 stars)
  • bazaar (JavaScript, 14 stars)
  • ddbiolib: DeepDive Biomedical Tools (Python, 13 stars)
  • anchor-stability: A study of the downstream instability of word embeddings (Jupyter Notebook, 12 stars)
  • Omnivore: Omnivore Optimizer and Distributed CcT (C++, 12 stars)
  • embroid: Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification (Jupyter Notebook, 11 stars)
  • dd-genomics: The Genomics DeepDive project (Python, 11 stars)
  • dimmwitted (C++, 10 stars)
  • medical-ned-integration: Cross-domain data integration for named entity disambiguation in biomedical text (Python, 10 stars)
  • torchhalp (Python, 9 stars)
  • cross-modal-ws-demo (HTML, 9 stars)
  • liger: Liger: Fusing Weak Supervision and Model Embeddings (Python, 8 stars)
  • treedlib (Jupyter Notebook, 8 stars)
  • Accelerated-PCA: Accelerated Stochastic Power Iteration with Momentum (Jupyter Notebook, 8 stars)
  • hyperE (HTML, 7 stars)
  • chinstrap (C++, 6 stars)
  • ivy-tutorial: An Introductory Tutorial for Ivy (Jupyter Notebook, 6 stars)
  • quadrature-features: Code to generate kernel features using Gaussian quadrature (Python, 5 stars)
  • icij-maude: Weakly supervised classification of adverse event reports from the FDA's MAUDE database. (Python, 5 stars)
  • observational: Observational Supervision for Medical Image Classification using Gaze Data (Jupyter Notebook, 5 stars)
  • librarian: DeepDive Librarian for managing all data sets we publish and receive (Python, 3 stars)
  • halp (Python, 3 stars)
  • bert-pretraining (Python, 2 stars)
  • d3m-model-search: D3M Model Search Component (Python, 2 stars)
  • elementary: Data services and APIs (Python, 1 star)
  • dependency_model: Structure learning code from the [ICML'19 paper](https://arxiv.org/abs/1903.05844) (Python, 1 star)