
LightSeq: A High Performance Library for Sequence Processing and Generation

Release Notes

[2022.10.25] Released v3.0.0, which supports int8 mixed-precision training and inference. [Chinese introduction]

[2021.06.18] Released v2.0.0, which supports fp16 mixed-precision training. [Chinese introduction]

[2019.12.06] Released v1.0.0, which supports fp16 mixed-precision inference. [Chinese introduction]

Introduction

LightSeq is a high performance training and inference library for sequence processing and generation, implemented in CUDA. It enables highly efficient computation of modern NLP and CV models such as BERT, GPT, and Transformer. It is therefore well suited for machine translation, text generation, image classification, and other sequence-related tasks.

The library is built on top of the official CUDA libraries (cuBLAS, Thrust, CUB) and custom kernel functions which are specially fused and optimized for the Transformer model family. In addition to model components, the inference library also provides an easy-to-deploy model management and serving backend based on TensorRT Inference Server. With LightSeq, one can easily develop modified Transformer architectures with little additional code.

LightSeq training and inference are very fast. Below is the overall performance:

  • LightSeq fp16 training achieves a speedup of up to 3x, compared to PyTorch fp16 training.
  • LightSeq int8 training achieves a speedup of up to 5x, compared to PyTorch QAT (i.e., quantization aware training).
  • LightSeq fp16 and int8 inference achieve speedups of up to 12x and 15x, respectively, compared to PyTorch fp16 inference.

Support Matrix

LightSeq supports multiple features, as shown in the table below.

| Features | Support List |
|----------|--------------|
| Model | Transformer, BERT, BART, GPT2, ViT, T5, MT5, XGLM, VAE, Multilingual, MoE |
| Layer | embedding, encoder, decoder, criterion, optimizer |
| Precision | fp32, fp16, int8 |
| Mode | training, inference |
| Compatibility | Fairseq, Hugging Face, DeepSpeed |
| Decoding Algorithm | beam search, diverse beam search, sampling, CRF |
| Others | gradient communication quantization, auto-tune GEMM algorithm |

The table below shows the running modes and precision currently supported by different models.

| Models | fp16 Training | fp16 Inference | int8 Training | int8 Inference |
|--------|---------------|----------------|---------------|----------------|
| Transformer | Yes | Yes | Yes | Yes |
| BERT | Yes | Yes | Yes | Yes |
| GPT2 | Yes | Yes | Yes | Yes |
| BART | Yes | Yes | - | - |
| T5 | - | Yes | - | - |
| MT5 | - | Yes | - | - |
| XGLM | - | Yes | - | - |
| ViT | Yes | Yes | Yes | Yes |
| VAE | - | Yes | - | - |
| Multilingual | - | Yes | - | Yes |
| MoE | - | Yes | - | - |

Performance

We test the speedup of LightSeq training and inference using both fp16 and int8 mixed precision on Transformer and BERT models. The baseline is PyTorch fp16 mixed precision. Training experiments are run on one A100 GPU and inference experiments are run on eight A100 GPUs.

More performance results are available here.

Speedup of Transformer Training

| Batch Token Size | PyTorch QAT | LightSeq fp16 | LightSeq int8 |
|------------------|-------------|---------------|---------------|
| 512 | 0.36 | 1.99 | 1.86 |
| 1024 | 0.37 | 1.78 | 1.69 |
| 2048 | 0.37 | 1.56 | 1.50 |
| 4096 | 0.39 | 1.47 | 1.44 |
| 8192 | 0.41 | 1.44 | 1.44 |
| 15000 | 0.43 | 1.44 | 1.44 |

Speedup of BERT Training

| Batch Token Size | PyTorch QAT | LightSeq fp16 | LightSeq int8 |
|------------------|-------------|---------------|---------------|
| 8 | 0.45 | 2.12 | 1.99 |
| 16 | 0.44 | 1.92 | 1.80 |
| 32 | 0.42 | 1.59 | 1.52 |
| 64 | 0.46 | 1.62 | 1.58 |
| 128 | 0.46 | 1.74 | 1.70 |
| 256 | 0.46 | 1.68 | 1.73 |

Speedup of Transformer Inference

| Batch Size | Sequence Length | LightSeq fp16 | LightSeq int8 |
|------------|-----------------|---------------|---------------|
| 1 | 8 | 8.00 | 9.33 |
| 1 | 32 | 6.48 | 7.38 |
| 1 | 128 | 6.24 | 6.19 |
| 8 | 8 | 9.38 | 10.71 |
| 8 | 32 | 8.24 | 8.75 |
| 8 | 128 | 6.83 | 7.28 |
| 32 | 8 | 11.82 | 14.44 |
| 32 | 32 | 9.68 | 11.15 |
| 32 | 128 | 6.68 | 7.74 |

Speedup of BERT Inference

| Batch Size | Sequence Length | LightSeq fp16 | LightSeq int8 |
|------------|-----------------|---------------|---------------|
| 1 | 8 | 9.22 | 9.87 |
| 1 | 32 | 10.51 | 11.30 |
| 1 | 128 | 9.96 | 10.85 |
| 8 | 8 | 9.88 | 10.33 |
| 8 | 32 | 7.79 | 8.22 |
| 8 | 128 | 4.04 | 4.35 |
| 32 | 8 | 10.60 | 11.02 |
| 32 | 32 | 8.11 | 8.85 |
| 32 | 128 | 1.82 | 2.04 |

Installation

Install from PyPI

You can install LightSeq from PyPI; the prebuilt wheels only support Python 3.6 to 3.8 on Linux:

pip install lightseq
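
To verify the installation, you can try importing the training and inference submodules that the examples below rely on (a minimal sanity check):

# Minimal import check; both submodules appear in the usage examples below.
import lightseq.training as lst
import lightseq.inference as lsi
print("LightSeq training and inference modules imported successfully")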

Build from Source

You can also build from source:

PATH=/usr/local/hdf5/:$PATH ENABLE_FP32=0 ENABLE_DEBUG=0 pip install -e $PROJECT_DIR

A detailed build guide is available here.

Getting Started

We provide several samples here to show the usage of LightSeq. Refer to the complete user guide and examples for more details.

LightSeq Training from Scratch

You can use the modules provided by LightSeq to build your own models. The following is an example of building a Transformer encoder layer.

First, import the LightSeq Transformer encoder layer module:

from lightseq.training import LSTransformerEncoderLayer

Then create an encoder configuration and initialize a LightSeq Transformer encoder layer with it:

config = LSTransformerEncoderLayer.get_config(
    max_batch_tokens=4096,
    max_seq_len=512,
    hidden_size=1024,
    intermediate_size=4096,
    nhead=16,
    attn_prob_dropout_ratio=0.1,
    activation_dropout_ratio=0.1,
    hidden_dropout_ratio=0.1,
    pre_layer_norm=True,
    activation_fn="relu",
    fp16=True,
    local_rank=0,
)
layer = LSTransformerEncoderLayer(config)
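
The layer then behaves like a regular PyTorch module. Below is a minimal sketch of a forward and backward pass; the (hidden_states, padding_mask) calling convention and the mask semantics are assumptions for illustration, so consult the user guide for the exact signature.

import torch

# Dummy fp16 GPU inputs shaped (batch, seq_len, hidden_size) to match
# the config above; the calling convention is an assumption.
x = torch.randn(8, 128, 1024, dtype=torch.half, device="cuda")
pad_mask = torch.zeros(8, 128, dtype=torch.half, device="cuda")  # assumed: 0 = token kept

out = layer(x, pad_mask)      # output has the same shape as x
out.float().sum().backward()  # gradients flow through the fused kernels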

In addition to encoder layers, other modules can be created using similar methods and then trained as normal PyTorch models; for instance, the sketch below builds an embedding layer with the same config pattern.
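
A minimal sketch, assuming the embedding layer exposes the same get_config pattern as the encoder layer; the parameter names below are modeled on the encoder-layer config and should be checked against the user guide.

from lightseq.training import LSTransformerEmbeddingLayer

# Config fields mirror the encoder-layer pattern above; the names are
# illustrative assumptions, not a verified signature.
emb_config = LSTransformerEmbeddingLayer.get_config(
    vocab_size=32000,
    embedding_dim=1024,
    max_batch_tokens=4096,
    max_seq_len=512,
    padding_idx=0,
    dropout=0.1,
    fp16=True,
    local_rank=0,
)
embedding = LSTransformerEmbeddingLayer(emb_config)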

More usage is available here.

LightSeq Training from Fairseq

LightSeq integrates all of its fast modules into Fairseq.

First install the following two requirements:

pip install fairseq==0.10.2 sacremoses

You can train an fp16 mixed-precision translation model on the WMT14 English-to-German dataset by running:

sh examples/training/fairseq/ls_fairseq_wmt14en2de.sh

(Optional) You can then start int8 mixed-precision training from the fp16 pre-trained model by running:

sh examples/training/fairseq/ls_fairseq_quant_wmt14en2de.sh

More usage is available here.

LightSeq Training from Hugging Face BERT

LightSeq replaces the encoder layers of Hugging Face BERT with LightSeq fast layers.

First you should install these requirements:

pip install transformers seqeval datasets

Before training, switch to the following directory:

cd examples/training/huggingface/bert

Then you can easily fine-tune BERT for different tasks. Taking the named entity recognition task as an example, you can train BERT with fp16 mixed precision using:

sh task_ner/run_ner.sh

(Optional) You can also start int8 mixed-precision training from the fp16 pre-trained model by running:

sh task_ner/run_quant_ner.sh

More usage is available here.

LightSeq Inference from Fairseq

After training with the above scripts, you can quickly run inference on the models using LightSeq.

First, convert the fp16 PyTorch weights to LightSeq protobuf or HDF5:

python export/fairseq/ls_fs_transformer_export.py

(Optional) You can also convert the int8 PyTorch weights to LightSeq protobuf or HDF5:

python export/fairseq/ls_fs_quant_transformer_export.py
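
If you export to HDF5, you can quickly confirm the export succeeded by listing the file's contents with h5py (the file name below is a placeholder for your exported checkpoint):

import h5py

# Print every group and dataset name in the exported checkpoint;
# "transformer.hdf5" is a placeholder path.
with h5py.File("transformer.hdf5", "r") as f:
    f.visit(print)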

Once you have the LightSeq weights, you can quickly run inference using the following code:

import lightseq.inference as lsi
model = lsi.Transformer(MODEL_PATH, MAX_BATCH_SIZE)
results = model.infer([[63, 47, 65, 1507, 88, 74, 10, 2057, 362, 9, 284, 6, 2, 1]])

Here MODEL_PATH is the path to your LightSeq weights and MAX_BATCH_SIZE is the maximum batch size of your input sentences.

You can also quickly infer the int8 LightSeq weights by replacing lsi.Transformer with lsi.QuantTransformer.
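
For batched inference, infer takes one token-id list per sentence, up to MAX_BATCH_SIZE sentences at a time. A minimal sketch, where the model path, the token ids, and the pad id are placeholders:

import lightseq.inference as lsi

model = lsi.Transformer("lightseq_transformer.pb", 8)  # placeholder path, max batch size 8

batch = [
    [63, 47, 65, 1507, 88, 74, 10, 2, 1],
    [284, 362, 9, 6, 2, 1, 0, 0, 0],  # assumption: shorter sentences padded to equal length (pad id 0 here)
]
results = model.infer(batch)
print(results)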

More usage is available here.

LightSeq Inference from Hugging Face BERT

We provide an end-to-end bert-base example to show how fast LightSeq is compared to the original Hugging Face implementation.

First install the requirements and change to the example directory:

pip install transformers
cd examples/inference/python

Then you can check the performance by simply running the following commands. hf_bert_export.py converts the PyTorch weights to LightSeq protobuf or HDF5.

python export/huggingface/hf_bert_export.py
python test/ls_bert.py

More usage is available here.

LightSeq Deployment Using Inference Server

We provide a Docker image which contains tritonserver and LightSeq's dynamic link libraries; you can deploy an inference server by simply replacing the model file with your own.

sudo docker pull hexisyztem/tritonserver_lightseq:22.01-1
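
Once the container is running, you can check that the server is up using the official Triton Python client (a minimal sketch; the localhost:8000 port mapping is an assumption about how you started the container):

# pip install tritonclient[http]
import tritonclient.http as httpclient

# Assumes Triton's HTTP port is mapped to localhost:8000.
client = httpclient.InferenceServerClient(url="localhost:8000")
print("server live:", client.is_server_live())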

More usage is available here.

Cite Us

If you use LightSeq in your research, please cite the following papers.

@InProceedings{wang2021lightseq,
    title = "{L}ight{S}eq: A High Performance Inference Library for Transformers",
    author = "Wang, Xiaohui and Xiong, Ying and Wei, Yang and Wang, Mingxuan and Li, Lei",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (NAACL-HLT)",
    month = jun,
    year = "2021",
    publisher = "Association for Computational Linguistics",
    pages = "113--120",
}

@article{wang2021lightseq2,
  title={LightSeq2: Accelerated Training for Transformer-based Models on GPUs},
  author={Wang, Xiaohui and Xiong, Ying and Qian, Xian and Wei, Yang and Li, Lei and Wang, Mingxuan},
  journal={arXiv preprint arXiv:2110.05722},
  year={2021}
}

We are Hiring!

The LightSeq team is hiring interns and full-time employees with backgrounds in deep learning systems, natural language processing, computer vision, speech, etc. We are based in Beijing and Shanghai. If you are interested, please send your resume to [email protected].
