• Stars: 1,442
• Rank: 31,477 (Top 0.7%)
• Language: C++
• License: Other
• Created: about 4 years ago
• Updated: 11 months ago


Repository Details

A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT2, decoders, etc.) on CPU and GPU.

TurboTransformers: a fast and user-friendly runtime for transformer inference on CPU and GPU


Make transformers serving fast by adding a turbo to your inference engine!

WeChat AI has open-sourced TurboTransformers with the following characteristics:

  1. Supports both transformer encoders and decoders.
  2. Supports variable-length inputs. No time-consuming offline tuning is required; you can change the batch size and sequence length at runtime.
  3. Excellent CPU / GPU performance.
  4. Excellent usability. TurboTransformers supports Python and C++ APIs.
  5. Smart batching. Minimizes zero-padding overhead for a batch of requests of different lengths. It can be used as a plugin for PyTorch; end-to-end acceleration is obtained by adding a few lines of Python code.

TurboTransformers has been applied to multiple online BERT service scenarios in Tencent. For example, it brings 1.88x acceleration to the WeChat FAQ service, 2.11x acceleration to the public cloud sentiment analysis service, and 13.6x acceleration to the QQ recommendation system. It has also been used to build services such as chit-chat, search, and recommendation.

The following table is a comparison of TurboTransformers and related work.

Related Works                 | Performance     | Need Preprocess | Variable Length | Usage
pytorch JIT (CPU)             | Fast            | Yes             | No              | Hard
TensorRT (GPU)                | Fast            | Yes             | No              | Hard
tf-Faster Transformers (GPU)  | Fast            | Yes             | No              | Hard
ONNX-runtime (CPU/GPU)        | Fast/Fast       | No              | Yes             | Medium
tensorflow-1.x (CPU/GPU)      | Slow/Medium     | Yes             | No              | Easy
pytorch (CPU/GPU)             | Medium/Medium   | No              | Yes             | Easy
turbo-transformers (CPU/GPU)  | Fastest/Fastest | No              | Yes             | Easy

Supported Models

We currently support the following transformer models.

Boost BERT Inference in 2 Lines of Python Code

import torch
import transformers
import turbo_transformers

if __name__ == "__main__":
    turbo_transformers.set_num_threads(4)
    torch.set_num_threads(4)
    model_id = "bert-base-uncased"
    model = transformers.BertModel.from_pretrained(model_id)
    model.eval()
    cfg = model.config

    input_ids = torch.tensor(
        ([12166, 10699, 16752, 4454], [5342, 16471, 817, 16022]),
        dtype=torch.long)
    position_ids = torch.tensor(([1, 0, 0, 0], [1, 1, 1, 0]), dtype=torch.long)
    segment_ids = torch.tensor(([1, 1, 1, 0], [1, 0, 0, 0]), dtype=torch.long)
    torch.set_grad_enabled(False)
    torch_res = model(
        input_ids, position_ids=position_ids, token_type_ids=segment_ids
    )  # sequence_output, pooled_output, (hidden_states), (attentions)
    torch_sequence_output = torch_res[0][:, 0, :]
    tt_model = turbo_transformers.BertModel.from_torch(model)
    res = tt_model(
        input_ids, position_ids=position_ids,
        token_type_ids=segment_ids)  # pooled_output, sequence_output
    tt_sequence_output = res[0]
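To check that Turbo's output matches the PyTorch baseline, a simple tolerance-based comparison works. The sketch below uses plain NumPy on stand-in arrays (in real use you would pass the two sequence outputs from the example above); the 1e-2 tolerance follows the known issue that results may differ beyond the second decimal place.

```python
import numpy as np

def outputs_close(torch_out, turbo_out, atol=1e-2):
    """Compare two [batch, hidden] output arrays elementwise.

    atol=1e-2 reflects the known-issue note that results may differ
    beyond the second decimal place due to the approximate GELU.
    """
    a = np.asarray(torch_out, dtype=np.float64)
    b = np.asarray(turbo_out, dtype=np.float64)
    return np.allclose(a, b, atol=atol)

# Illustration with stand-in arrays (real usage would pass
# torch_sequence_output.numpy() and tt_sequence_output.numpy()):
a = np.array([[0.1234, -0.5678], [0.9100, 0.0021]])
b = a + 0.004  # differences past the second decimal place
print(outputs_close(a, b))  # True at atol=1e-2
```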

Installation

Note that the build scripts only apply to specific OS and software versions (PyTorch, OpenNMT, transformers, etc.). Please adjust them to your needs.

CPU

git clone https://github.com/Tencent/TurboTransformers --recursive
  1. Build docker images and containers on your machine.
sh tools/build_docker_cpu.sh
# optional: to compare against onnxruntime-mkldnn in the benchmark, set BUILD_TYPE=dev to compile onnxruntime into the docker image:
env BUILD_TYPE=dev sh tools/build_docker_cpu.sh
docker run -it --rm --name=turbort -v $PWD:/workspace your_image_name /bin/bash
  2. Install turbo in docker.

Method 1: build and run the unit tests

cd /workspace
sh tools/build_and_run_unittests.sh $PWD -DWITH_GPU=OFF
# You can switch between OpenBLAS and MKL by modifying this line in CMakeLists.txt:
# set(BLAS_PROVIDER "mkl" CACHE STRING "Set the blas provider library, in [openblas, mkl, blis]")

Method 2: build without running the unit tests

cd /workspace
mkdir -p build && cd build
cmake .. -DWITH_GPU=OFF
make -j 4
pip install $(find . -name "*.whl")
  3. Run the benchmark (optional) in docker; compare with pytorch, torch-JIT, and onnxruntime.
cd benchmark
bash run_benchmark.sh
  4. Install conda packages in docker (optional).
sh tools/build_conda_package.sh
# The conda package will be in /workspace/dist/*.tar.bz2
# When using turbo_transformers in other environments outside this container: conda install your_root_path/dist/*.tar.bz2

We also provide a docker image on Docker Hub containing the CPU version of TurboTransformers, as well as related works, i.e., onnxruntime v1.2.0 and pytorch-jit:

docker pull thufeifeibear/turbo_transformers_cpu:latest

GPU

git clone https://github.com/Tencent/TurboTransformers --recursive
  1. Build docker images and containers on your machine.
# You can modify the environment variables in the script to specify the CUDA version and operating system version.
sh tools/build_docker_gpu.sh $PWD
nvidia-docker run --gpus all --net=host --rm -it -v $PWD:/workspace -v /etc/passwd:/etc/passwd --name=your_container_name REPOSITORY:TAG
# for example: nvidia-docker run --gpus all --net=host --rm -it -v $PWD:/workspace -v /etc/passwd:/etc/passwd --name=turbo_gpu_env thufeifeibear:0.1.1-cuda9.0-ubuntu16.04-gpu-dev
  2. Install the pip package in docker and run the unit tests.
cd /workspace
sh tools/build_and_run_unittests.sh $PWD -DWITH_GPU=ON
  3. Run the benchmark (optional) in the docker container; compare with pytorch.
cd benchmark
bash gpu_run_benchmark.sh

We also provide a docker image containing the GPU version of TurboTransformers.

docker pull thufeifeibear/turbo_transformers_gpu:latest

Using Tensor Core (FP16)

Tensor Cores can accelerate computation on the GPU. They are disabled by default in TurboTransformers. To enable them, set the option WITH_TENSOR_CORE to ON in CMakeLists.txt before compiling:

option(WITH_TENSOR_CORE     "Use Tensor core to accelerate"     ON)

Usage

TurboTransformers provides C++ and Python API interfaces. We aim to adapt to a variety of online environments and reduce the development effort for users.

Pretrained Model Loading

The first step in using Turbo is loading a pretrained model. We provide a way to load PyTorch and TensorFlow pretrained models from huggingface/transformers. Use the corresponding script in ./tools to convert the pretrained model into an npz-format file; Turbo then loads the npz model through its C++ or Python interface. In particular, since most pretrained models are in PyTorch format and used from Python, we provide a shortcut for loading a PyTorch saved model directly from Python.
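As a rough illustration of the npz interchange format described above, a dictionary of parameter arrays can be round-tripped with NumPy as follows. The parameter key names here are hypothetical; the real keys are produced by the conversion scripts in ./tools.

```python
import os
import tempfile
import numpy as np

# Hypothetical parameter names: the real keys come from the ./tools scripts.
params = {
    "embeddings.word_embeddings.weight": np.zeros((100, 8), dtype=np.float32),
    "encoder.layer.0.attention.self.query.weight": np.ones((8, 8), dtype=np.float32),
}

path = os.path.join(tempfile.mkdtemp(), "bert_params.npz")
np.savez(path, **params)   # write the npz archive, one array per key
loaded = np.load(path)     # what an npz-based loader would read back
assert set(loaded.files) == set(params)
print(loaded["encoder.layer.0.attention.self.query.weight"].shape)  # (8, 8)
```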


APIs

python APIs

Refer to examples of the supported models in ./example/python. TurboNLP/Translate-Demo shows a demo of applying TurboTransformers to a translation task. Since BERT acceleration usually requires task-specific post-processing, we also provide an example of how to write a sequence-classification application.

C++ APIs

Refer to ./example/cpp for an example. It covers GPU usage and two CPU multi-thread calling patterns: running one BERT inference with multiple threads, or running multiple BERT inferences, each with one thread. You can link turbo-transformers into your code through add_subdirectory.

Smart Batching (Minimize Zero-Padding Overhead in Batching)

Usually, when feeding a batch of requests of different lengths into a BERT model for inference, zero-padding is required to make all requests the same length. For example, to serve requests of lengths (100, 10, 50), a preprocessing stage pads them all to length 100; 90% and 50% of the computation for the last two sequences is then wasted. As indicated in Effective Transformer, padding the input tensors is not necessary: you only have to pad the batch-GEMM operations inside the multi-headed attention, which account for a small proportion of the entire BERT computation, so most GEMM operations run without zero-padding. Turbo provides a model, BertModelSmartBatch, that includes this smart batching technique. An example is presented in ./example/python/bert_smart_pad.py.
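The padding overhead in the example above can be quantified with a short calculation. This is plain Python, and it assumes compute cost is proportional to sequence length, which ignores attention's quadratic term:

```python
def padding_waste(lengths):
    """Fraction of per-sequence compute wasted when padding every request
    in the batch to the maximum length, assuming cost proportional to
    sequence length (ignores attention's quadratic term)."""
    max_len = max(lengths)
    return [1 - n / max_len for n in lengths]

# The (100, 10, 50) batch from the text: the shorter two sequences
# waste 90% and 50% of their padded computation.
print(padding_waste([100, 10, 50]))  # [0.0, 0.9, 0.5]
```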

How to contribute new models

How to know hotspots of your code?

How to add a new layer?

TODO

In the near future (as of June 2020), we will add support for low-precision models (CPU int8, GPU FP16). Looking forward to your contributions!

License

BSD 3-Clause License

Known Issues

  1. The results of TurboTransformers may differ from PyTorch's beyond the second decimal place. The difference mainly comes from the BERT output layer: we use an approximate GELU algorithm, which may differ from PyTorch's.
  2. Turbo and PyTorch share the same MKL. The MKL shipped with PyTorch 1.5.0 may be slow in Turbo; the reason has yet to be determined. Downgrading PyTorch to 1.1.0 will improve Turbo's performance.
  3. onnxruntime-cpu==1.4.0 and onnxruntime-gpu==1.3.0 cannot work simultaneously.
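The GELU discrepancy in item 1 can be seen directly by comparing the exact (erf-based) GELU with the widely used tanh approximation. This is a sketch in plain Python; whether Turbo uses exactly this tanh variant is an assumption.

```python
import math

def gelu_exact(x):
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Common tanh approximation; whether Turbo uses exactly this
    # variant is an assumption.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

# Scan [-5, 5]; the two curves agree closely but not exactly, which is
# enough to shift results past the second decimal place downstream.
xs = [i / 10.0 for i in range(-50, 51)]
max_diff = max(abs(gelu_exact(x) - gelu_tanh(x)) for x in xs)
print(f"max |exact - tanh| on [-5, 5]: {max_diff:.6f}")
```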

History

  1. January 2021, v0.6.0: TurboTransformers supports smart batching.
  2. July 2020, v0.4.0: TurboTransformers adopted onnxruntime as its CPU backend, added GPT2 support, and added a quantized BERT.
  3. July 2020, v0.3.1: TurboTransformers added support for ALBERT and RoBERTa on CPU/GPU.
  4. June 2020, v0.3.0: TurboTransformers added support for the Transformer decoder on CPU/GPU.
  5. June 2020, v0.2.1: TurboTransformers added BLIS as a BLAS provider option, with better performance on AMD CPUs.
  6. April 2020, v0.0.1: TurboTransformers released, achieving state-of-the-art BERT inference speed on CPU/GPU.

Cite us

Cite the following paper if you use TurboTransformers in your research publications.

@inproceedings{fang2021turbotransformers,
  title={TurboTransformers: an efficient GPU serving system for transformer models},
  author={Fang, Jiarui and Yu, Yang and Zhao, Chengduo and Zhou, Jie},
  booktitle={Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming},
  pages={389--402},
  year={2021}
}

The artifacts of the paper can be found at branch ppopp21_artifact_centos.

Contact us

Although we recommend posting your problem via GitHub issues, you can also join our Turbo user group.

  1. Scan this QR code and add our contact as a WeChat friend.
  2. QQ group, name: TurboTransformers, number: 1109315167.
