
⏱ Benchmarks of machine learning inference for Go

Go Machine Learning Benchmarks

Given raw data in a Go service, how quickly can we get machine learning inference on it?

Typically, a Go service deals with structured, single-sample data. Thus, we focus on tabular machine learning models only, such as the popular XGBoost. It is common to run a Go service as a backend on Linux, so we do not consider other deployment options. In the work below, we compare typical implementations of this inference task.


host: AWS EC2 t2.xlarge shared
os: Ubuntu 20.04 LTS 
goos: linux
goarch: amd64
cpu: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
BenchmarkXGB_Go_GoFeatureProcessing_GoLeaves_noalloc                              491 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_GoLeaves                                      575 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_UDS_RawBytes_Python_XGB                    243056 ns/op
BenchmarkXGB_CGo_GoFeatureProcessing_XGB                                       244941 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_UDS_gRPC_CPP_XGB                           367433 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_UDS_gRPC_Python_XGB                        785147 ns/op
BenchmarkXGB_Go_UDS_gRPC_Python_sklearn_XGB                                  21699830 ns/op
BenchmarkXGB_Go_HTTP_JSON_Python_Gunicorn_Flask_sklearn_XGB                  21935237 ns/op

Abbreviations and Frameworks

Dataset and Model

We use the classic Titanic dataset. It contains numerical and categorical features, which makes it representative of a typical case. The data and the notebooks to train the model and preprocessor are available in /data and /notebooks.

Some numbers for reference

How fast do you need to be?

                   200ps - 4.6GHz single cycle time
                1ns      - L1 cache latency
               10ns      - L2/L3 cache SRAM latency
               20ns      - DDR4 CAS, first byte from memory latency
               20ns      - C++ raw hardcoded structs access
               80ns      - C++ FlatBuffers decode/traverse/dealloc
              150ns      - PCIe bus latency
              171ns      - cgo call boundary, 2015
              200ns      - HFT FPGA
              475ns      - 2020 MLPerf winner recommendation inference time per sample
 ---------->  500ns      - go-featureprocessing + leaves
              800ns      - Go Protocol Buffers Marshal
              837ns      - Go json-iterator/go json unmarshal
           1µs           - Go protocol buffers unmarshal
           3µs           - Go JSON Marshal
           7µs           - Go JSON Unmarshal
          10µs           - PCIe/NVLink startup time
          17µs           - Python JSON encode/decode times
          30µs           - UNIX domain socket; eventfd; fifo pipes
         100µs           - Redis intrinsic latency; KDB+; HFT direct market access
         200µs           - 1GB/s network air latency; Go garbage collector pauses interval 2018
         230µs           - San Francisco to San Jose at speed of light
         500µs           - NGINX/Kong added latency
     10ms                - AWS DynamoDB; WIFI6 "air" latency
      15ms                - AWS Sagemaker latency; "Flash Boys" USD 300 million HFT drama
     30ms                - 5G "air" latency
     36ms                - San Francisco to Hong-Kong at speed of light
    100ms                - typical roundtrip from mobile to backend
    200ms                - AWS RDS MySQL/PostgreSQL; AWS Aurora
 10s                     - AWS Cloudfront 1MB transfer time

Profiling and Analysis

[491ns/575ns] Leaves — most of the time is spent in the leaves Random Forest code. The leaves code does not malloc. In-place preprocessing does not malloc either; in the non-in-place version a malloc happens and takes half of the preprocessing time.

[243µs] UDS raw bytes to Python — Python takes much longer than the preprocessing in Go, though the Go part is at least visible on the chart. We also note that Python spends most of its time in a libgomp.so call; this is the GNU OpenMP library, written in C, which performs parallel operations.


[244µs] CGo version — similarly, a call to libgomp.so is made. Relative to the rest of the CGo code it is much smaller than in the Python version above, yet the overall results are not better. Likely this is due to the overhead of crossing the Go-to-C boundary. We also note that a malloc occurs.


[367µs] gRPC over UDS to C++ — the Go code takes around 50% as much time as the C++ side. In C++, 50% of the time is spent in gRPC code. Lastly, C++ also uses libgomp.so. We cannot see it on this chart, but the Go code likely also spends considerable time in gRPC code.


[785µs] gRPC over UDS to Python without sklearn — the Go code is visible in the chart. Python spends only a portion of its time in libgomp.so.


[21ms] gRPC over UDS to Python with sklearn — the Go code (main.test) is no longer visible in the chart. Python spends only a small fraction of its time in libgomp.so.


[22ms] REST service version with sklearn — similarly, the Go code (main.test) is no longer visible in the chart. Python spends more time in libgomp.so than in the Python + gRPC + sklearn version, though it is not clear why the results are worse.


Future work

  • go-featureprocessing - gRPC - FlatBuffers - C++ - XGB
  • batch mode
  • UDS - gRPC - C++ - ONNX (sklearn + XGBoost)
  • UDS - gRPC - Python - ONNX (sklearn + XGBoost)
  • cgo ONNX (sklearn + XGBoost) (examples: 1)
  • native Go ONNX (sklearn + XGBoost) — no official support, https://github.com/owulveryck/onnx-go is not complete
  • text
  • images
  • videos

Reference
