
⏱ Benchmarks of machine learning inference for Go

Go Machine Learning Benchmarks

Given raw data in a Go service, how quickly can we get machine learning inference for it?

Typically, a Go service deals with structured, single-sample data. Thus we focus only on tabular machine learning models, such as the popular XGBoost. Go services commonly run as backend processes on Linux, so we do not consider other deployment options. In the work below, we compare typical implementations of how this inference task can be performed.

(architecture diagram)

host: AWS EC2 t2.xlarge shared
os: Ubuntu 20.04 LTS 
goos: linux
goarch: amd64
cpu: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
BenchmarkXGB_Go_GoFeatureProcessing_GoLeaves_noalloc                              491 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_GoLeaves                                      575 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_UDS_RawBytes_Python_XGB                    243056 ns/op
BenchmarkXGB_CGo_GoFeatureProcessing_XGB                                       244941 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_UDS_gRPC_CPP_XGB                           367433 ns/op
BenchmarkXGB_Go_GoFeatureProcessing_UDS_gRPC_Python_XGB                        785147 ns/op
BenchmarkXGB_Go_UDS_gRPC_Python_sklearn_XGB                                  21699830 ns/op
BenchmarkXGB_Go_HTTP_JSON_Python_Gunicorn_Flask_sklearn_XGB                  21935237 ns/op

Abbreviations and Frameworks

Dataset and Model

We are using the classic Titanic dataset. It contains numerical and categorical features, which makes it representative of the typical case. The data and the notebooks used to train the model and the preprocessor are available in /data and /notebooks.

Some numbers for reference

How fast do you need to get?

                   200ps - 4.6GHz single cycle time
                1ns      - L1 cache latency
               10ns      - L2/L3 cache SRAM latency
               20ns      - DDR4 CAS, first byte from memory latency
               20ns      - C++ raw hardcoded structs access
               80ns      - C++ FlatBuffers decode/traverse/dealloc
              150ns      - PCIe bus latency
              171ns      - cgo call boundary, 2015
              200ns      - HFT FPGA
              475ns      - 2020 MLPerf winner recommendation inference time per sample
 ---------->  500ns      - go-featureprocessing + leaves
              800ns      - Go Protocol Buffers Marshal
              837ns      - Go json-iterator/go json unmarshal
           1µs           - Go protocol buffers unmarshal
           3µs           - Go JSON Marshal
           7µs           - Go JSON Unmarshal
          10µs           - PCIe/NVLink startup time
          17µs           - Python JSON encode/decode times
          30µs           - UNIX domain socket; eventfd; fifo pipes
         100µs           - Redis intrinsic latency; KDB+; HFT direct market access
         200µs           - 1GB/s network air latency; Go garbage collector pauses interval 2018
         230µs           - San Francisco to San Jose at speed of light
         500µs           - NGINX/Kong added latency
     10ms                - AWS DynamoDB; WIFI6 "air" latency
     15ms                - AWS Sagemaker latency; "Flash Boys" 300 million USD HFT drama
     30ms                - 5G "air" latency
     36ms                - San Francisco to Hong-Kong at speed of light
    100ms                - typical roundtrip from mobile to backend
    200ms                - AWS RDS MySQL/PostgreSQL; AWS Aurora
 10s                     - AWS Cloudfront 1MB transfer time

Profiling and Analysis

[491ns/575ns] Leaves — most of the time is spent in the leaves random forest code. The leaves code does not allocate. In-place preprocessing does not allocate either; in the non-in-place version, a malloc happens and takes half of the preprocessing time.

[243µs] UDS raw bytes Python — Python takes much longer than the preprocessing in Go; however, the Go code is at least visible on the chart. We also note that Python spends most of its time in a libgomp.so call; this is the GNU OpenMP library, written in C, which performs parallel operations.


[244µs] CGo version — similarly, we see a call into libgomp.so. It is much smaller relative to the rest of the CGo code than in the Python version above, yet overall the results are not better. Likely this is due to overhead at the Go-to-CGo boundary. We also note that a malloc is done.


[367µs] gRPC over UDS to C++ — the Go code takes around 50% of the time of the C++ side. In C++, 50% of the time is spent in gRPC code. The C++ side also uses libgomp.so. It is not visible on this chart, but the Go code likely also spends considerable time in gRPC code.


[785µs] gRPC over UDS to Python without sklearn — the Go code is visible on the chart. Python spends only a portion of its time in libgomp.so.


[21ms] gRPC over UDS to Python with sklearn — the Go code (main.test) is no longer visible on the chart. Python spends only a small fraction of its time in libgomp.so.


[22ms] REST service version with sklearn — similarly, the Go code (main.test) is no longer visible on the chart. Python spends more time in libgomp.so than in the Python + gRPC + sklearn version; however, it is not clear why the results are worse.

cgo

Future work

  • go-featureprocessing - gRPC + FlatBuffers - C++ - XGB
  • batch mode
  • UDS - gRPC - C++ - ONNX (sklearn + XGBoost)
  • UDS - gRPC - Python - ONNX (sklearn + XGBoost)
  • cgo ONNX (sklearn + XGBoost) (examples: 1)
  • native Go ONNX (sklearn + XGBoost) — no official support, https://github.com/owulveryck/onnx-go is not complete
  • text
  • images
  • videos

