• Stars: 146
• Rank: 251,250 (Top 5%)
• Language: Rust
• License: MIT License
• Created: over 5 years ago
• Updated: over 3 years ago


Repository Details

Orkhon: ML Inference Framework and Server Runtime




What is it?

Orkhon is a Rust framework for machine learning that runs inference/prediction code written in Python, executes frozen models, and processes unseen data. It is mainly focused on serving models and processing unseen data in a performant manner. Instead of using Python directly, with the server scalability problems that brings, this framework addresses them with a built-in async API.

Main features

  • Sync & async API for models.
  • Easily embeddable engine for well-known Rust web frameworks.
  • API contract for interacting with Python code.
  • High processing throughput:
    • ~4.8361 GiB/s prediction throughput
    • 3,000 concurrent requests take ~4 ms on average
  • Python module caching.

Installation

You can include Orkhon in your project by adding this to your Cargo.toml:

[dependencies]
orkhon = "0.2"

Dependencies

You will need:

  • If you use the pymodel feature, the Python development dependencies must be installed and a working Python runtime must be available to your project.
  • If you want TensorFlow inference, the TensorFlow library must be installed on your system so it can be linked against.
  • The ONNX interface doesn't need extra dependencies from the system side.
  • Point your PYTHONHOME environment variable to your Python installation.
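Putting the dependency notes above together, a Cargo.toml entry that opts into specific backends might look like the following sketch. The pymodel and onnxmodel feature names come from this document; any other feature name (e.g. one for TensorFlow) is an assumption, so check the crate's manifest on crates.io for the exact list:

```toml
[dependencies]
# Enable only the backends you need; each pulls in its own system
# dependencies as described above. Feature names other than
# "pymodel" and "onnxmodel" are assumptions.
orkhon = { version = "0.2", features = ["onnxmodel"] }
```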

Python API contract

For the Python API contract, take a look at the Project Documentation.

Examples

Request a Tensorflow prediction asynchronously

use orkhon::prelude::*;
use orkhon::tcore::prelude::*;
use orkhon::ttensor::prelude::*;
use rand::*;
use std::path::PathBuf;

let o = Orkhon::new()
    .config(
        OrkhonConfig::new()
            .with_input_fact_shape(InferenceFact::dt_shape(f32::datum_type(), tvec![10, 100])),
    )
    .tensorflow(
        "model_which_will_be_tested",
        PathBuf::from("tests/protobuf/manual_input_infer/my_model.pb"),
    )
    .shareable();

let mut rng = thread_rng();
let vals: Vec<_> = (0..1000).map(|_| rng.gen::<f32>()).collect();
let input = tract_ndarray::arr1(&vals).into_shape((10, 100)).unwrap();

let o = o.get();
let handle = async move {
    let processor = o.tensorflow_request_async(
        "model_which_will_be_tested",
        ORequest::with_body(TFRequest::new().body(input.into())),
    );
    processor.await
};
let resp = block_on(handle).unwrap();

Request an ONNX prediction synchronously

This example needs the onnxmodel feature enabled.

use orkhon::prelude::*;
use orkhon::tcore::prelude::*;
use orkhon::ttensor::prelude::*;
use rand::*;
use std::path::PathBuf;

let o = Orkhon::new()
    .config(
        OrkhonConfig::new()
            .with_input_fact_shape(InferenceFact::dt_shape(f32::datum_type(), tvec![10, 100])),
    )
    .onnx(
        "model_which_will_be_tested",
        PathBuf::from("tests/protobuf/onnx_model/example.onnx"),
    )
    .build();

let mut rng = thread_rng();
let vals: Vec<_> = (0..1000).map(|_| rng.gen::<f32>()).collect();
let input = tract_ndarray::arr1(&vals).into_shape((10, 100)).unwrap();

let resp = o
    .onnx_request(
        "model_which_will_be_tested",
        ORequest::with_body(ONNXRequest::new().body(input.into())),
    )
    .unwrap();
assert_eq!(resp.body.output.len(), 1);

License

Orkhon is licensed under the MIT License.

Documentation

Official documentation is hosted on docs.rs.

Getting Help

Please head to our Gitter channel or ask on StackOverflow.

Discussion and Development

We use Gitter for development discussions. Also, please don't hesitate to open issues on GitHub to ask for features, report bugs, comment on the design, and more! More interaction and more ideas are better!

Contributing to Orkhon

All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.

A detailed overview on how to contribute can be found in the CONTRIBUTING guide on GitHub.
