• Stars: 936
• Rank: 46,828 (Top 1.0%)
• Language: Java
• License: Apache License 2.0
• Created: over 6 years ago
• Updated: 10 months ago

Repository Details

Multi Model Server is a tool for serving neural net models for inference

Multi Model Server

Multi Model Server (MMS) is a flexible and easy-to-use tool for serving deep learning models trained using any ML/DL framework.

Use the MMS Server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
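For example, a Docker-based start might look like the following sketch. The image name and port mappings here are assumptions based on the project's Docker setup; check the Docker readme for the exact image tags and configuration.

# Run MMS in a container and serve the SqueezeNet example model
# (image name assumed; confirm it in the Docker readme)
docker run -itd --name mms -p 8080:8080 -p 8081:8081 \
    awsdeeplearningteam/multi-model-server \
    multi-model-server --start \
    --models squeezenet=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar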

A quick overview and examples for both serving and packaging are provided below. Detailed documentation and examples are provided in the docs folder.

Join our Slack channel to get in touch with the development team, ask questions, find out what's cooking, and more!

Quick Start

Prerequisites

Before proceeding further with this document, make sure you have the following prerequisites.

  1. Ubuntu, CentOS, or macOS. Windows support is experimental. The following instructions will focus on Linux and macOS only.

  2. Python - Multi Model Server requires Python to run the workers.

  3. pip - pip is the package management system for Python.

  4. Java 8 - Multi Model Server requires Java 8 to start. You have the following options for installing Java 8 (a quick version check is shown after this list):

    For Ubuntu:

    sudo apt-get install openjdk-8-jre-headless

    For CentOS:

    sudo yum install java-1.8.0-openjdk

    For macOS:

    brew tap homebrew/cask-versions
    brew update
    brew cask install adoptopenjdk8
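
After installing, you can confirm that a Java 8 runtime is on your PATH; the exact output varies by distribution, but the reported version should start with 1.8:

# Check the installed Java version (should report 1.8.x for Java 8)
java -version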

Installing Multi Model Server with pip

Setup

Step 1: Setup a Virtual Environment

We recommend installing and running Multi Model Server in a virtual environment. Installing all of the Python dependencies in a virtual environment isolates them from the rest of your system and makes dependency management easier.

One option is Virtualenv, which creates isolated Python environments. You can install it and set up a virtualenv for Python 2.7 as follows:

pip install virtualenv

Then create a virtual environment:

# Assuming we want to run python2.7 in /usr/local/bin/python2.7
virtualenv -p /usr/local/bin/python2.7 /tmp/pyenv2
# Enter this virtual environment as follows
source /tmp/pyenv2/bin/activate

Refer to the Virtualenv documentation for further information.
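
If you prefer Python 3, a roughly equivalent setup using the built-in venv module might look like this (assuming python3 is available on your PATH):

# Create a Python 3 virtual environment in /tmp/pyenv3
python3 -m venv /tmp/pyenv3
# Enter this virtual environment as follows
source /tmp/pyenv3/bin/activate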

Step 2: Install MXNet

MMS won't install the MXNet engine by default. If it isn't already installed in your virtual environment, you must install one of the MXNet pip packages.

For CPU inference, mxnet-mkl is recommended. Install it as follows:

# Recommended for running Multi Model Server on CPU hosts
pip install mxnet-mkl

For GPU inference, mxnet-cu92mkl is recommended. Install it as follows:

# Recommended for running Multi Model Server on GPU hosts
pip install mxnet-cu92mkl
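
Either way, you can quickly confirm that MXNet is importable from the active environment:

# Print the installed MXNet version to verify the installation
python -c "import mxnet; print(mxnet.__version__)"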

Step 3: Install or Upgrade MMS as follows:

# Install latest released version of multi-model-server 
pip install multi-model-server

To upgrade from a previous version of multi-model-server, please refer to the migration reference document.
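
If no migration steps apply to your setup, a plain pip upgrade looks like this:

# Upgrade an existing multi-model-server installation in place
pip install --upgrade multi-model-server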

Notes:

  • A minimal version of model-archiver will be installed with MMS as a dependency. See model-archiver for more options and details.
  • See the advanced installation page for more options and troubleshooting.

Serve a Model

Once installed, you can get the MMS model server up and running very quickly. Try out --help to see all the available CLI options.

multi-model-server --help

For this quick start, we'll skip over most of the features, but be sure to take a look at the full server docs when you're ready.

Here is an easy example for serving an object classification model:

multi-model-server --start --models squeezenet=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar

With the command above executed, MMS is running on your host and listening for inference requests. Note that if you specify model(s) at startup, MMS automatically scales backend workers to match the number of available vCPUs (on a CPU instance) or available GPUs (on a GPU instance). On powerful hosts with many compute resources (vCPUs or GPUs), this startup and autoscaling process can take considerable time. To minimize startup time, you can skip registering and scaling models at startup and do it later through the corresponding Management API calls, which also gives you finer-grained control over how many resources are allocated to any particular model (see the sketch below).
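
As a sketch of that deferred workflow, assuming the Management API is listening on its default port 8081 (adjust if you have changed the configuration), registering and scaling a model after startup might look like this:

# Start MMS without registering any models
multi-model-server --start
# Register the SqueezeNet model through the Management API
curl -X POST "http://127.0.0.1:8081/models?model_name=squeezenet&url=https://s3.amazonaws.com/model-server/model_archive_1.0/squeezenet_v1.1.mar"
# Scale the model up to two backend workers
curl -X PUT "http://127.0.0.1:8081/models/squeezenet?min_worker=2"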

To test it out, open a new terminal window next to the one running MMS. Use curl to download one of these cute pictures of a kitten; curl's -O flag saves it under its remote file name, kitten.jpg. Then POST the kitten image to the MMS predict endpoint with curl.

In the example below, we provide a shortcut for these steps.

curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl -X POST http://127.0.0.1:8080/predictions/squeezenet -T kitten.jpg

The predict endpoint will return a prediction response in JSON. It will look something like the following result:

[
  {
    "probability": 0.8582232594490051,
    "class": "n02124075 Egyptian cat"
  },
  {
    "probability": 0.09159987419843674,
    "class": "n02123045 tabby, tabby cat"
  },
  {
    "probability": 0.0374876894056797,
    "class": "n02123159 tiger cat"
  },
  {
    "probability": 0.006165083032101393,
    "class": "n02128385 leopard, Panthera pardus"
  },
  {
    "probability": 0.0031716004014015198,
    "class": "n02127052 lynx, catamount"
  }
]

You will see this result in the response to your curl call to the predict endpoint, and in the server logs in the terminal window running MMS. It is also logged locally, along with metrics.

Other models can be downloaded from the model zoo, so try out some of those as well.

Now you've seen how easy it can be to serve a deep learning model with MMS! Would you like to know more?

Stopping the running model server

To stop the currently running model server instance, run the following command:

$ multi-model-server --stop

You will see output indicating that multi-model-server has stopped.

Create a Model Archive

MMS enables you to package up all of your model artifacts into a single model archive. This makes it easy to share and deploy your models. To package a model, check out the model archiver documentation.
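
As a rough illustration (the model name, paths, and handler below are hypothetical; see the model archiver documentation for the authoritative flags), packaging a model typically looks like this:

# Package the artifacts in /tmp/my-model into my-model.mar (paths are hypothetical)
model-archiver --model-name my-model \
    --model-path /tmp/my-model \
    --handler my_service:handle \
    --export-path /tmp/model-store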

Recommended production deployments

  • MMS doesn't provide authentication. You have to run your own authentication proxy in front of MMS.
  • MMS doesn't provide throttling and is vulnerable to DDoS attacks. It is recommended to run MMS behind a firewall.
  • MMS only allows localhost access by default; see Network configuration for details.
  • SSL is not enabled by default; see Enable SSL for details.
  • MMS uses a config.properties file to configure its behavior; see the Manage MMS page for details on how to configure MMS (a small example follows this list).
  • For better security, we recommend running MMS inside a Docker container. This project includes Dockerfiles to build containers recommended for production deployments. These containers demonstrate how to customize your own production MMS deployment. Basic usage can be found in the Docker readme.
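
For illustration, a minimal config.properties that changes the API bindings might look like the following; the property names here are assumptions drawn from the configuration documentation, so confirm them on the Manage MMS page:

# Example config.properties: expose the inference API beyond localhost,
# keep the management API local
inference_address=http://0.0.0.0:8080
management_address=http://127.0.0.1:8081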

Other Features

Browse over to the Docs readme for the full index of documentation. This includes more examples, how to customize the API service, API endpoint details, and more.

External demos powered by MMS

Here are some example demos of deep learning applications, powered by MMS:

  • Product Review Classification
  • Visual Search
  • Facial Emotion Recognition
  • Neural Style Transfer

Contributing

We welcome all contributions!

To file a bug or request a feature, please file a GitHub issue. Pull requests are welcome.

More Repositories

1. git-secrets (Shell, 11,616 stars): Prevents you from committing secrets and credentials into git repositories
2. llrt (JavaScript, 7,555 stars): LLRT (Low Latency Runtime) is an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications.
3. aws-shell (Python, 7,116 stars): An integrated shell for working with the AWS CLI.
4. autogluon (Python, 4,348 stars): AutoGluon: AutoML for Image, Text, and Tabular Data
5. aws-cloudformation-templates (Python, 4,302 stars): A collection of useful CloudFormation templates
6. mountpoint-s3 (Rust, 3,986 stars): A simple, high-throughput file client for mounting an Amazon S3 bucket as a local file system.
7. gluonts (Python, 3,686 stars): Probabilistic time series modeling in Python
8. deequ (Scala, 2,871 stars): Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
9. aws-lambda-rust-runtime (Rust, 2,829 stars): A Rust runtime for AWS Lambda
10. aws-sdk-rust (2,754 stars): AWS SDK for the Rust Programming Language
11. amazon-redshift-utils (Python, 2,643 stars): Amazon Redshift Utils contains utilities, scripts and view which are useful in a Redshift environment
12. diagram-maker (TypeScript, 2,359 stars): A library to display an interactive editor for any graph-like data.
13. amazon-ecr-credential-helper (Go, 2,261 stars): Automatically gets credentials for Amazon ECR on docker push/docker pull
14. amazon-eks-ami (Shell, 2,164 stars): Packer configuration for building a custom EKS AMI
15. aws-lambda-powertools-python (Python, 2,148 stars): A developer toolkit to implement Serverless best practices and increase developer velocity.
16. aws-well-architected-labs (Python, 1,834 stars): Hands on labs and code to help you learn, measure, and build using architectural best practices.
17. aws-config-rules (Python, 1,473 stars): [Node, Python, Java] Repository of sample Custom Rules for AWS Config.
18. smithy (Java, 1,356 stars): Smithy is a protocol-agnostic interface definition language and set of tools for generating clients, servers, and documentation for any programming language.
19. aws-support-tools (Python, 1,290 stars): Tools and sample code provided by AWS Premium Support.
20. open-data-registry (Python, 1,199 stars): A registry of publicly available datasets on AWS
21. sockeye (Python, 1,181 stars): Sequence-to-sequence framework with a focus on Neural Machine Translation based on PyTorch
22. aws-lambda-powertools-typescript (TypeScript, 1,179 stars): Powertools is a developer toolkit to implement Serverless best practices and increase developer velocity.
23. dgl-ke (Python, 1,144 stars): High performance, easy-to-use, and scalable package for learning large-scale knowledge graph embeddings.
24. aws-sdk-ios-samples (Swift, 1,038 stars): This repository has samples that demonstrate various aspects of the AWS SDK for iOS; you can get the SDK source on GitHub at https://github.com/aws-amplify/aws-sdk-ios/
25. aws-sdk-android-samples (Java, 1,018 stars): This repository has samples that demonstrate various aspects of the AWS SDK for Android; you can get the SDK source on GitHub at https://github.com/aws-amplify/aws-sdk-android/
26. aws-solutions-constructs (TypeScript, 1,013 stars): The AWS Solutions Constructs Library is an open-source extension of the AWS Cloud Development Kit (AWS CDK) that provides multi-service, well-architected patterns for quickly defining solutions
27. aws-cfn-template-flip (Python, 981 stars): Tool for converting AWS CloudFormation templates between JSON and YAML formats.
28. amazon-kinesis-video-streams-webrtc-sdk-c (C, 975 stars): Amazon Kinesis Video Streams Webrtc SDK is for developers to install and customize realtime communication between devices and enable secure streaming of video, audio to Kinesis Video Streams.
29. aws-lambda-go-api-proxy (Go, 967 stars): lambda-go-api-proxy makes it easy to port APIs written with Go frameworks such as Gin (https://gin-gonic.github.io/gin/) to AWS Lambda and Amazon API Gateway.
30. eks-node-viewer (Go, 947 stars): EKS Node Viewer
31. ec2-spot-labs (Jupyter Notebook, 905 stars): Collection of tools and code examples to demonstrate best practices in using Amazon EC2 Spot Instances.
32. aws-mobile-appsync-sdk-js (TypeScript, 902 stars): JavaScript library files for Offline, Sync, Sigv4; includes support for React Native
33. aws-saas-boost (Java, 901 stars): AWS SaaS Boost is a ready-to-use toolset that removes the complexity of successfully running SaaS workloads in the AWS cloud.
34. fargatecli (Go, 891 stars): CLI for AWS Fargate
35. aws-api-gateway-developer-portal (JavaScript, 879 stars): A Serverless Developer Portal for easily publishing and cataloging APIs
36. ecs-refarch-continuous-deployment (Shell, 842 stars): ECS Reference Architecture for creating a flexible and scalable deployment pipeline to Amazon ECS using AWS CodePipeline
37. fortuna (Python, 836 stars): A Library for Uncertainty Quantification.
38. dynamodb-data-mapper-js (TypeScript, 818 stars): A schema-based data mapper for Amazon DynamoDB.
39. goformation (Go, 812 stars): GoFormation is a Go library for working with CloudFormation templates.
40. flowgger (Rust, 796 stars): A fast data collector in Rust
41. aws-js-s3-explorer (HTML, 771 stars): AWS JavaScript S3 Explorer is a JavaScript application that uses AWS's JavaScript SDK and S3 APIs to make the contents of an S3 bucket easy to browse via a web browser.
42. aws-icons-for-plantuml (Python, 737 stars): PlantUML sprites, macros, and other includes for Amazon Web Services services and resources
43. aws-devops-essential (Shell, 674 stars): In few hours, quickly learn how to effectively leverage various AWS services to improve developer productivity and reduce the overall time to market for new product capabilities.
44. aws-apigateway-lambda-authorizer-blueprints (C#, 660 stars): Blueprints and examples for Lambda-based custom Authorizers for use in API Gateway.
45. amazon-ecs-nodejs-microservices (Shell, 650 stars): Reference architecture that shows how to take a Node.js application, containerize it, and deploy it as microservices on Amazon Elastic Container Service.
46. amazon-kinesis-client (Java, 621 stars): Client library for Amazon Kinesis
47. aws-deployment-framework (Python, 617 stars): The AWS Deployment Framework (ADF) is an extensive and flexible framework to manage and deploy resources across multiple AWS accounts and regions based on AWS Organizations.
48. aws-lambda-web-adapter (Rust, 610 stars): Run web applications on AWS Lambda
49. dgl-lifesci (Python, 594 stars): Python package for graph neural networks in chemistry and biology
50. aws-security-automation (Python, 585 stars): Collection of scripts and resources for DevSecOps and Automated Incident Response Security
51. aws-glue-libs (Python, 565 stars): AWS Glue Libraries are additions and enhancements to Spark for ETL operations.
52. python-deequ (Python, 535 stars): Python API for Deequ
53. aws-athena-query-federation (Java, 507 stars): The Amazon Athena Query Federation SDK allows you to customize Amazon Athena with your own data sources and code.
54. data-on-eks (HCL, 469 stars): DoEKS is a tool to build, deploy and scale Data Platforms on Amazon EKS
55. shuttle (Rust, 465 stars): Shuttle is a library for testing concurrent Rust code
56. ami-builder-packer (465 stars): An example of an AMI Builder using CI/CD with AWS CodePipeline, AWS CodeBuild, Hashicorp Packer and Ansible.
57. route53-dynamic-dns-with-lambda (Python, 461 stars): A Dynamic DNS system built with API Gateway, Lambda & Route 53.
58. aws-servicebroker (Python, 461 stars): AWS Service Broker
59. amazon-ecs-local-container-endpoints (Go, 456 stars): A container that provides local versions of the ECS Task Metadata Endpoint and ECS Task IAM Roles Endpoint.
60. datawig (JavaScript, 454 stars): Imputation of missing values in tables.
61. aws-jwt-verify (TypeScript, 452 stars): JS library for verifying JWTs signed by Amazon Cognito, and any OIDC-compatible IDP that signs JWTs with RS256, RS384, and RS512
62. amazon-dynamodb-lock-client (Java, 447 stars): The AmazonDynamoDBLockClient is a general purpose distributed locking library built on top of DynamoDB. It supports both coarse-grained and fine-grained locking.
63. ecs-refarch-service-discovery (Go, 444 stars): An EC2 Container Service Reference Architecture for providing Service Discovery to containers using CloudWatch Events, Lambda and Route 53 private hosted zones.
64. ssosync (Go, 443 stars): Populate AWS SSO directly with your G Suite users and groups using either a CLI or AWS Lambda
65. handwritten-text-recognition-for-apache-mxnet (Jupyter Notebook, 442 stars): This repository lets you train neural networks models for performing end-to-end full-page handwriting recognition using the Apache MXNet deep learning frameworks on the IAM Dataset.
66. awscli-aliases (437 stars): Repository for AWS CLI aliases.
67. aws-config-rdk (Python, 436 stars): The AWS Config Rules Development Kit helps developers set up, author and test custom Config rules. It contains scripts to enable AWS Config, create a Config rule and test it with sample ConfigurationItems.
68. snapchange (Rust, 427 stars): Lightweight fuzzing of a memory snapshot using KVM
69. aws-security-assessment-solution (423 stars): An AWS tool to help you create a point in time assessment of your AWS account using Prowler and Scout as well as optional AWS developed ransomware checks.
70. lambda-refarch-mapreduce (JavaScript, 422 stars): This repo presents a reference architecture for running serverless MapReduce jobs. This has been implemented using AWS Lambda and Amazon S3.
71. aws-lambda-cpp (C++, 409 stars): C++ implementation of the AWS Lambda runtime
72. aws-cloudsaga (Python, 389 stars): AWS CloudSaga - Simulate security events in AWS
73. amazon-kinesis-producer (C++, 385 stars): Amazon Kinesis Producer Library
74. soci-snapshotter (Go, 383 stars)
75. pgbouncer-fast-switchover (C, 381 stars): Adds query routing and rewriting extensions to pgbouncer
76. serverless-photo-recognition (Kotlin, 378 stars): A collection of 3 lambda functions that are invoked by Amazon S3 or Amazon API Gateway to analyze uploaded images with Amazon Rekognition and save picture labels to ElasticSearch (written in Kotlin)
77. amazon-sagemaker-workshop (Jupyter Notebook, 378 stars): Amazon SageMaker workshops: Introduction, TensorFlow in SageMaker, and more
78. serverless-rules (Go, 378 stars): Compilation of rules to validate infrastructure-as-code templates against recommended practices for serverless applications.
79. logstash-output-amazon_es (Ruby, 374 stars): Logstash output plugin to sign and export logstash events to Amazon Elasticsearch Service
80. kinesis-aggregation (Java, 370 stars): AWS libraries/modules for working with Kinesis aggregated record data
81. smithy-rs (Rust, 369 stars): Code generation for the AWS SDK for Rust, as well as server and generic smithy client generation.
82. syne-tune (Python, 363 stars): Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
83. aws-sdk-kotlin (Kotlin, 359 stars): Multiplatform AWS SDK for Kotlin
84. dynamodb-transactions (Java, 354 stars)
85. amazon-kinesis-client-python (Python, 354 stars): Amazon Kinesis Client Library for Python
86. aws-serverless-data-lake-framework (Python, 349 stars): Enterprise-grade, production-hardened, serverless data lake on AWS
87. threat-composer (TypeScript, 346 stars): A simple threat modeling tool to help humans to reduce time-to-value when threat modeling
88. amazon-kinesis-agent (Java, 342 stars): Continuously monitors a set of log files and sends new data to the Amazon Kinesis Stream and Amazon Kinesis Firehose in near-real-time.
89. rds-snapshot-tool (Python, 337 stars): The Snapshot Tool for Amazon RDS automates the task of creating manual snapshots, copying them into a different account and a different region, and deleting them after a specified number of days
90. amazon-kinesis-scaling-utils (Java, 333 stars): The Kinesis Scaling Utility is designed to give you the ability to scale Amazon Kinesis Streams in the same way that you scale EC2 Auto Scaling groups – up or down by a count or as a percentage of the total fleet. You can also simply scale to an exact number of Shards. There is no requirement for you to manage the allocation of the keyspace to Shards when using this API, as it is done automatically.
91. amazon-kinesis-video-streams-producer-sdk-cpp (C++, 332 stars): Amazon Kinesis Video Streams Producer SDK for C++ is for developers to install and customize for their connected camera and other devices to securely stream video, audio, and time-encoded data to Kinesis Video Streams.
92. landing-zone-accelerator-on-aws (TypeScript, 330 stars): Deploy a multi-account cloud foundation to support highly-regulated workloads and complex compliance requirements.
93. route53-infima (Java, 326 stars): Library for managing service-level fault isolation using Amazon Route 53.
94. aws-automated-incident-response-and-forensics (326 stars)
95. mxboard (Python, 326 stars): Logging MXNet data for visualization in TensorBoard.
96. aws-sigv4-proxy (Go, 325 stars): This project signs and proxies HTTP requests with Sigv4
97. statelint (Ruby, 324 stars): A Ruby gem that provides a command-line validator for Amazon States Language JSON files.
98. graphstorm (Python, 317 stars): Enterprise graph machine learning framework for billion-scale graphs for ML scientists and data scientists.
99. ecs-nginx-reverse-proxy (Nginx, 317 stars): Reference architecture for deploying Nginx on ECS, both as a basic static resource server, and as a reverse proxy in front of a dynamic application server.
100. simplebeerservice (JavaScript, 316 stars): Simple Beer Service (SBS) is a cloud-connected kegerator that streams live sensor data to AWS.