🚀 Fast model deployment on any cloud
bentoctl helps deploy any machine learning model as a production-ready API endpoint on the cloud, supporting AWS SageMaker, AWS Lambda, EC2, Google Compute Engine, Azure, Heroku, and more.
👉 Join our Slack community today!
✨ Looking to deploy your ML service quickly? Check out BentoML Cloud for the easiest and fastest way to deploy your bento. It's a full-featured, serverless environment with a model repository and built-in monitoring and logging.
Highlights
- Framework-agnostic model deployment for TensorFlow, PyTorch, XGBoost, Scikit-learn, ONNX, and many more via BentoML, the unified model serving framework.
- Simplify the deployment lifecycle: deploy, update, delete, and roll back (see the sketch after this list).
- Take full advantage of BentoML's performance optimizations and cloud platform features out-of-the-box.
- Tailor bentoctl to your DevOps needs by customizing the deployment operator and the Terraform templates.
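As a concrete sketch of that lifecycle, the typical flow pairs bentoctl with Terraform. The commands below follow the documented bentoctl workflow, but the bento tag `my_service:latest` is a placeholder and exact flags can vary between versions:

```bash
# Deploy: package an existing bento as a deployable image and
# generate Terraform files, then create the cloud resources.
bentoctl build -b my_service:latest -f deployment_config.yaml
terraform init
terraform apply -var-file=bentoctl.tfvars -auto-approve

# Update: rebuild against a new bento tag and re-apply.
bentoctl build -b my_service:v2 -f deployment_config.yaml
terraform apply -var-file=bentoctl.tfvars -auto-approve

# Delete: tear down the deployment and its cloud resources.
bentoctl destroy -f deployment_config.yaml
```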
Getting Started
- 💻 Quickstart Guide - Deploy your first model to AWS Lambda as a serverless API endpoint; a sketch of the flow follows this list.
- 📖 Core Concepts - Learn the core concepts in bentoctl.
- 🕹️ Operators List - List of official operators and advanced configuration options.
- 💬 Join Community Slack - Get help from our community and maintainers.
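For a preview of what the Quickstart covers: `bentoctl init` interactively generates a `deployment_config.yaml` describing the target operator. The sketch below mirrors the AWS Lambda operator's documented layout, with placeholder values for `name` and the `spec` fields:

```bash
# Install the AWS Lambda operator and generate a deployment config.
bentoctl operator install aws-lambda
bentoctl init   # interactive; writes deployment_config.yaml

# The generated file looks roughly like this (placeholder values):
cat deployment_config.yaml
# api_version: v1
# name: quickstart
# operator:
#   name: aws-lambda
# template: terraform
# spec:
#   region: us-west-1
#   timeout: 10
#   memory_size: 512
```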
Supported Platforms:
- AWS Lambda
- AWS SageMaker
- AWS EC2
- Google Cloud Run
- Google Compute Engine
- Azure Container Instances
- Heroku
Upcoming
Custom Operator
Users can build custom bentoctl plugins from the deployment operator template to deploy to cloud platforms that are not yet supported, or to internal infrastructure.
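As a rough sketch of that path (the template repository name and local paths here are assumptions, not guaranteed), a custom operator can be developed from the template and then installed from a local checkout:

```bash
# Clone the operator template and implement its hooks
# (operator metadata, image build, and Terraform templates).
git clone https://github.com/bentoml/bentoctl-operator-template my-operator

# Install the work-in-progress operator from the local path so
# bentoctl can target it like any official operator.
bentoctl operator install ./my-operator
```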
If you are looking to deploy with Kubernetes, check out Yatai: model deployment at scale on Kubernetes.
Installation
```bash
pip install bentoctl
```
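To verify the install, the CLI exposes the usual version and help flags:

```bash
bentoctl --version   # should print the installed bentoctl version
bentoctl --help      # lists available commands, e.g. init, build, operator
```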
> 💡 bentoctl is designed to work with BentoML version 1.0.0 and above. For BentoML 0.13 or below, use the pre-v1.0 branch in the operator repositories and follow the instructions in the README. You can also check out the quickstart guide for 0.13 here.
Community
- To report a bug or suggest a feature request, use GitHub Issues.
- For other discussions, use GitHub Discussions under the BentoML repo.
- To receive release announcements and get support, join us on Slack.
Contributing
There are many ways to contribute to the project:
- Create and share new operators. Use deployment operator template to get started.
- If you have any feedback on the project, share it with the community in GitHub Discussions under the BentoML repo.
- Report issues you're facing and "Thumbs up" on issues and feature requests that are relevant to you.
- Investigate bugs and review other developers' pull requests.
Usage Reporting
BentoML and bentoctl collect usage data that helps our team improve the product. Only bentoctl's CLI command calls are reported. We strip out as much potentially sensitive information as possible, and we will never collect user code, model data, model names, or stack traces. Here's the code for usage tracking. You can opt out of usage tracking by setting the environment variable BENTOML_DO_NOT_TRACK=True:
```bash
export BENTOML_DO_NOT_TRACK=True
```
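Since the variable is read per process, it can also be set for a single invocation rather than exported shell-wide (the bento tag below is a placeholder):

```bash
BENTOML_DO_NOT_TRACK=True bentoctl build -b my_service:latest -f deployment_config.yaml
```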