Note: This repository was archived on 30 April 2024.

Optimization Examples with SigOpt

Getting Started with SigOpt

Welcome to the SigOpt Examples. These examples show you how to use SigOpt for model tuning tasks in various machine learning environments.

Requirements

Most of these examples will run on any Linux or Mac OS X machine from the command line. Each example contains a README.md with specific setup instructions.

Questions?

Visit the SigOpt Community page to ask your questions.

API Reference

To implement SigOpt for your use case, feel free to use or extend the code in this repository. Our API can bolt on top of any complex model or process and guide it to its optimal configuration in as few iterations as possible.
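
The sketch below is not taken from this repository; it only shows the general shape of that loop with the SigOpt Python client: create an experiment, ask for suggestions, evaluate them locally, and report observations. The API token, parameter names, budget, and `evaluate_model` function are placeholders.

```python
# Minimal sketch of the SigOpt optimization loop (Python client).
# YOUR_API_TOKEN and evaluate_model() are placeholders, not code from this repository.
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

# Describe the tunable parameters and the evaluation budget.
experiment = conn.experiments().create(
    name="Example experiment",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1.0)),
        dict(name="max_depth", type="int", bounds=dict(min=2, max=10)),
    ],
    observation_budget=30,
)

def evaluate_model(assignments):
    # Placeholder objective: train and score your own model with the suggested values.
    return -((assignments["learning_rate"] - 0.1) ** 2)

# Ask SigOpt for a suggestion, evaluate it locally, and report the result.
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    value = evaluate_model(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=value,
    )
```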

About SigOpt

With SigOpt, data scientists and machine learning engineers can build better models with less trial and error.

Machine learning models depend on hyperparameters that trade off bias and variance, among other key outcomes. SigOpt provides Bayesian hyperparameter optimization built on an ensemble of state-of-the-art optimization techniques.

SigOpt can tune any machine learning model, including popular techniques like gradient boosting, deep neural networks, and support vector machines. SigOpt’s REST API and client libraries (Python, R, Java) integrate into any existing ML workflow.
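
For example, a gradient boosting model could be described to SigOpt with a parameter list like the sketch below; the parameter names mirror scikit-learn's GradientBoostingClassifier arguments, and the ranges are illustrative rather than taken from this repository.

```python
# Illustrative hyperparameter space for a gradient boosting model.
# Names follow scikit-learn's GradientBoostingClassifier; ranges are examples only.
gbm_parameters = [
    dict(name="n_estimators", type="int", bounds=dict(min=50, max=500)),
    dict(name="max_depth", type="int", bounds=dict(min=2, max=8)),
    dict(name="learning_rate", type="double", bounds=dict(min=0.01, max=0.3)),
    dict(name="subsample", type="double", bounds=dict(min=0.5, max=1.0)),
]
```

This list can be passed as the `parameters` argument when creating the experiment, as in the loop sketched above.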

SigOpt augments your existing model training pipeline, suggesting parameter configurations to maximize any online or offline objective, such as AUC ROC, model accuracy, or revenue. You only send SigOpt your metadata, not the underlying training data or model.
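
A hypothetical objective function makes that boundary concrete: the training data and the model stay on your machine, and only the suggested hyperparameter values and the resulting metric value cross the API.

```python
# Hypothetical objective: data and model never leave your machine; only the
# suggested hyperparameter values come in and a single metric value goes out.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

def evaluate_model(assignments):
    model = GradientBoostingClassifier(
        n_estimators=assignments["n_estimators"],
        max_depth=assignments["max_depth"],
        learning_rate=assignments["learning_rate"],
        subsample=assignments["subsample"],
    )
    # Cross-validated ROC AUC is the offline objective reported back to SigOpt.
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```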

Visit our website to learn more!