
Parallel Hyperparameter Tuning in Python

Mango: A parallel hyperparameter tuning library

Mango is a Python library to find the optimal hyperparameters for machine learning classifiers. Mango enables parallel optimization over complex search spaces of continuous/discrete/categorical values.

Check out the quick 12-second demo of Mango approximating a complex decision boundary of an SVM:

Mango has the following salient features:

  • Easily define complex search spaces that are compatible with scikit-learn.
  • A novel state-of-the-art gradient-free optimizer for continuous/discrete/categorical values.
  • Modular design to schedule objective function on local, cluster, or cloud infrastructure.
  • Failure detection in the application layer for scalability on commodity hardware.
  • New features are continuously added based on testing and usage in production settings.

Index

  1. Installation
  2. Getting started
  3. Hyperparameter tuning example
  4. Search space definitions
  5. Scheduler
  6. Optional configurations
  7. Additional features
  8. CASH feature
  9. Platform-aware neural architecture search
  10. Mango introduction slides & Mango production usage slides
  11. Core Mango research papers to cite and novel applications built over Mango

1. Installation

Using pip:

pip install arm-mango

From source:

$ git clone https://github.com/ARM-software/mango.git
$ cd mango
$ pip3 install .
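
To verify the installation (a quick check, assuming a Python 3 environment), the top-level imports used throughout this README should succeed:

$ python3 -c "from mango import Tuner, scheduler; print('mango is installed')"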

2. Getting Started

Mango is straightforward to use. The following example minimizes a quadratic function whose input is an integer in range(-10, 10).

from mango import scheduler, Tuner

# Search space
param_space = dict(x=range(-10,10))

# Quadratic objective Function
@scheduler.serial
def objective(x):
    return x * x

# Initialize and run Tuner
tuner = Tuner(param_space, objective)
results = tuner.minimize()

print(f'Optimal value of parameters: {results["best_params"]} and objective: {results["best_objective"]}')
# => Optimal value of parameters: {'x': 0}  and objective: 0

3. Hyperparameter Tuning Example

from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

from mango import Tuner, scheduler

# search space for KNN classifier's hyperparameters
# n_neighbors can vary between 1 and 50, with different choices of algorithm
param_space = dict(n_neighbors=range(1, 50),
                   algorithm=['auto', 'ball_tree', 'kd_tree', 'brute'])


@scheduler.serial
def objective(**params):
    X, y = datasets.load_breast_cancer(return_X_y=True)
    clf = KNeighborsClassifier(**params)
    score = cross_val_score(clf, X, y, scoring='accuracy').mean()
    return score


tuner = Tuner(param_space, objective)
results = tuner.maximize()
print('best parameters:', results['best_params'])
print('best accuracy:', results['best_objective'])
# => best parameters: {'algorithm': 'ball_tree', 'n_neighbors': 11}
# => best accuracy: 0.9332401800962584

Note that the best parameters may differ between runs, but the accuracy should be around 0.93. More examples are available in the examples directory (Facebook's Prophet, XGBoost, SVM).

4. Search Space

The search space defines the range and distribution of input parameters to the objective function. Mango's search space is compatible with scikit-learn's parameter space definitions used in RandomizedSearchCV or GridSearchCV. The search space is defined as a dictionary whose keys are the parameter names (strings) and whose values are a list of discrete choices, a range of integers, or a distribution. Examples of some common search spaces are:

Integer

The following space defines x as an integer parameter with values in range(-10, 11) (11 is not included):

param_space = dict(x=range(-10, 11)) #=> -10, -9, ..., 10
# you can use steps for sparse ranges
param_space = dict(x=range(0, 101, 10)) #=> 0, 10, 20, ..., 100

Integers are uniformly sampled from the given range and are assumed to be ordered and treated as continuous variables.

Categorical

Discrete categories can be defined as lists. For example:

# string
param_space = dict(color=['red', 'blue', 'green'])
# float
param_space = dict(v=[0.2, 0.1, 0.3])
# mixed
param_space = dict(max_features=['auto', 0.2, 0.3])

Lists are uniformly sampled and are assumed to be unordered. They are one-hot encoded internally.

Distributions

All the distributions supported by scipy.stats are supported. In general, distributions must provide a rvs method for sampling.
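
For instance, standard frozen scipy.stats distributions expose rvs and can be used directly as parameter values (a small sketch, not tied to any particular example in this README):

from scipy.stats import randint, expon

param_space = dict(
    n_estimators=randint(50, 500),  # integers sampled uniformly from [50, 500)
    gamma=expon(scale=0.1)          # positive floats from an exponential distribution
)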

Uniform distribution

Using uniform(loc, scale) one obtains the uniform distribution on [loc, loc + scale].

from scipy.stats import uniform

# uniformly distributed between -1 and 1
param_space = dict(a=uniform(-1, 2))

Log uniform distribution

We have added a loguniform distribution by extending the scipy.stats.distributions constructs. Using loguniform(loc, scale) one obtains the loguniform distribution on [10^loc, 10^(loc + scale)].

from mango.domain.distribution import loguniform

# log uniformly distributed between 10^-3 and 10^-1
param_space = dict(learning_rate=loguniform(-3, 2))

Hyperparameter search space examples

Example hyperparameter search space for Random Forest Classifier:

param_space =  dict(
    max_features=['sqrt', 'log2', .1, .3, .5, .7, .9],
    n_estimators=range(10, 1000, 50), # 10 to 1000 in steps of 50
    bootstrap=[True, False],
    max_depth=range(1, 20),
    min_samples_leaf=range(1, 10)
)

Example search space for XGBoost Classifier:

from scipy.stats import uniform
from mango.domain.distribution import loguniform

param_space = {
    'n_estimators': range(10, 2001, 100), # 10 to 2000 in steps of 100
    'max_depth': range(1, 15), # 1 to 14
    'reg_alpha': loguniform(-3, 6),  # 10^-3 to 10^3
    'booster': ['gbtree', 'gblinear'],
    'colsample_bylevel': uniform(0.05, 0.95), # 0.05 to 1.0
    'colsample_bytree': uniform(0.05, 0.95), # 0.05 to 1.0
    'learning_rate': loguniform(-3, 3),  # 0.001 to 1
    'reg_lambda': loguniform(-3, 6),  # 10^-3 to 10^3
    'min_child_weight': loguniform(0, 2), # 1 to 100
    'subsample': uniform(0.1, 0.89) # 0.1 to 0.99
}

Example search space for SVM:

from scipy.stats import uniform
from mango.domain.distribution import loguniform

param_dict = {
    'kernel': ['rbf', 'sigmoid'],
    'gamma': uniform(0.1, 4), # 0.1 to 4.1
    'C': loguniform(-7, 8) # 10^-7 to 10^1
}

5. Scheduler

Mango is designed to take advantage of distributed computing. The objective function can be scheduled to run locally or on a cluster with parallel evaluations. Mango is designed to allow the use of any distributed computing framework (like Celery or Kubernetes). The scheduler module comes with some pre-defined schedulers.

Serial scheduler

The serial scheduler runs locally with one objective function evaluation at a time:

from mango import scheduler

@scheduler.serial
def objective(x):
    return x * x

Parallel scheduler

The parallel scheduler runs locally and uses joblib to evaluate the objective functions in parallel:

from mango import scheduler

@scheduler.parallel(n_jobs=2)
def objective(x):
    return x * x

n_jobs specifies the number of parallel evaluations. n_jobs = -1 uses all the available CPU cores on the machine. See simple_parallel for a full working example.
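
Putting it together, a parallel objective plugs into the Tuner the same way as a serial one (a minimal sketch; batch_size is an optional configuration described later):

from mango import Tuner, scheduler

param_space = dict(x=range(-10, 10))

@scheduler.parallel(n_jobs=2)
def objective(x):
    return x * x

# batches of proposed configurations are evaluated concurrently via joblib
tuner = Tuner(param_space, objective, dict(batch_size=2))
results = tuner.minimize()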

Custom distributed scheduler

Users can define their own distribution strategies using the custom scheduler. To do so, define an objective function that takes a list of parameters and returns the list of results:

from mango import scheduler

@scheduler.custom(n_jobs=4)
def objective(params_batch):
    """ Template for custom distributed objective function
    Args:
        params_batch (list): Batch of parameter dictionaries to be evaluated in parallel

    Returns:
        list: Values of objective function at given parameters
    """
    # evaluate the objective on a distributed framework
    ...
    return results

For example, the following snippet uses Celery:

import celery
from mango import Tuner, scheduler

# connect to celery backend
app = celery.Celery('simple_celery', backend='rpc://')

# remote celery task
@app.task
def remote_objective(x):
    return x * x

@scheduler.custom(n_jobs=4)
def objective(params_batch):
    jobs = celery.group(remote_objective.s(params['x']) for params in params_batch)()
    return jobs.get()

param_space = dict(x=range(-10, 10))

tuner = Tuner(param_space, objective)
results = tuner.minimize()

A working example to tune hyperparameters of KNN using Celery is here.

6. Optional configurations

The default configuration parameters used by Mango are as below:

{'param_dict': ...,
 'userObjective': ...,
 'domain_size': 5000,
 'initial_random': 1,
 'num_iteration': 20,
 'batch_size': 1}

The configuration parameters are:

  • domain_size: The size of the domain explored in each iteration by the Gaussian process. Generally, a larger size is preferred when optimizing higher-dimensional functions. More on this will be added along with details about the internals of Bayesian optimization.
  • initial_random: The number of random samples tried. Note: Mango returns all the random samples together. Users can exploit this to parallelize the random runs without any constraint.
  • num_iteration: The total number of iterations used by Mango to find the optimal value.
  • batch_size: The size of args_list passed to the objective function for parallel evaluation. For larger batch sizes, Mango internally uses intelligent sampling to decide the optimal samples to evaluate.
  • early_stopping: A Callable to specify custom stopping criteria. The callback has the following signature:
    def early_stopping(results):
       '''
           results is the same as dict returned by tuner
           keys available: params_tries, objective_values,
               best_objective, best_params
       '''
       ...
       return True/False
    Early stopping is one of Mango's important features: it allows terminating the current parallel search early based on custom user-designed criteria, such as the total optimization time spent, the current validation accuracy achieved, or the improvement over the past few iterations. For usage, see the early stopping examples notebook.
  • constraint: A callable to specify constraints on parameter space. It has the following signature:
    def constraint(samples: List[dict]) -> List[bool]:
      '''
          Given a list of samples (each sample is a dict with parameter names as keys)
          Returns a list of True/False elements indicating whether the corresponding sample
          satisfies the constraints or not
      '''
    See this notebook for an example.
  • initial_custom: A list of initial evaluation points to warm up the optimizer instead of random sampling. For example, for a search space with two parameters x1 and x2 the input could be: [{'x1': 10, 'x2': -5}, {'x1': 0, 'x2': 10}]. This allows the user to customize the initial evaluation points and therefore guide the optimization process. If this option is given then initial_random is ignored.

The default configuration parameters can be modified, as shown below. Only the parameters whose values need to be adjusted have to be passed in the dictionary.

conf_dict = dict(num_iteration=40, domain_size=10000, initial_random=3)

tuner = Tuner(param_dict, objective, conf_dict)
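
As a further sketch (with a hypothetical objective and the hypothetical parameters x1 and x2 from the initial_custom example above), early_stopping and constraint callables are passed through the same configuration dictionary:

def early_stopping(results):
    # stop as soon as the best objective so far exceeds a target value
    return results['best_objective'] > 0.95

def constraint(samples):
    # keep only samples where x1 < x2 (hypothetical parameters)
    return [s['x1'] < s['x2'] for s in samples]

conf_dict = dict(num_iteration=40, early_stopping=early_stopping, constraint=constraint)
tuner = Tuner(param_dict, objective, conf_dict)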

7. Additional Features

Handling runtime failed evaluation

Failed evaluations at runtime are widespread in production deployments. Mango's abstractions enable users to make progress even in the presence of failures by using only the successful evaluations. The objective function can return just the successful evaluations, and the user can flexibly keep track of failures, for example by using timeouts. Examples showing the usage of Mango in the presence of failures: serial execution and parallel execution.

Neural Architecture Search

Mango can also perform efficient neural architecture search. An example on the MNIST dataset that searches for optimal filter sizes, number of filters, etc., is available.

More extensive examples are available in the THIN-Bayes folder, covering neural architecture search for a class of neural networks as well as classical models on different regression and classification tasks.

8. Combined Classifier Selection and Optimization (CASH)

Mango now provides the novel functionality of combined classifier selection and optimization. It allows developers to directly specify a set of classifiers along with their different hyperparameter spaces. Mango internally finds the best classifier along with the optimal parameters with the least possible number of overall iterations. Examples are available here.

The important parts of the skeleton code are shown below.

from mango import MetaTuner

# define search spaces and objective functions as done for the Tuner

param_space_list = [param_space1, param_space2, param_space3, param_space4, ..]
objective_list = [objective_1, objective_2, objective_3, objective_4, ..]

metatuner = MetaTuner(param_space_list, objective_list)

results = metatuner.run()

print('best_objective:',results['best_objective'])
print('best_params:',results['best_params'])
print('best_objective_fid:',results['best_objective_fid'])
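
For instance, a minimal concrete sketch (an assumed setup reusing the KNN example above and adding an SVC, with objectives defined the same way as for the Tuner using the serial scheduler) might look as follows:

from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from mango import MetaTuner, scheduler

X, y = datasets.load_breast_cancer(return_X_y=True)

param_space_knn = dict(n_neighbors=range(1, 50))
param_space_svc = dict(C=[0.01, 0.1, 1, 10], kernel=['rbf', 'sigmoid'])

@scheduler.serial
def objective_knn(**params):
    clf = KNeighborsClassifier(**params)
    return cross_val_score(clf, X, y, scoring='accuracy').mean()

@scheduler.serial
def objective_svc(**params):
    clf = SVC(**params)
    return cross_val_score(clf, X, y, scoring='accuracy').mean()

metatuner = MetaTuner([param_space_knn, param_space_svc], [objective_knn, objective_svc])
results = metatuner.run()
print('best_objective:', results['best_objective'])
print('best_params:', results['best_params'])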

Participate

Core Papers to Cite Mango

More technical details are available in the Mango paper-1 (ICASSP 2020) and Mango paper-2 (CogMI 2021). Please cite them as:

@inproceedings{sandha2020mango,
  title={Mango: A Python Library for Parallel Hyperparameter Tuning},
  author={Sandha, Sandeep Singh and Aggarwal, Mohit and Fedorov, Igor and Srivastava, Mani},
  booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={3987--3991},
  year={2020},
  organization={IEEE}
}
@inproceedings{sandha2021mango,
  title={Enabling Hyperparameter Tuning of Machine Learning Classifiers in Production},
  author={Sandha, Sandeep Singh and Aggarwal, Mohit and Saha, Swapnil Sayan and Srivastava, Mani},
  booktitle={CogMI 2021, IEEE International Conference on Cognitive Machine Intelligence},
  year={2021},
  organization={IEEE}
}

Novel Applications built over Mango

@article{saha2022auritus,
  title={Auritus: An open-source optimization toolkit for training and development of human movement models and filters using earables},
  author={Saha, Swapnil Sayan and Sandha, Sandeep Singh and Pei, Siyou and Jain, Vivek and Wang, Ziqi and Li, Yuchen and Sarker, Ankur and Srivastava, Mani},
  journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
  volume={6},
  number={2},
  pages={1--34},
  year={2022},
  publisher={ACM New York, NY, USA}
}
@article{saha2022tinyodom,
  title={Tinyodom: Hardware-aware efficient neural inertial navigation},
  author={Saha, Swapnil Sayan and Sandha, Sandeep Singh and Garcia, Luis Antonio and Srivastava, Mani},
  journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
  volume={6},
  number={2},
  pages={1--32},
  year={2022},
  publisher={ACM New York, NY, USA}
}
@article{saha2022thin,
  title={THIN-Bayes: Platform-Aware Machine Learning for Low-End IoT Devices},
  author={Saha, Swapnil Sayan and Sandha, Sandeep Singh and Aggarwal, Mohit and Srivastava, Mani},
  year={2022}
}

Slides

Slides explaining Mango abstractions and design choices are available. Mango Slides-1, Mango Slides-2.

Contribute

Please take a look at open issues if you are looking for areas to contribute to.

Questions

For any questions feel free to reach out by creating an issue here.
