• Stars: 105
• Rank: 326,345 (top 7%)
• Language: Python
• License: MIT License
• Created: over 9 years ago
• Updated: over 7 years ago

Repository Details

We use a modified neural network instead of a Gaussian process for Bayesian optimization.

Adaptive Neural Network Representations for Parallel and Scalable Bayesian Optimization

Neural Network Bayesian Optimization is a function optimization technique inspired by the work of:

Jasper Snoek et al.
Scalable Bayesian Optimization Using Deep Neural Networks
http://arxiv.org/abs/1502.05700

This repository contains the Python code, written by James Brofos and Rui Shu, for a modified approach that continually retrains the neural network underlying the optimization technique and that runs the technique in a parallelized setting for improved speed.
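
The core idea, following Snoek et al., is to fit a neural network to the observed (input, objective) pairs and treat its last hidden layer as a set of adaptive basis functions for Bayesian linear regression, which yields the closed-form predictive mean and variance needed by an acquisition function such as expected improvement. Below is a minimal NumPy sketch of such a surrogate; the function names are illustrative and do not correspond to this repository's modules, and training of the network weights is elided:

import numpy as np
from scipy.stats import norm

def basis(X, W, b):
    # Last-hidden-layer activations of a trained network, used as
    # adaptive basis functions phi(x). W and b would come from a
    # network retrained on the observations gathered so far.
    return np.tanh(X @ W + b)

def fit_blr(Phi, y, alpha=1.0, beta=100.0):
    # Bayesian linear regression on the basis functions: posterior
    # precision A and mean m over the output-layer weights.
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    m = beta * np.linalg.solve(A, Phi.T @ y)
    return m, A, beta

def predict(Phi_new, m, A, beta):
    # Closed-form predictive mean and variance at new points.
    mean = Phi_new @ m
    var = 1.0 / beta + np.einsum('ij,ji->i', Phi_new, np.linalg.solve(A, Phi_new.T))
    return mean, var

def expected_improvement(mean, var, y_best):
    # EI acquisition for minimization: expected gain below the
    # incumbent y_best, balancing exploitation and exploration.
    sigma = np.sqrt(var)
    z = (y_best - mean) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

In the modified approach, the network is retrained on all observations gathered so far at each iteration, the regression is refit, and the candidate maximizing expected improvement is evaluated next.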

Motivation

The success of most machine learning algorithms depends on the proper tuning of their hyperparameters. A popular technique for hyperparameter tuning is Bayesian optimization, which canonically uses a Gaussian process to interpolate the hyperparameter space. The computation time of GP-based Bayesian optimization, however, grows cubically with the sample size (the number of hyperparameter settings tested), since each posterior update solves a linear system in the n × n covariance matrix, and the method quickly becomes very time consuming, if not altogether intractable. Fortunately, a neural network is capable of mimicking the behavior of a Gaussian process while providing a significant reduction in computation time.

Dependencies

This code requires Python with NumPy; the parallel version additionally requires an MPI implementation (for mpiexec) and Python MPI bindings such as mpi4py.

Code Execution

To run the code from the home directory in parallel with 4 cores, simply call mpiexec:

mpiexec -np 4 python -m mpi.mpi_optimizer
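
Under the hood, the parallel version follows a master/worker pattern: a root process maintains the surrogate and proposes candidate points, while all processes evaluate the expensive black-box function concurrently. A minimal sketch of that pattern with mpi4py (illustrative only; the repository's mpi.mpi_optimizer module differs in detail):

# mpi_sketch.py: illustrative master/worker pattern with mpi4py;
# not the repository's mpi.mpi_optimizer module.
# Run with: mpiexec -np 4 python mpi_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def expensive_black_box(x):
    # Stand-in for the hidden function; in practice this is a slow
    # evaluation, e.g. training a model with hyperparameters x.
    return float(np.sum(np.sin(x) + 0.1 * x ** 2))

for it in range(10):
    if rank == 0:
        # The root proposes one candidate per rank; the real optimizer
        # would pick maximizers of its acquisition function instead.
        candidates = [np.random.uniform(-5, 5, 2) for _ in range(size)]
    else:
        candidates = None
    x = comm.scatter(candidates, root=0)   # one point per rank
    y = expensive_black_box(x)             # evaluated concurrently
    results = comm.gather((x, y), root=0)  # root collects (x, y) pairs
    if rank == 0:
        # ...retrain the neural-network surrogate on the results here...
        print('iteration %d, best so far: %.4f' % (it, min(r[1] for r in results)))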

To run a sequential version of the code:

python -m sequential.seq_optimizer

To run the Gaussian process version of Bayesian optimization (a sketch of the GP posterior computation follows the sample output below):

python -m sequential.seq_gaussian_process

Sample output:

Randomly query a set of initial points...  Complete initial dataset acquired
Performing optimization... 
0.100 completion...
0.200 completion...
0.300 completion...
0.400 completion...
0.500 completion...
0.600 completion...
0.700 completion...
0.800 completion...
0.900 completion...
1.000 completion...
Sequential gp optimization task complete.
Best evaluated point is:
[-0.31226245  3.80792522]
Predicted best point is:
[-0.31226245  3.7755048 ]
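
For comparison, the GP version's predictions come from the standard Gaussian process posterior; the Cholesky factorization of the n × n kernel matrix is the O(n³) step discussed in the Motivation. A minimal sketch of that computation, not the repository's seq_gaussian_process module:

import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, X_new, noise=1e-4):
    # Standard GP posterior mean and variance. The Cholesky of the
    # n x n kernel matrix is the O(n^3) step that the neural-network
    # surrogate is designed to avoid.
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, X_new)
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.diag(rbf(X_new, X_new)) - np.sum(v**2, axis=0)
    return mean, var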

Note: The code, as written, can optimize any black-box function. A few common test functions are available in learning_objective, and the chosen function is set in hidden_function.py. To appreciate the time savings gained by the parallelized code, keep in mind that evaluating a real-world black-box function (e.g., computing the test performance of an ML algorithm for a given set of hyperparameters) takes substantial time.

This can be simulated by uncommenting the # time.sleep(2) line in hidden_function.py.
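
A hidden function simulating that cost might look like the following sketch (illustrative; not the actual contents of hidden_function.py):

import time
import numpy as np

def evaluate(x):
    # Simulated black-box objective; uncommenting the sleep mimics
    # the cost of a real evaluation, e.g. training a model.
    # time.sleep(2)
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.sin(x) + 0.1 * x ** 2))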

More Repositories

1. vae-clustering: Unsupervised clustering with (Gaussian mixture) VAEs (Jupyter Notebook, 287 stars)
2. dirt-t: A DIRT-T Approach to Unsupervised Domain Adaptation (ICLR 2018) (Python, 174 stars)
3. cvae: Conditional variational autoencoder implementation in Torch (Jupyter Notebook, 102 stars)
4. tensorsketch: A lightweight library for TensorFlow 2.0 (Python, 66 stars)
5. vae-experiments: Code for some experiments with variational autoencoders on multi-modality and Atari video prediction; the Atari video prediction is work in progress (Lua, 62 stars)
6. micro-projects: A collection of small code snippets for learning how to code (Jupyter Notebook, 58 stars)
7. tensorbayes: Deep variational inference in TensorFlow (Python, 56 stars)
8. began: Boundary equilibrium GAN implementation in TensorFlow (Python, 15 stars)
9. kaos: Deep variational inference library for Keras (Python, 15 stars)
10. fast-style-transfer: Fast style transfer in TensorFlow (Python, 14 stars)
11. tensorflow-gp: Implementation of Gaussian processes and Bayesian optimization in TensorFlow (Jupyter Notebook, 11 stars)
12. one-bit-vae: A silly and weirdly useful experiment attempting to encode one bit of information with a VAE (Jupyter Notebook, 11 stars)
13. variational-autoencoder: Basic implementation of variational autoencoders in Torch (Jupyter Notebook, 9 stars)
14. acgan-biased: Experiments verifying that AC-GAN downsamples points near the decision boundary (NIPS BDL 2017) (Python, 9 stars)
15. deep-generative-models: Deep generative models in TensorFlow (Python, 6 stars)
16. ConvFeFe: The best neural network (Python, 4 stars)
17. bcde: Bottleneck Conditional Density Estimation (ICML 2017) (Python, 4 stars)
18. vda-hax: Simple tricks to improve visual domain adaptation for MNIST -> SVHN (Python, 3 stars)