NASBench: A Neural Architecture Search Dataset and Benchmark

This repository contains the code used for generating and interacting with the NASBench dataset. The dataset contains 423,624 unique neural networks exhaustively generated and evaluated from a fixed graph-based search space.

Each network is trained and evaluated multiple times on CIFAR-10 at various training budgets, and we present the metrics in a queryable API. The current release contains over 5 million trained and evaluated models.

Our paper can be found at:

NAS-Bench-101: Towards Reproducible Neural Architecture Search

If you use this dataset, please cite:

@InProceedings{pmlr-v97-ying19a,
    title =     {{NAS}-Bench-101: Towards Reproducible Neural Architecture Search},
    author =    {Ying, Chris and Klein, Aaron and Christiansen, Eric and Real, Esteban and Murphy, Kevin and Hutter, Frank},
    booktitle = {Proceedings of the 36th International Conference on Machine Learning},
    pages =     {7105--7114},
    year =      {2019},
    editor =    {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
    volume =    {97},
    series =    {Proceedings of Machine Learning Research},
    address =   {Long Beach, California, USA},
    month =     {09--15 Jun},
    publisher = {PMLR},
    url =       {http://proceedings.mlr.press/v97/ying19a.html},
}

Dataset overview

NASBench is a tabular dataset which maps convolutional neural network architectures to their trained and evaluated performance on CIFAR-10. Specifically, all networks share the same network "skeleton", which can be seen in Figure (a) below. What changes between different models is the "module", which is a collection of neural network operations linked in an arbitrary graph-like structure.

Modules are represented by directed acyclic graphs with up to 7 vertices and 9 edges. The valid operations at each vertex are "3x3 convolution", "1x1 convolution", and "3x3 max-pooling". Figure (b) below shows an Inception-like cell within the dataset. Figure (c) shows a high-level overview of how the interior filter counts of each module are computed.
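
To make these constraints concrete, here is a minimal validity check written for this README. It restates the limits above as plain Python; the operation strings are the labels used in the repository's example.py, and the authoritative validation logic lives in nasbench/api.py, so treat this as a sketch rather than the library's own check.

# Minimal sketch of the module constraints described above (illustrative only;
# the authoritative checks live in nasbench/api.py).
MAX_VERTICES = 7
MAX_EDGES = 9
VALID_OPS = {'conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3'}

def looks_valid(matrix, ops):
    """Rough check that (matrix, ops) satisfies the basic module constraints."""
    n = len(matrix)
    if n > MAX_VERTICES or len(ops) != n:
        return False
    # Modules are DAGs encoded as upper-triangular 0/1 adjacency matrices.
    if any(matrix[i][j] for i in range(n) for j in range(i + 1)):
        return False
    if sum(sum(row) for row in matrix) > MAX_EDGES:
        return False
    # First vertex is the input, last is the output; interior ops must be valid.
    return (ops[0] == 'input' and ops[-1] == 'output'
            and all(op in VALID_OPS for op in ops[1:-1]))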

There are exactly 423,624 computationally unique modules within this search space, and each one has been trained three times at each of 4, 12, 36, and 108 epochs (423K × 3 × 4 ≈ 5M trained models in total). We report the following metrics (see the query sketch after this list):

  • training accuracy
  • validation accuracy
  • testing accuracy
  • number of parameters
  • training time
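
As a concrete example of retrieving these metrics, the sketch below queries a single model at two of the four epoch budgets. It assumes the `epochs` keyword of `NASBench.query` and a `model_spec` constructed as in the usage example later in this README; the metric keys shown are based on nasbench/api.py's documented return values, so double-check them against the source.

# Sketch: query the same model at two epoch budgets (assumes `nasbench` and
# `model_spec` are built as in the usage example below; keys per api.py docs).
for epochs in (12, 108):
    data = nasbench.query(model_spec, epochs=epochs)
    print('%d epochs: train=%.4f valid=%.4f test=%.4f params=%d time=%.1fs' % (
        epochs,
        data['train_accuracy'],
        data['validation_accuracy'],
        data['test_accuracy'],
        data['trainable_parameters'],
        data['training_time']))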

The scatterplot below shows a comparison of number of parameters, training time, and mean validation accuracy of models trained for 108 epochs in the dataset.

See our paper for more detailed information about the design of this search space, further implementation details, and more in-depth analysis.

Colab

You can directly use this dataset from Google Colaboratory without needing to install anything on your local machine. Click "Open in Colab" below:

Open In Colab

Setup

  1. Clone this repo.
git clone https://github.com/google-research/nasbench
cd nasbench
  2. (optional) Create a virtualenv for this library.
virtualenv venv
source venv/bin/activate
  3. Install the project along with dependencies.
pip install -e .

Note: the only required dependency is TensorFlow. The above instructions will install the CPU version of TensorFlow to the virtualenv. For other install options, see https://www.tensorflow.org/install/.
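
As a quick sanity check (not an official setup step), you can confirm that the package and its TensorFlow dependency import cleanly:

python -c "from nasbench import api; print('nasbench imported OK')"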

Download the dataset

The full dataset (which includes all 5M data points at all 4 epoch lengths):

https://storage.googleapis.com/nasbench/nasbench_full.tfrecord

Size: ~1.95 GB, SHA256: 3d64db8180fb1b0207212f9032205064312b6907a3bbc81eabea10db2f5c7e9c


Subset of the dataset with only models trained at 108 epochs:

https://storage.googleapis.com/nasbench/nasbench_only108.tfrecord

Size: ~499 MB, SHA256: 4c39c3936e36a85269881d659e44e61a245babcb72cb374eacacf75d0e5f4fd1
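
Because both files ship with SHA256 checksums, it's worth verifying a download before use. Below is a small standard-library sketch; the filename is whatever path you saved the tfrecord to.

# Sketch: compute the SHA256 of a downloaded tfrecord for comparison with the
# checksums listed above (uses only the standard library).
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of('nasbench_only108.tfrecord'))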

Using the dataset

Example usage (see example.py for a full runnable example):

from nasbench import api

# Operation labels used by the dataset (string values as defined in the
# repository's example.py).
INPUT = 'input'
OUTPUT = 'output'
CONV3X3 = 'conv3x3-bn-relu'
CONV1X1 = 'conv1x1-bn-relu'
MAXPOOL3X3 = 'maxpool3x3'

# Load the data from file (this will take some time)
nasbench = api.NASBench('/path/to/nasbench.tfrecord')

# Create an Inception-like module (5x5 convolution replaced with two 3x3
# convolutions).
model_spec = api.ModelSpec(
    # Adjacency matrix of the module
    matrix=[[0, 1, 1, 1, 0, 1, 0],    # input layer
            [0, 0, 0, 0, 0, 0, 1],    # 1x1 conv
            [0, 0, 0, 0, 0, 0, 1],    # 3x3 conv
            [0, 0, 0, 0, 1, 0, 0],    # 5x5 conv (replaced by two 3x3's)
            [0, 0, 0, 0, 0, 0, 1],    # 5x5 conv (replaced by two 3x3's)
            [0, 0, 0, 0, 0, 0, 1],    # 3x3 max-pool
            [0, 0, 0, 0, 0, 0, 0]],   # output layer
    # Operations at the vertices of the module, matches order of matrix
    ops=[INPUT, CONV1X1, CONV3X3, CONV3X3, CONV3X3, MAXPOOL3X3, OUTPUT])

# Query this model from dataset, returns a dictionary containing the metrics
# associated with this model.
data = nasbench.query(model_spec)

See nasbench/api.py for more information, including the constraints on valid module matrices and operations.

Note: you are not required to use nasbench/api.py to work with this dataset. You can see how the dataset files are parsed in the initializer inside nasbench/api.py and then interact with the data however you'd like.
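
Beyond point queries, api.py also exposes iteration over every unique module via hash_iterator() and get_metrics_from_hash(). The sketch below follows the docstrings in nasbench/api.py; the exact field names (e.g. 'final_test_accuracy', 'trainable_parameters') should be verified against the source before relying on them.

# Sketch: iterate over the unique modules in the dataset (field names taken
# from the api.py docstrings; verify against the source).
for module_hash in nasbench.hash_iterator():
    fixed_stats, computed_stats = nasbench.get_metrics_from_hash(module_hash)
    # fixed_stats describes the architecture; computed_stats maps each epoch
    # budget to a list of per-run metrics (three runs per budget).
    runs = computed_stats[108]
    mean_test_acc = sum(r['final_test_accuracy'] for r in runs) / len(runs)
    print(module_hash, fixed_stats['trainable_parameters'], mean_test_acc)
    break  # remove this to walk all 423,624 modules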

How the dataset was generated

The dataset generation code is provided for reference, but the dataset has already been fully generated.

The list of unique computation graphs evaluated in this dataset was generated via nasbench/scripts/generate_graphs.py. Each of these graphs was evaluated multiple times via nasbench/scripts/run_evaluation.py.
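
For intuition about what generate_graphs.py has to do, here is a toy illustration of exhaustive enumeration with deduplication on very small graphs. This is not the repository's algorithm (the real script deduplicates with an efficient canonical graph hash over labeled DAGs and also accounts for operation labels); it only conveys the enumerate-then-collapse idea.

# Toy sketch of enumerate-then-deduplicate, NOT the algorithm in
# nasbench/scripts/generate_graphs.py (which uses a canonical graph hash).
from itertools import permutations, product

def canonical(matrix):
    """Brute-force canonical form: lexicographically smallest relabeling."""
    n = len(matrix)
    return min(
        tuple(tuple(matrix[perm[i]][perm[j]] for j in range(n)) for i in range(n))
        for perm in permutations(range(n)))

n = 3  # toy size; the real search space goes up to 7 vertices
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
unique = set()
# Every upper-triangular 0/1 adjacency matrix on n vertices is a DAG.
for bits in product([0, 1], repeat=len(edges)):
    matrix = [[0] * n for _ in range(n)]
    for (i, j), bit in zip(edges, bits):
        matrix[i][j] = bit
    unique.add(canonical(matrix))
print('%d unique unlabeled DAGs on %d vertices' % (len(unique), n))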

How to run the unit tests

Unit tests are included for some of the algorithmically complex parts of the code. The tests can be run directly via Python. Example:

python nasbench/tests/model_builder_test.py

Disclaimer

This is not an official Google product.
