  • Stars: 2,634
  • Rank: 17,319 (Top 0.4%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 7 years ago
  • Updated: over 1 year ago

Repository Details

Neural Network Libraries

Neural Network Libraries is a deep learning framework that is intended to be used for research, development and production. We aim to have it running everywhere: desktop PCs, HPC clusters, embedded devices and production servers.

Installation

Installing Neural Network Libraries is easy:

pip install nnabla

This installs the CPU version of Neural Network Libraries. GPU acceleration can be added by installing the CUDA extension with the following command.

pip install nnabla-ext-cuda110

The above command is for CUDA Toolkit version 11.0.

The other supported CUDA packages are listed here.

CUDA versions 10.x, 9.x, and 8.x are no longer supported.
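
Once a CUDA extension package is installed, computation can be moved to the GPU by switching the default context. The snippet below is a minimal sketch using nnabla's extension-context API; the device ID '0' is an illustrative choice.

import nnabla as nn
from nnabla.ext_utils import get_extension_context

# Use the cuDNN-accelerated CUDA backend on GPU 0.
ctx = get_extension_context('cudnn', device_id='0')
nn.set_default_context(ctx)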

For more details, see the installation section of the documentation.

Building from Source

See Build Manuals.

Running on Docker

For details on running on Docker, see the installation section of the documentation.

Features

Easy, flexible and expressive

The Python API built on the Neural Network Libraries C++11 core gives you flexibility and productivity. For example, a two-layer neural network with classification loss can be defined in the following five lines of code (hyperparameters are enclosed in <>).

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable(<input_shape>)
t = nn.Variable(<target_shape>)
h = F.tanh(PF.affine(x, <hidden_size>, name='affine1'))
y = PF.affine(h, <target_size>, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))
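
As a concrete illustration, the placeholders above can be filled in with example shapes. The sizes below are illustrative choices, not prescribed by the library: a batch of 32 flattened 28x28 images classified into 10 classes.

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable((32, 784))  # input: batch of 32 flattened 28x28 images
t = nn.Variable((32, 1))    # target: one class index per sample
h = F.tanh(PF.affine(x, 256, name='affine1'))  # hidden size 256
y = PF.affine(h, 10, name='affine2')           # 10 output classes
loss = F.mean(F.softmax_cross_entropy(y, t))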

Training can then be done as follows:

import nnabla.solvers as S

# Create a solver (parameter updater)
solver = S.Adam(<solver_params>)
solver.set_parameters(nn.get_parameters())

# Training iteration
for n in range(<num_training_iterations>):
    # Setting data from any data source
    x.d = <set data>
    t.d = <set label>
    # Initialize gradients
    solver.zero_grad()
    # Forward and backward execution
    loss.forward()
    loss.backward()
    # Update parameters by computed gradients
    solver.update()
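
Continuing the concrete example above, a minimal end-to-end run on random toy data might look like the following sketch (the learning rate and iteration count are illustrative assumptions, and random arrays stand in for a real data source):

import numpy as np

# Create the Adam solver with an illustrative learning rate.
solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())

for n in range(100):
    # Feed random arrays in place of a real data iterator.
    x.d = np.random.randn(32, 784)
    t.d = np.random.randint(0, 10, size=(32, 1))
    solver.zero_grad()
    loss.forward()
    loss.backward()
    solver.update()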

The dynamic computation graph enables flexible runtime network construction. Neural Network Libraries supports both static and dynamic graph paradigms through the same API.

import numpy as np

x.d = <set data>
t.d = <set label>
drop_depth = np.random.rand(<num_stochastic_layers>) < <layer_drop_ratio>
with nn.auto_forward():
    h = F.relu(PF.convolution(x, <hidden_size>, (3, 3), pad=(1, 1), name='conv0'))
    for i in range(<num_stochastic_layers>):
        if drop_depth[i]:
            continue  # Stochastically drop a layer
        h2 = F.relu(PF.convolution(h, <hidden_size>, (3, 3), pad=(1, 1),
                                   name='conv%d' % (i + 1)))
        h = F.add2(h, h2)
    y = PF.affine(h, <target_size>, name='classification')
    loss = F.mean(F.softmax_cross_entropy(y, t))
# Backward computation (can also be done in dynamically executed graph)
loss.backward()
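
Under nn.auto_forward(), each function executes as soon as it is called, so intermediate results can be inspected immediately. A tiny self-contained sketch (the array sizes are illustrative):

import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.Variable.from_numpy_array(np.random.randn(4, 8))
with nn.auto_forward():
    h = F.relu(x)
# h.d is already populated; no explicit forward() call is needed.
print(h.d.shape)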

You can differentiate to any order with nn.grad.

import nnabla as nn
import nnabla.functions as F
import numpy as np

x = nn.Variable.from_numpy_array(np.random.randn(2, 2)).apply(need_grad=True)
x.grad.zero()
y = F.sin(x)

def grad(y, x, n=1):
    # Repeatedly apply nn.grad to build the n-th order gradient graph.
    dx = [y]
    for _ in range(n):
        dx = nn.grad([dx[0]], [x])
    return dx[0]

dnx = grad(y, x, n=10)
dnx.forward()
print(np.allclose(-np.sin(x.d), dnx.d))  # the 10th derivative of sin(x) is -sin(x)
dnx.backward()
print(np.allclose(-np.cos(x.d), x.g))  # one more backward gives the 11th: -cos(x)

# Show the registry status
from nnabla.backward_functions import show_registry
show_registry()

Command line utility

Neural Network Libraries provides the command line utility nnabla_cli for easier use of the library.

nnabla_cli provides the following functionality:

  • Training, evaluation, and inference with NNP files.
  • Dataset and parameter manipulation.
  • File format conversion (see the example below):
    • From ONNX to NNP and from NNP to ONNX.
    • From TensorFlow to NNP and from NNP to TensorFlow.
    • From NNP to TFLite.
    • From ONNX or NNP to NNB or C source code.
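
For example, converting a model from ONNX to NNP is a single command. The invocation below is a sketch; the file names are placeholders, and it assumes the file format converter and its dependencies are installed:

nnabla_cli convert input.onnx output.nnp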

For more details, see the documentation.

Portable and multi-platform

  • The Python API can be used on Linux and Windows.
  • Most of the library code is written in C++14 and is deployable to embedded devices.

Extensible

  • Easy to add new modules like neural network operators and optimizers
  • The library allows developers to add specialized implementations (e.g., for FPGA, ...). For example, we provide a CUDA backend as an extension, which speeds up computation on GPUs.

Efficient

  • High speed on a single CUDA GPU
  • Memory optimization engine
  • Multiple GPU support

Documentation

https://nnabla.readthedocs.org

Getting started

  • A number of Jupyter notebook tutorials can be found in the tutorial folder. We recommend starting with by_examples.ipynb for a first working example in Neural Network Libraries and python_api.ipynb for an introduction to the Neural Network Libraries API.

  • We also provide more sophisticated examples in the nnabla-examples repository.

  • C++ API examples are available in examples/cpp.

Contribution guide

Technology is progressing rapidly, and researchers and developers often want to add their own custom features to a deep learning framework. NNabla is well suited to this: the architecture of Neural Network Libraries is clean and quite simple, and new features can be added very easily with the help of our code template generating system. See the contribution guide for details.

License & Notice

Neural Network Libraries is provided under the Apache License, Version 2.0.

It also depends on some open source software packages. For more information, see LICENSES.

Citation

@misc{hayakawa2021neural,
      title={Neural Network Libraries: A Deep Learning Framework Designed from Engineers' Perspectives}, 
      author={Takuya Narihira and Javier Alonsogarcia and Fabien Cardinaux and Akio Hayakawa
              and Masato Ishii and Kazunori Iwaki and Thomas Kemp and Yoshiyuki Kobayashi
              and Lukas Mauch and Akira Nakamura and Yukio Obuchi and Andrew Shin and Kenji Suzuki
              and Stephen Tiedmann and Stefan Uhlich and Takuya Yashima and Kazuki Yoshiyama},
      year={2021},
      eprint={2102.06725},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

More Repositories

1. sonyflake: A distributed unique ID generator inspired by Twitter's Snowflake (Go, 3,484 stars)
2. gobreaker: Circuit Breaker implemented in Go (Go, 2,606 stars)
3. flutter-embedded-linux: Embedded Linux embedding for Flutter (C++, 995 stars)
4. flutter-elinux: Flutter tools for embedded Linux (eLinux) (Dart, 411 stars)
5. v8eval: Multi-language bindings to JavaScript engine V8 (C++, 399 stars)
6. ai-research-code (Python, 316 stars)
7. model_optimization: Model Compression Toolkit (MCT) is an open source project for neural network model optimization under efficient, constrained hardware. This project provides researchers, developers, and engineers advanced quantization and compression tools for deploying state-of-the-art neural networks. (Python, 295 stars)
8. nnabla-examples: Neural Network Libraries https://nnabla.org/ - Examples (Python, 280 stars)
9. easyhttpcpp: A cross-platform HTTP client library with a focus on usability and speed (C++, 152 stars)
10. sqvae: PyTorch implementation of the stochastically quantized variational autoencoder (SQ-VAE) (Python, 132 stars)
11. mapray-js: JavaScript library for interactive high quality 3D globes and maps in the browser (TypeScript, 118 stars)
12. nmos-cpp: An NMOS (Networked Media Open Specifications) Registry and Node in C++ (IS-04, IS-05) (C++, 113 stars)
13. nnabla-rl: Deep reinforcement learning library built on top of Neural Network Libraries (Python, 107 stars)
14. nnabla-ext-cuda: A CUDA Extension of Neural Network Libraries (Cuda, 89 stars)
15. DiffRoll: PyTorch implementation of DiffRoll, a diffusion-based generative automatic music transcription (AMT) model (Jupyter Notebook, 69 stars)
16. creativeai (CSS, 63 stars)
17. meta-flutter: Yocto recipes for Flutter Engine and custom embedders (BitBake, 61 stars)
18. FxNorm-automix: Implementation of automatic music mixing systems; shows how wet music data can be repurposed to train a fully automatic mixing system (Python, 51 stars)
19. appsync-client-go: AWS AppSync golang client library (Go, 46 stars)
20. nnabla-nas: Neural Architecture Search for Neural Network Libraries (Python, 44 stars)
21. flutter-elinux-plugins: Flutter plugins for embedded Linux (eLinux) (C++, 43 stars)
22. nnabla-c-runtime: Neural Network Libraries https://nnabla.org/ - C Runtime (C, 38 stars)
23. huis-ui-creator (JavaScript, 38 stars)
24. NDJIR: Neural Direct and Joint Inverse Rendering for Geometry, Lights, and Materials of Real Object (Python, 36 stars)
25. timbre-trap: Code for the paper "Timbre-Trap: A Low-Resource Framework for Instrument-Agnostic Music Transcription" (Python, 34 stars)
26. pyIEOE (Python, 31 stars)
27. nmos-js: An NMOS (Networked Media Open Specifications) Client in JavaScript (IS-04, IS-05) (JavaScript, 27 stars)
28. openocd-nuttx: Fork of OpenOCD with NuttX thread support (C, 24 stars)
29. CLIPSep (Python, 23 stars)
30. pdaf-library (C, 22 stars)
31. cdp-js: Libraries/SDK modules for multi-platform application development (TypeScript, 20 stars)
32. polar-densification (Python, 17 stars)
33. cordova-plugin-cdp-nativebridge (JavaScript, 16 stars)
34. audio-visual-seld-dcase2023: Baseline method for the audio-visual sound event localization and detection task of the DCASE 2023 challenge (Python, 16 stars)
35. generator-cordova-plugin-devbed (JavaScript, 14 stars)
36. nnc-plugin: Plugins for Neural Network Console (https://dl.sony.com/) (Python, 14 stars)
37. dolp-colorconstancy (Python, 11 stars)
38. typescript-fsa-redux-middleware: Fluent syntax for defining typesafe Redux vanilla middlewares on top of typescript-fsa (TypeScript, 9 stars)
39. cdn-purge-control-php: Multi CDN purge control library for PHP (PHP, 8 stars)
40. micro-notifier: Simplified Pusher clone (Go, 8 stars)
41. nnabla-browser: Visualization toolkit for Neural Network Libraries (TypeScript, 8 stars)
42. isren (JavaScript, 8 stars)
43. pixel-guided-diffusion: Fine-grained image editing by pixel-wise guidance using diffusion models (Python, 8 stars)
44. smarttennissensorsdk: The Smart Tennis Sensor plugs into the end of a tennis racket and records data about all the shots you make throughout a game or practice. With the SDK, you can develop apps for analyzing and presenting that data in real time. (Java, 8 stars)
45. cdp-cli: Command line tools for generating the start point of multi-platform application development (details: see the cdp-js repository) (HTML, 7 stars)
46. custom_layers (Python, 7 stars)
47. mct_quantizers (Python, 6 stars)
48. aibo-development-tutorial (6 stars)
49. smarttennissensormp4meta (Java, 4 stars)
50. fp-diffusion (Jupyter Notebook, 3 stars)
51. diffusion-timbre-transfer (Jupyter Notebook, 3 stars)
52. node-win-usbdev (C++, 3 stars)
53. evsCluster: Python scripts to process EVS (event-based vision sensor) data (Python, 3 stars)
54. Instruct3Dto3D-doc: Official documentation of Instruct 3D-to-3D (HTML, 2 stars)
55. nnabla-js (TypeScript, 1 star)
56. nnabla-doc (1 star)