• This repository was archived on 23 Nov 2022
• Stars: 392
• Rank: 109,735 (Top 3%)
• Language: Python
• License: Apache License 2.0
• Created: almost 6 years ago
• Updated: almost 2 years ago


Repository Details

A multi-user, distributed computing environment for running DL model training experiments on Intel® Xeon® Scalable processor-based systems

IMPORTANT:

Intel has decided to stop further development of Nauta and will no longer be supporting the product. We appreciate your involvement with the product.

Nauta

Nauta Diagram

See the docs at: https://intelai.github.io/nauta/

The Nauta software provides a multi-user, distributed computing environment for running deep learning model training experiments. Experiment results can be viewed and monitored through a command-line interface, a web UI, and/or TensorBoard*. You can use existing data sets, bring your own data, or download data from online sources, and create public or private folders to make collaboration among teams easier.

Nauta runs on the industry-leading Kubernetes* and Docker* platforms for scalability and ease of management. Template packs for various deep learning frameworks and tools are available (and customizable) on the platform; they remove the complexity of creating and running single-node and multi-node deep learning training experiments, without the systems overhead and scripting required by standard container environments.

To test your model, Nauta also supports both batch and streaming inference, all in a single platform.
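As an illustration, a typical Nauta workflow is driven through its `nctl` command-line client. The session below is a hypothetical sketch: the template name and exact flags are assumptions and are not verified against any specific Nauta release.

```shell
# Hypothetical nctl session; command names and flags are illustrative
# assumptions, not verified against an installed Nauta version.

# Submit a single-node training experiment from a local script,
# using an assumed template pack name:
nctl experiment submit --template tf-training my_train_script.py

# List experiments and their current status:
nctl experiment list

# Launch the web UI to monitor running and finished experiments:
nctl launch webui
```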

To build the Nauta installation package and run it smoothly on Google Cloud Platform, please follow our Nauta on Google Cloud Platform - Getting Started guide. More details on building Nauta artifacts can be found in the How to Build guide.

To get things up and running quickly, please take a look at our Getting Started guide.

For more in-depth information, please refer to the following documents:

License

By contributing to the project software, you agree that your contributions will be licensed under the Apache 2.0 license included in the LICENSE file in the root directory of this source tree. The user materials are licensed under CC-BY-ND 4.0.

Contact

Submit a GitHub issue to ask a question, make a request, or report a bug.

More Repositories

1. models - Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs (Python, 652 stars)
2. unet - U-Net Biomedical Image Segmentation (Jupyter Notebook, 289 stars)
3. cerl (Python, 72 stars)
4. vck - Volume Controller for Kubernetes (Go, 67 stars)
5. tools (Python, 58 stars)
6. inference-model-manager - Inference Model Manager for Kubernetes (Python, 46 stars)
7. intel-xai-tools - Explainable AI Tooling (XAI). XAI is used to discover and explain a model's predictions in a way that is interpretable to the user, exposing the relevant information in the dataset, feature set, and model's algorithms. (HTML, 34 stars)
8. experiments - Experiments API for experiment tracking on Kubernetes (Python, 27 stars)
9. nodus - Simulated large clusters for Kubernetes scheduler validation (Go, 15 stars)
10. openseismic - Open Seismic is an open-source toolbox for running inference on seismic data, using OpenVINO for inference (Python, 6 stars)
11. nnpi-card - GPL-based sources for the Intel NNP-I card (Makefile, 5 stars)
12. azure-applications - A pre-configured Azure Data Science Virtual Machine with CPU-optimized deep learning frameworks (Shell, 4 stars)
13. aws-sagemaker-marketplace - Example notebooks with instructions for using Intel AI Software listed in the AWS SageMaker Marketplace (Jupyter Notebook, 2 stars)
14. aikit-operator - AIKit Operator used to install an ImageStream in the cluster for JupyterHub notebooks within Red Hat OpenShift environments (Makefile, 2 stars)
15. nauta-zoo - Pack templates used by the Nauta (https://github.com/IntelAI/nauta) system (Python, 2 stars)
16. forking-tuner - A forking tuner for the TensorFlow threading configuration (Python, 2 stars)