• This repository was archived on 15/Nov/2022
• Stars: 251
• Rank: 156,395 (Top 4 %)
• Language: Jupyter Notebook
• License: BSD 3-Clause "New" or "Revised" License
• Created: over 4 years ago
• Updated: over 1 year ago

Repository Details

[Prototype] Tools for the concurrent manipulation of variably sized Tensors.

BIG UPDATE: NestedTensor in core!

March 15, 2022

We recently landed a minimal version of NestedTensor in core PyTorch! Additional operator coverage and migration of features are possible, but must be backed by issues (feature requests). If you need specific NestedTensor operators, please open a feature request on pytorch/pytorch. To make your submission more impactful, please include your motivation, your use case and a list of the operators you need.
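
For reference, here is a minimal sketch of the in-core API; this assumes a recent PyTorch release that ships the torch.nested namespace and is separate from the prototype package documented below:

import torch

# The in-core counterpart constructs a NestedTensor from variably sized Tensors.
nt = torch.nested.nested_tensor([torch.randn(10, 5), torch.randn(5, 5)])
print(nt.is_nested, nt.dim())                     # True 3
padded = torch.nested.to_padded_tensor(nt, 0.0)   # back to a regular (2, 10, 5) Tensor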

The nestedtensor package prototype

If you are here because you ran into a runtime error due to a missing feature or some kind of bug, please open an issue and fill in the appropriate template. If you have general feedback about this prototype, you can use our suggested template or simply open a free-form issue. Thank you for contributing to this project!

Tutorials

If you are new to this project, we recommend you take a look at our whirlwind introduction to get started.

Autograd support

Due to missing extensibility features in PyTorch, nestedtensor currently lacks autograd support. We're actively working on this and recognize that it severely limits the applicability of the project. In the meantime, please run nestedtensor operations within the inference mode context to prevent adverse interactions with the autograd system.

For example:

import torch
import nestedtensor

# Three variably sized "sentences", each with 5 features.
sentences = [torch.randn(10, 5), torch.randn(5, 5), torch.randn(9, 5)]
with torch.inference_mode():
    nt = nestedtensor.nested_tensor(sentences)
    nt.sum(1)

Binaries

Because of PyTorch's development velocity, the nestedtensor project is built on top of, and depends on, a fixed, recent PyTorch nightly.

Version   Python   CUDA        Wheels
0.1.1     3.6      CPU-only    nestedtensor
0.1.1     3.7      CPU-only    nestedtensor
0.1.1     3.8      CPU-only    nestedtensor
0.1.1     3.6      CUDA 10.2   nestedtensor
0.1.1     3.7      CUDA 10.2   nestedtensor
0.1.1     3.8      CUDA 10.2   nestedtensor

When installing a binary, please pass the corresponding torch nightly link archive (the -f argument in the commands below) so that the correct PyTorch nightly is pulled in automatically.

CPU

pip install https://download.pytorch.org/nestedtensor/whl/nightly/cpu/py3.7/nestedtensor-0.1.1_cpu-cp37-cp37m-linux_x86_64.whl -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html

CUDA 10.2

pip install https://download.pytorch.org/nestedtensor/whl/nightly/cu102/py3.7/nestedtensor-0.1.1_cu102-cp37-cp37m-linux_x86_64.whl -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html

Why consider using this? / Dealing with dynamic shapes

In general we batch data for efficiency, but batched kernels usually need, or at least greatly benefit from, regular, statically shaped data.

One way of dealing with dynamic shapes, then, is padding and masking. Various projects construct masks that, together with a data Tensor, serve as a representation for a list of dynamically shaped Tensors.

Obviously this is inefficient from a memory and compute perspective if the Tensors within the list are sufficiently diverse in shape.

You can also trace through the codebases where these masks are used and observe the kind of code this approach often leads to. See, for example, universal_sentence_embedding.
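
For illustration, here is a minimal sketch of the padding-and-masking pattern in plain PyTorch (the data and the pooling choice are hypothetical, not taken from the linked code):

import torch

# Three variably sized "sentences", each with 5 features.
sentences = [torch.randn(10, 5), torch.randn(5, 5), torch.randn(9, 5)]

# Pad to a common length and build a boolean mask marking the real entries.
padded = torch.nn.utils.rnn.pad_sequence(sentences, batch_first=True)  # (3, 10, 5)
lengths = torch.tensor([s.size(0) for s in sentences])
mask = torch.arange(padded.size(1))[None, :] < lengths[:, None]        # (3, 10)

# Masked mean over the variable-length dimension; the mask has to be threaded
# through every subsequent operation that touches the padded entries.
summed = (padded * mask.unsqueeze(-1)).sum(1)
means = summed / lengths.unsqueeze(-1).to(summed.dtype)                # (3, 5)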

PyTorch also has one-off operator support that aims to handle dynamic shapes via extra arguments such as a padding index. While these functions are fast and sometimes memory efficient, they don't provide a consistent interface.
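
One such operator, shown here purely as an illustration, is nn.Embedding with its padding_idx argument:

import torch

# padding_idx=0 makes row 0 a fixed zero vector that receives no gradient,
# which handles ragged token sequences for this one operator only.
emb = torch.nn.Embedding(num_embeddings=100, embedding_dim=8, padding_idx=0)
tokens = torch.tensor([[5, 7, 2, 0, 0],    # two sequences padded with index 0
                       [9, 3, 0, 0, 0]])
out = emb(tokens)                          # (2, 5, 8); padded positions map to zeros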

Other users simply gave up and started writing for-loops, or discovered that batching didn't help.
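
The for-loop fallback looks roughly like this (a sketch over the same hypothetical data as above):

import torch

sentences = [torch.randn(10, 5), torch.randn(5, 5), torch.randn(9, 5)]
# Correct and readable, but processes one Tensor at a time and gives up on batching.
means = torch.stack([s.mean(0) for s in sentences])   # (3, 5)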

We want a single abstraction that is consistent, fast, memory efficient and readable, and the nestedtensor project aims to provide exactly that.

How does nestedtensor help here?

NestedTensors are a generalization of torch Tensors that eases working with data of varying shapes and lengths. In a nutshell, Tensors have scalar entries (e.g. floats), whereas NestedTensors have Tensor entries. Note, however, that a NestedTensor is still a Tensor: it has a single dimension, a single dtype, a single device and a single layout.

Tensor entry constraints (see the sketch below):

  • Each constituent Tensor has the dtype, layout and device of the containing NestedTensor.
  • The dimension of a constituent Tensor must be less than the dimension of the NestedTensor.
  • An empty NestedTensor is of dimension zero.
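
A minimal sketch of these constraints using the prototype API (run under inference mode as noted above; this assumes the prototype mirrors the usual Tensor accessors such as dim(), dtype and device):

import torch
import nestedtensor

with torch.inference_mode():
    nt = nestedtensor.nested_tensor([torch.randn(10, 5), torch.randn(5, 5)])
    # One dtype, device and layout for the whole NestedTensor; the 2-dim
    # constituents make nt itself 3-dimensional.
    print(nt.dim(), nt.dtype, nt.device)   # 3 torch.float32 cpu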

Prototype classification

The nestedtensor package is a prototype intended for early-stage feedback and testing. It is on the road to a beta classification, but there is no definitive timeline yet. See PyTorch feature classification for what prototype, beta and stable mean.

Dependencies

  • pytorch (installed from nestedtensor/third_party/pytorch submodule)
  • torchvision (needed for examples and tests)
  • ipython (needed for examples)
  • notebook (needed for examples)

Contribution

The project is under active development. If you have a suggestion or found a bug, please file an issue!
