  • Stars: 370
  • Rank: 111,001 (Top 3%)
  • Language: Smarty
  • License: Apache License 2.0
  • Created: over 5 years ago
  • Updated: about 1 year ago


Repository Details

Guide for building custom op for TensorFlow

TensorFlow Custom Op

This is a guide for users who want to write a custom C++ op for TensorFlow and distribute it as a pip package. This repository serves both as a working example of the op building and packaging process and as a template/starting point for writing your own ops. The way this repository is set up allows you to build your custom ops against TensorFlow's pip package instead of building TensorFlow from scratch. This guarantees that the shared library you build will be binary compatible with TensorFlow's pip packages.

This guide currently supports Ubuntu and Windows custom ops, and it includes examples for both CPU and GPU ops.

Starting from Aug 1, 2019, the nightly previews tf-nightly and tf-nightly-gpu, as well as the official releases tensorflow and tensorflow-gpu past version 1.14.0, are built with a different environment (Ubuntu 16.04 instead of Ubuntu 14.04, for example) as part of our effort to make TensorFlow's pip packages manylinux2010 compatible. To help you build custom ops on Linux, we provide our toolchain as a combination of a Docker image and Bazel configurations. Please check the table below for the Docker image name needed to build your custom ops.

                 CPU custom op                 GPU custom op
  TF nightly     nightly-custom-op-ubuntu16    nightly-custom-op-gpu-ubuntu16
  TF >= 2.3      2.3.0-custom-op-ubuntu16      2.3.0-custom-op-gpu-ubuntu16
  TF 1.15, 2.0   custom-op-ubuntu16-cuda10.0   custom-op-gpu-ubuntu16
  TF <= 1.14     custom-op-ubuntu14            custom-op-gpu-ubuntu14

Note: all of the above Docker images have the prefix tensorflow/tensorflow:

The bazel configurations are included as part of this repository.

Build Example zero_out Op (CPU only)

If you want to try out the process of building a pip package for a custom op, you can use the source code from this repository by following the instructions below.

For Windows Users

You can skip this section if you are not building on Windows. If you are building custom ops for the Windows platform, you will need a setup similar to the one for building TensorFlow from source mentioned here, and you can skip all the Docker steps in the instructions below. The bazel commands to build and test custom ops stay the same.

Setup Docker Container

You are going to build the op inside a Docker container. Pull the provided Docker image from TensorFlow's Docker hub and start a container.

Use the following command if the TensorFlow pip package you are building against is not yet manylinux2010 compatible:

  docker pull tensorflow/tensorflow:custom-op-ubuntu14
  docker run -it tensorflow/tensorflow:custom-op-ubuntu14 /bin/bash

Use the following instead if it is manylinux2010 compatible:

  docker pull tensorflow/tensorflow:custom-op-ubuntu16
  docker run -it tensorflow/tensorflow:custom-op-ubuntu16 /bin/bash

Inside the Docker container, clone this repository. The code in this repository came from the Adding an op guide.

git clone https://github.com/tensorflow/custom-op.git
cd custom-op

Build PIP Package

You can build the pip package with either Bazel or make.

With bazel:

  ./configure.sh
  bazel build build_pip_pkg
  bazel-bin/build_pip_pkg artifacts

With Makefile:

  make zero_out_pip_pkg

Install and Test PIP Package

Once the pip package has been built, you can install it with:

pip3 install artifacts/*.whl

Then test out the pip package:

cd ..
python3 -c "import tensorflow as tf;import tensorflow_zero_out;print(tensorflow_zero_out.zero_out([[1,2], [3,4]]))"

You should see that the op zeroed out all input elements except the first one:

[[1 0]
 [0 0]]
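For intuition, the semantics of zero_out can be sketched in plain Python (a hypothetical pure-Python equivalent for illustration only; the real op is a C++ kernel):

```python
def zero_out(matrix):
    """Return a copy of a 2-D list with every element zeroed except the first.

    Pure-Python sketch of the zero_out semantics; not the actual C++ kernel.
    """
    result = [[0] * len(row) for row in matrix]
    result[0][0] = matrix[0][0]
    return result

print(zero_out([[1, 2], [3, 4]]))  # → [[1, 0], [0, 0]]
```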

Create and Distribute Custom Ops

Now you are ready to write and distribute your own ops. The example in this repository has done the boilerplate work of setting up the build system and package files needed for creating a pip package. We recommend using this repository as a template.

Template Overview

First let's go through a quick overview of the folder structure of this template repository.

├── gpu  # Set up crosstool and CUDA libraries for Nvidia GPU, only needed for GPU ops
│   ├── crosstool/
│   ├── cuda/
│   ├── BUILD
│   └── cuda_configure.bzl
│
├── tensorflow_zero_out  # A CPU only op
│   ├── cc
│   │   ├── kernels  # op kernel implementation
│   │   │   └── zero_out_kernels.cc
│   │   └── ops  # op interface definition
│   │       └── zero_out_ops.cc
│   ├── python
│   │   ├── ops
│   │   │   ├── __init__.py
│   │   │   ├── zero_out_ops.py   # Load and extend the ops in python
│   │   │   └── zero_out_ops_test.py  # tests for ops
│   │   └── __init__.py
│   │
│   ├── BUILD  # BUILD file for all op targets
│   └── __init__.py  # top level __init__ file that imports the custom op
│
├── tensorflow_time_two  # A GPU op
│   ├── cc
│   │   ├── kernels  # op kernel implementation
│   │   │   ├── time_two.h
│   │   │   ├── time_two_kernels.cc
│   │   │   └── time_two_kernels.cu.cc  # GPU kernel
│   │   └── ops  # op interface definition
│   │       └── time_two_ops.cc
│   ├── python
│   │   ├── ops
│   │   │   ├── __init__.py
│   │   │   ├── time_two_ops.py   # Load and extend the ops in python
│   │   │   └── time_two_ops_test.py  # tests for ops
│   │   └── __init__.py
│   │
│   ├── BUILD  # BUILD file for all op targets
│   └── __init__.py  # top level __init__ file that imports the custom op
│
├── tf  # Set up TensorFlow pip package as external dependency for Bazel
│   ├── BUILD
│   ├── BUILD.tpl
│   └── tf_configure.bzl
│
├── BUILD  # top level Bazel BUILD file that contains pip package build target
├── build_pip_pkg.sh  # script to build pip package for Bazel and Makefile
├── configure.sh  # script to install TensorFlow and setup action_env for Bazel
├── LICENSE
├── Makefile  # Makefile for building shared library and pip package
├── setup.py  # file for creating pip package
├── MANIFEST.in  # files for creating pip package
├── README.md
└── WORKSPACE  # Used by Bazel to specify tensorflow pip package as an external dependency

The op implementation, including both the C++ and Python code, goes under the tensorflow_zero_out directory for CPU only ops, or the tensorflow_time_two directory for GPU ops. You will want to replace either directory with the corresponding content of your own ops. The tf folder contains the code for setting up the TensorFlow pip package as an external dependency for Bazel only. You shouldn't need to change the content of this folder, and you don't need it at all if you are using another build system, such as Makefile. The gpu folder contains the code for setting up the CUDA libraries and toolchain; you only need it if you are writing a GPU op and using Bazel. To build a pip package for your op, you will also need to update a few files at the top level of the template, for example setup.py, MANIFEST.in and build_pip_pkg.sh.

Setup

First, clone this template repo.

git clone https://github.com/tensorflow/custom-op.git my_op
cd my_op

Docker

Next, set up a Docker container, using the provided Docker image, for building and testing the ops. We provide two sets of Docker images for the different versions of pip packages. If the pip package you are building against was released before Aug 1, 2019 and has the manylinux1 tag, please use the Docker images tensorflow/tensorflow:custom-op-ubuntu14 and tensorflow/tensorflow:custom-op-gpu-ubuntu14, which are based on Ubuntu 14.04. Otherwise, for the newer manylinux2010 packages, please use the Docker images tensorflow/tensorflow:custom-op-ubuntu16 and tensorflow/tensorflow:custom-op-gpu-ubuntu16 instead. All Docker images come with Bazel pre-installed, as well as the corresponding toolchain used for building the released TensorFlow packages. We have seen many cases where dependency version differences and ABI incompatibilities caused custom op extensions users built to not work properly with TensorFlow's released pip packages. Therefore, it is highly recommended to use the provided Docker images to build your custom op. To get the CPU Docker image, run one of the following commands based on which pip package you are building against:

# For pip packages labeled manylinux1
docker pull tensorflow/tensorflow:custom-op-ubuntu14

# For manylinux2010
docker pull tensorflow/tensorflow:custom-op-ubuntu16

For GPU, run

# For pip packages labeled manylinux1
docker pull tensorflow/tensorflow:custom-op-gpu-ubuntu14

# For manylinux2010
docker pull tensorflow/tensorflow:custom-op-gpu-ubuntu16

You might want to use Docker volumes to map a work_dir from the host to the container, so that you can edit files on the host and build with the latest changes in the Docker container. To do so, run the following for CPU:

# For pip packages labeled manylinux1
docker run -it -v ${PWD}:/working_dir -w /working_dir  tensorflow/tensorflow:custom-op-ubuntu14

# For manylinux2010
docker run -it -v ${PWD}:/working_dir -w /working_dir  tensorflow/tensorflow:custom-op-ubuntu16

For GPU, you want to use nvidia-docker:

# For pip packages labeled manylinux1
docker run --runtime=nvidia --privileged  -it -v ${PWD}:/working_dir -w /working_dir  tensorflow/tensorflow:custom-op-gpu-ubuntu14

# For manylinux2010
docker run --runtime=nvidia --privileged  -it -v ${PWD}:/working_dir -w /working_dir  tensorflow/tensorflow:custom-op-gpu-ubuntu16

Run configure.sh

As a last step before implementing the ops, you want to set up the build environment. The custom ops will need to depend on the TensorFlow headers and the shared library libtensorflow_framework.so, which are distributed with the official TensorFlow pip package. If you would like to use Bazel to build your ops, you will also want to set a few action_envs so that Bazel can find the installed TensorFlow. We provide a configure script that does this for you. Simply run ./configure.sh in the Docker container and you are good to go.
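The effect of the script is essentially to record where the installed TensorFlow lives as Bazel action_envs. A sketch of the kind of .bazelrc lines it produces (the variable names follow the template's tf_configure.bzl convention; the paths here are illustrative, not actual values):

```
# Illustrative .bazelrc fragment; real paths come from the installed TensorFlow.
build --action_env TF_HEADER_DIR="/usr/local/lib/python3.6/dist-packages/tensorflow/include"
build --action_env TF_SHARED_LIBRARY_DIR="/usr/local/lib/python3.6/dist-packages/tensorflow"
build --action_env TF_SHARED_LIBRARY_NAME="libtensorflow_framework.so.1"
```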

Add Op Implementation

Now you are ready to implement your op. Following the instructions at Adding a New Op, add the definition of your op interface under <your_op>/cc/ops/ and the kernel implementation under <your_op>/cc/kernels/.

Build and Test CPU Op

Bazel

To build the custom op shared library with Bazel, follow the cc_binary example in tensorflow_zero_out/BUILD. You will need to depend on the header files and libtensorflow_framework.so from the TensorFlow pip package to build your op. Earlier we mentioned that the template has already set up the TensorFlow pip package as an external dependency in the tf directory, and the pip package is listed as local_config_tf in the WORKSPACE file. Your op can depend directly on the TensorFlow header files and libtensorflow_framework.so with the following:

    deps = [
        "@local_config_tf//:libtensorflow_framework",
        "@local_config_tf//:tf_header_lib",
    ],

You will need to keep both of the above dependencies for your op. To build the shared library with Bazel, run the following command in your Docker container:
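Put together, the target looks roughly like this (a sketch based on the cc_binary example in tensorflow_zero_out/BUILD; the exact srcs and copts in the template may differ):

```starlark
cc_binary(
    name = "python/ops/_zero_out_ops.so",
    srcs = [
        "cc/kernels/zero_out_kernels.cc",
        "cc/ops/zero_out_ops.cc",
    ],
    linkshared = 1,
    deps = [
        "@local_config_tf//:libtensorflow_framework",
        "@local_config_tf//:tf_header_lib",
    ],
)
```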

bazel build tensorflow_zero_out:python/ops/_zero_out_ops.so

Makefile

To build the custom op shared library with make, follow the example in the Makefile for _zero_out_ops.so and run the following command in your Docker container:

make op
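The shared-library rule in such a Makefile boils down to roughly the following (a hedged sketch, not the template's exact Makefile; it assumes the compile and link flags are queried from TensorFlow's tf.sysconfig API):

```make
# Sketch of a shared-library rule; tf.sysconfig is TensorFlow's real API for
# these flags, while the target and source names here mirror the template.
TF_CFLAGS := $(shell python3 -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))')
TF_LFLAGS := $(shell python3 -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))')

ZERO_OUT_SRCS = tensorflow_zero_out/cc/ops/zero_out_ops.cc \
                tensorflow_zero_out/cc/kernels/zero_out_kernels.cc

tensorflow_zero_out/python/ops/_zero_out_ops.so: $(ZERO_OUT_SRCS)
	g++ -std=c++11 -shared $^ -o $@ -fPIC $(TF_CFLAGS) $(TF_LFLAGS) -O2
```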

Extend and Test the Op in Python

Once you have built your custom op shared library, you can follow the example in tensorflow_zero_out/python/ops, and instructions here to create a module in Python for your op. Both guides use TensorFlow API tf.load_op_library, which loads the shared library and registers the ops with the TensorFlow framework.

from tensorflow.python.framework import load_library
from tensorflow.python.platform import resource_loader

_zero_out_ops = load_library.load_op_library(
    resource_loader.get_path_to_datafile('_zero_out_ops.so'))
zero_out = _zero_out_ops.zero_out

You can also add Python tests like what we have done in tensorflow_zero_out/python/ops/zero_out_ops_test.py to check that your op is working as intended.

Run Tests with Bazel

To add the python library and test targets to Bazel, please follow the examples for the py_library target tensorflow_zero_out:zero_out_ops_py and the py_test target tensorflow_zero_out:zero_out_ops_py_test in the tensorflow_zero_out/BUILD file. To run your test with Bazel, run the following in the Docker container:

bazel test tensorflow_zero_out:zero_out_ops_py_test

Run Tests with Make

To add the test target to make, please follow the example in the Makefile. To run your python test, simply run the following in the Docker container:

make test_zero_out

Build and Test GPU Op

Bazel

To build the custom GPU op shared library with Bazel, follow the cc_binary example in tensorflow_time_two/BUILD. Similar to CPU custom ops, you can directly depend on the TensorFlow header files and libtensorflow_framework.so with the following:

    deps = [
        "@local_config_tf//:libtensorflow_framework",
        "@local_config_tf//:tf_header_lib",
    ],

Additionally, when you ran configure inside the GPU container, config=cuda was set for the bazel command, which automatically includes the CUDA shared library and CUDA headers as dependencies for the GPU version of the op only: if_cuda_is_configured([":cuda", "@local_config_cuda//cuda:cuda_headers"]).
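Combined with the TensorFlow dependencies, the deps attribute of the GPU target then looks roughly like this (a sketch; the surrounding target and the way the template loads if_cuda_is_configured may differ):

```starlark
deps = [
    "@local_config_tf//:libtensorflow_framework",
    "@local_config_tf//:tf_header_lib",
] + if_cuda_is_configured([
    ":cuda",
    "@local_config_cuda//cuda:cuda_headers",
]),
```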

To build the shared library with Bazel, run the following command in your Docker container

bazel build tensorflow_time_two:python/ops/_time_two_ops.so

Makefile

To build the custom op shared library with make, follow the example in Makefile for _time_two_ops.so and run the following command in your Docker container:

make time_two_op

Extend and Test the Op in Python

Once you have built your custom op shared library, you can follow the example in tensorflow_time_two/python/ops, and instructions here to create a module in Python for your op. This part is the same as CPU custom op as shown above.

Run Tests with Bazel

Similar to the CPU custom op, to run your test with bazel, run the following in the Docker container:

bazel test tensorflow_time_two:time_two_ops_py_test

Run Tests with Make

To add the test target to make, please follow the example in the Makefile. To run your python test, simply run the following in the Docker container:

make time_two_test

Build PIP Package

Now that your op works, you might want to build a pip package for it so that the community can also benefit from your work. This template provides the basic setup needed to build your pip package. First, you will need to update the following top level files based on your op.

  • setup.py contains information about your package (such as the name and version) as well as which code files to include.
  • MANIFEST.in contains the list of additional files you want to include in the source distribution. Here you want to make sure the shared library for your custom op is included in the pip package.
  • build_pip_pkg.sh creates the package hierarchy, and calls bdist_wheel to assemble your pip package.
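As an illustration, a minimal setup.py for a renamed op package might look like the following (a hypothetical sketch using standard setuptools keywords; the package name, version, and data patterns are placeholders, not the template's actual values):

```python
# Hypothetical minimal setup.py sketch; names and version are placeholders.
from setuptools import find_packages, setup

setup(
    name="tensorflow-my-op",   # placeholder package name
    version="0.0.1",           # placeholder version
    packages=find_packages(),
    # Ship the compiled shared library for the custom op inside the wheel.
    package_data={"": ["*.so"]},
    description="Custom TensorFlow op packaged as a pip wheel",
)
```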

You can use either Bazel or Makefile to build the pip package.

Build with Bazel

You can find the target for the pip package in the top level BUILD file. Inside the data list of this build_pip_pkg target, you will want to include the python library target //tensorflow_zero_out:zero_out_py in addition to the top level files. To build the pip package builder, run the following command in the Docker container:

bazel build :build_pip_pkg

The bazel build command creates a binary named build_pip_pkg, which you can use to build the pip package. For example, the following builds your .whl package in the artifacts directory:

bazel-bin/build_pip_pkg artifacts

Build with make

Building with make also invokes the same build_pip_pkg.sh script. Run:

make pip_pkg

Test PIP Package

Before publishing your pip package, test it:

pip3 install artifacts/*.whl
python3 -c "import tensorflow as tf;import tensorflow_zero_out;print(tensorflow_zero_out.zero_out([[1,2], [3,4]]))"

Publish PIP Package

Once your pip package has been thoroughly tested, you can distribute it by uploading it to the Python Package Index. Please follow the official instructions from PyPI.

FAQ

Here are some issues our users have run into, along with possible solutions. Feel free to send us a PR to add more entries.

Q: Do I need both the toolchain and the Docker image?
A: Yes, you will need both to get the same setup we use to build TensorFlow's official pip package.

Q: How do I also create a manylinux2010 binary?
A: You can use auditwheel version 2.0.0 or newer.

Q: What do I do if auditwheel fails with ValueError: Cannot repair wheel, because required library "libtensorflow_framework.so.1" could not be located (or the same error for libtensorflow_framework.so.2)?
A: Please see this related issue.

Q: What do I do if compiling a GPU kernel fails with the following error?

  In file included from tensorflow_time_two/cc/kernels/time_two_kernels.cu.cc:21:0:
  /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/util/gpu_kernel_helper.h:22:10: fatal error: third_party/gpus/cuda/include/cuda_fp16.h: No such file or directory

A: Copy the CUDA header files to the target directory:

  mkdir -p /usr/local/lib/python3.6/dist-packages/tensorflow/include/third_party/gpus/cuda/include && cp -r /usr/local/cuda/targets/x86_64-linux/include/* /usr/local/lib/python3.6/dist-packages/tensorflow/include/third_party/gpus/cuda/include
