Additional utilities and helpers to extend TensorFlow when building recommendation systems, contributed and maintained by SIG Recommenders.

TensorFlow Recommenders Addons



TensorFlow Recommenders Addons (TFRA) is a collection of projects related to large-scale recommendation systems built on TensorFlow. TFRA introduces dynamic embedding technology to TensorFlow, which makes TensorFlow more suitable for training Search, Recommendation, and Advertising models and makes it easy to build, evaluate, and serve sophisticated recommender models. See the approved TensorFlow RFC #313. These contributions are complementary to TensorFlow Core, TensorFlow Recommenders, etc.

For Apple silicon (M1), please refer to Apple Silicon Support.

Main Features

  • Makes key-value data structures (dynamic embeddings) trainable in TensorFlow
  • Achieves better recommendation quality than the static embedding mechanism, with no hash conflicts
  • Compatible with all native TensorFlow optimizers and initializers
  • Compatible with the native TensorFlow CheckPoint and SavedModel formats
  • Fully supports training and inference of recommender models on GPUs
  • Supports TF Serving and Triton Inference Server as inference frameworks
  • Supports a variety of key-value implementations as dynamic embedding storage, and is easy to extend
  • Supports half-synchronous training based on Horovod
    • Synchronous training for dense weights
    • Asynchronous training for sparse weights

Subpackages

Contributors

TensorFlow Recommenders-Addons depends on public contributions, bug fixes, and documentation. This project exists thanks to all the people and organizations who contribute. [Contribute]



A special thanks to the NVIDIA Merlin Team and the NVIDIA China DevTech Team, who have provided GPU acceleration technology support and code contributions.

Tutorials & Demos

See the tutorials and demos for end-to-end examples of each subpackage.

Installation

Stable Builds

TensorFlow Recommenders-Addons is available on PyPI for Linux and macOS. To install the latest version, run the following:

pip install tensorflow-recommenders-addons

By default, the CPU version will be installed. To install the GPU version, run the following:

pip install tensorflow-recommenders-addons-gpu

To use TensorFlow Recommenders-Addons:

import tensorflow as tf
import tensorflow_recommenders_addons as tfra
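
As a quick sanity check, here is a minimal sketch of a dynamic embedding lookup using the tfra.dynamic_embedding APIs described later in this README. The table name, dimension, and ids are illustrative only, and exact behavior can vary across TFRA and TensorFlow versions:

import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# Create a trainable dynamic embedding table (illustrative parameters).
embeddings = tfra.dynamic_embedding.get_variable(
    name="user_embeddings",
    key_dtype=tf.int64,
    value_dtype=tf.float32,
    dim=16,
    initializer=tf.keras.initializers.RandomNormal(stddev=0.1),
)

# Look up a batch of sparse feature ids; missing ids fall back to the initializer.
ids = tf.constant([101, 202, 303], dtype=tf.int64)
vectors = tfra.dynamic_embedding.embedding_lookup(embeddings, ids, name="lookup")
print(vectors.shape)  # (3, 16)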

Compatibility with Tensorflow

TensorFlow's C++ APIs are not stable, so we can only guarantee compatibility with the version of TensorFlow that TensorFlow Recommenders-Addons (TFRA) was built against. TFRA may work with multiple versions of TensorFlow, but there is also a chance of segmentation faults or other problematic crashes. Warnings will be emitted if your TensorFlow version does not match the one TFRA was built against.

Additionally, TFRA custom op registration does not have a stable ABI, so users need a TensorFlow installation that is binary-compatible with the one TFRA was built against, even if the versions match. In short, TFRA custom ops will work with pip-installed TensorFlow but may have issues when TensorFlow is compiled differently; a typical example is conda-installed TensorFlow. RFC #133 aims to fix this.
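
A quick way to find which row of the matrix below applies to your environment is to print the installed versions (a trivial sketch; exposing the package version as tfra.__version__ is an assumption):

import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# Compare these against the Compatibility Matrix below.
print("TensorFlow:", tf.__version__)
print("TFRA:", tfra.__version__)  # assumed attribute; check your installed package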

Compatibility Matrix

GPU is supported by version 0.2.0 and later.

TFRA TensorFlow Compiler CUDA cuDNN Compute Capability CPU Arch
0.6.0 2.8.3 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 x86
0.6.0 2.6.0 Xcode 13.1 - - - Apple M1
0.5.1 2.8.3 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 x86
0.5.1 2.6.0 Xcode 13.1 - - - Apple M1
0.5.0 2.8.3 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 x86
0.5.0 2.6.0 Xcode 13.1 - - - Apple M1
0.4.0 2.5.1 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 x86
0.4.0 2.5.0 Xcode 13.1 - - - Apple M1
0.3.1 2.5.1 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 x86
0.2.0 2.4.1 GCC 7.3.1 11.0 8.0 6.0, 6.1, 7.0, 7.5, 8.0 x86
0.2.0 1.15.2 GCC 7.3.1 10.0 7.6 6.0, 6.1, 7.0, 7.5 x86
0.1.0 2.4.1 GCC 7.3.1 - - - x86

Check nvidia-support-matrix for more details.

NOTICE

  • The release packages have a strict version binding relationship with TensorFlow.
  • Due to significant changes in the TensorFlow API, we can only ensure that version 0.2.0 is compatible with TF 1.15.2 on CPU & GPU. There is no official release for this combination; you can only get it by compiling it yourself as follows:
PY_VERSION="3.7" \
TF_VERSION="1.15.2" \
TF_NEED_CUDA=1 \
sh .github/workflows/make_wheel_Linux_x86.sh

# .whl file will be created in ./wheelhouse/
  • If you need to work with TensorFlow 1.14.x or an older version, we suggest you give up, but maybe this doc can help you: Extract headers from TensorFlow compiling directory. At the same time, we find that some OPs used by TFRA perform better on newer TensorFlow, so we highly recommend you upgrade TensorFlow to 2.x.

Installing from Source

For all developers, we recommend using the development Docker containers, which are all GPU-enabled:

docker pull tfra/dev_container:latest-python3.8  # "3.7" and "3.9" are also available.
docker run --privileged --gpus all -it --rm -v $(pwd):$(pwd) tfra/dev_container:latest-3.8

CPU Only

You can also install from source. This requires the Bazel build system (version 5.1.1). Please install TensorFlow on the machine used for compilation; the build needs to know the installed TensorFlow version and locate its headers.

export TF_VERSION="2.8.3"  # "2.6.3" are well tested.
pip install tensorflow[-gpu]==$TF_VERSION

git clone https://github.com/tensorflow/recommenders-addons.git
cd recommenders-addons

# This script links project with TensorFlow dependency
python configure.py

bazel build --enable_runfiles build_pip_pkg
bazel-bin/build_pip_pkg artifacts

pip install artifacts/tensorflow_recommenders_addons-*.whl

GPU Support

Only TF_NEED_CUDA=1 is required; the other environment variables are optional:

export TF_VERSION="2.8.3"  # "2.6.3" is well tested.
export PY_VERSION="3.8" 
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=11.2
export TF_CUDNN_VERSION=8.1
export CUDA_TOOLKIT_PATH="/usr/local/cuda"
export CUDNN_INSTALL_PATH="/usr/lib/x86_64-linux-gnu"

python configure.py

And then build the pip package and install:

bazel build --enable_runfiles build_pip_pkg
bazel-bin/build_pip_pkg artifacts
pip install artifacts/tensorflow_recommenders_addons_gpu-*.whl

Apple Silicon Support

Requirements:

  • macOS 12.0.0+
  • Python 3.8 or 3.9
  • tensorflow-macos 2.6.0
  • bazel 4.1.0+

The natively supported TensorFlow is maintained by Apple. Please follow the instructions in Get started with tensorflow-metal to install TensorFlow on Apple silicon devices.

# Install TensorFlow macOS dependencies
conda install -c apple tensorflow-deps==2.6.0

# Install base TensorFlow
python -m pip install tensorflow-macos==2.6.0

If you run into any issues installing tensorflow-macos, please ask for help on the Apple Developer Forums: tensorflow-metal.

Install TFRA on Apple Silicon via PIP

python -m pip install tensorflow-recommenders-addons --no-deps

Install TFRA on Apple Silicon from Source

export TF_VERSION="2.6.0"  # Specify your Tensorflow version here, 2.8.0 is well tested.
export PY_VERSION="3.8"    # Specify your python version here, "3.9" is well tested.

# Building TFRA wheel
PY_VERSION=$PY_VERSION TF_VERSION=$TF_VERSION TF_NEED_CUDA="0" sh .github/workflows/make_wheel_macOS_arm64.sh

# Install the wheel
python -m pip install --no-deps ./artifacts/*.whl

Known Issues:

The Apple silicon version of TFRA doesn't support:

  • Data type float16
  • Synchronous training based on Horovod
  • save_to_file_system
  • load_from_file_system
  • warm_start_util

save_to_file_system and load_from_file_system are not supported because TFIO is not supported on Apple silicon devices. Horovod and warm_start_util are not supported because the natively supported tensorflow-macos does not support TensorFlow V1 networks.

These issues may be fixed in a future release.

Data Type Matrix for tfra.dynamic_embedding.Variable
Values \ Keys int64 int32 string
float CPU, GPU CPU, GPU CPU
half CPU, GPU - CPU
int32 CPU, GPU CPU CPU
int8 CPU, GPU - CPU
int64 CPU - CPU
double CPU, GPU CPU CPU
bool - - CPU
string CPU - -
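
For example, according to the matrix above, string keys are CPU-only, so a table with string keys should be pinned to CPU explicitly (an illustrative sketch; the name and dim are arbitrary):

import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# string keys / float values are only supported on CPU, so place the table there.
item_table = tfra.dynamic_embedding.get_variable(
    name="item_embeddings",
    key_dtype=tf.string,
    value_dtype=tf.float32,
    dim=8,
    devices=["/CPU:0"],
)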
To use GPUs with tfra.dynamic_embedding.Variable

tfra.dynamic_embedding.Variable ignores TensorFlow's device placement mechanism, so you must explicitly place it on GPUs via its devices argument.

import tensorflow as tf
import tensorflow_recommenders_addons as tfra

de = tfra.dynamic_embedding.get_variable("VariableOnGpu",
                                         devices=["/job:ps/task:0/GPU:0", ],
                                         # ...
                                         )

Usage restrictions on GPU

  • Only works on NVIDIA GPUs with CUDA compute capability 6.0 or higher.
  • To limit the size of the .whl file, dim currently only supports values less than or equal to 200; if you need a larger dim, please submit an issue.
  • Only the dynamic_embedding APIs and their related OPs support running on GPU.
  • Because GPU HashTables manage GPU memory independently, TensorFlow should be configured to allow GPU memory growth, as shown below:
sess_config.gpu_options.allow_growth = True
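
A fuller sketch of the memory-growth configuration: the TF1-style ConfigProto option matches the snippet above, and the TF2 eager equivalent uses the standard tf.config API (an assumption for users not running a Session; use one or the other, not both in one process):

import tensorflow as tf

# TF2 eager mode: let GPU memory grow instead of pre-allocating it all.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# TF1-style Session alternative (do not combine with the lines above in one process):
# sess_config = tf.compat.v1.ConfigProto()
# sess_config.gpu_options.allow_growth = True
# sess = tf.compat.v1.Session(config=sess_config)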

Inference

With TensorFlow Serving

Compatibility Matrix

TFRA TensorFlow Serving branch Compiler CUDA cuDNN Compute Capability
0.6.0 2.8.3 r2.8 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6
0.5.1 2.8.3 r2.8 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6
0.5.0 2.8.3 r2.8 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6
0.4.0 2.5.1 r2.5 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6
0.3.1 2.5.1 r2.5 GCC 7.3.1 11.2 8.1 6.0, 6.1, 7.0, 7.5, 8.0, 8.6
0.2.0 2.4.1 r2.4 GCC 7.3.1 11.0 8.0 6.0, 6.1, 7.0, 7.5, 8.0
0.2.0 1.15.2 r1.15 GCC 7.3.1 10.0 7.6 6.0, 6.1, 7.0, 7.5
0.1.0 2.4.1 r2.4 GCC 7.3.1 - - -

Serve TFRA-enabled models with custom ops in TensorFlow Serving:

## If enable GPU OPs
export SERVING_WITH_GPU=1 

## Specify the branch of TFRA
export TFRA_BRANCH="master" # The `master` and `r0.6` are available.

## Create the workspace; modify the directory as you prefer.
export TFRA_SERVING_WORKSPACE=~/tfra_serving_workspace/
mkdir -p $TFRA_SERVING_WORKSPACE && cd $TFRA_SERVING_WORKSPACE

## Clone the release branches of serving and TFRA according to `Compatibility Matrix`.
git clone -b r2.8 https://github.com/tensorflow/serving.git
git clone -b $TFRA_BRANCH https://github.com/tensorflow/recommenders-addons.git

## Run config shell script
cd $TFRA_SERVING_WORKSPACE/recommenders-addons/tools
bash config_tfserving.sh $TFRA_BRANCH $TFRA_SERVING_WORKSPACE/serving $SERVING_WITH_GPU

## Build serving with TFRA OPs.
cd $TFRA_SERVING_WORKSPACE/serving
./tools/run_in_docker.sh bazel build tensorflow_serving/model_servers:tensorflow_model_server

For more details, please refer to the shell script ./tools/config_tfserving.sh.

NOTICE

With Triton

When building the custom operations shared library, it is important to use the same version of TensorFlow as is being used in Triton. You can find the TensorFlow version in the Triton Release Notes. A simple way to ensure you are using the correct version of TensorFlow is to use the NGC TensorFlow container corresponding to the Triton container. For example, if you are using the 23.05 version of Triton, use the 23.05 version of the TensorFlow container. (The example below uses the 22.05 release.)

docker pull nvcr.io/nvidia/tritonserver:22.05-py3

export TFRA_BRANCH="master"
git clone -b $TFRA_BRANCH https://github.com/tensorflow/recommenders-addons.git
cd recommenders-addons

python configure.py
bazel build //tensorflow_recommenders_addons/dynamic_embedding/core:_cuckoo_hashtable_ops.so  ## Bazel 5.1.1 is well tested.
mkdir /tmp/so
# You can also use the .so file from the pip-installed package at "(PYTHONPATH)/site-packages/tensorflow_recommenders_addons/dynamic_embedding/core/_cuckoo_hashtable_ops.so"
cp bazel-bin/tensorflow_recommenders_addons/dynamic_embedding/core/_cuckoo_hashtable_ops.so /tmp/so

# The TFRA saved_model directory is "/models/model_repository"
docker run --net=host -v /models/model_repository:/models nvcr.io/nvidia/tritonserver:22.05-py3 bash -c \
  "LD_PRELOAD=/tmp/so/_cuckoo_hashtable_ops.so:${LD_PRELOAD} tritonserver --model-repository=/models/ --backend-config=tensorflow,version=2 --strict-model-config=false"

NOTICE

  • The LD_PRELOAD and --backend-config options above must be set because the default TensorFlow backend version in Triton is TF1.

Community

Acknowledgment

We are very grateful to the maintainers of tensorflow/addons for borrowing a lot of code from tensorflow/addons to build our workflow and documentation system. We also want to extend a thank you to the Google team members who have helped with CI setup and reviews!

License

Apache License 2.0
