• This repository has been archived on 04/Jan/2023
• Stars: 163
• Rank: 222,972 (Top 5%)
• Language: C++
• License: Apache License 2.0
• Created over 4 years ago
• Updated over 1 year ago


nGraph-HE: Deep learning with Homomorphic Encryption (HE) through Intel nGraph

DISCONTINUATION OF PROJECT

This project will no longer be maintained by Intel. Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project. Intel no longer accepts patches to this project.

HE Transformer for nGraph

The Intel® HE transformer for nGraph™ is a Homomorphic Encryption (HE) backend to the Intel® nGraph Compiler, Intel's graph compiler for Artificial Neural Networks.

Homomorphic encryption is a form of encryption that allows computation on encrypted data, and is an attractive remedy to increasing concerns about data privacy in the field of machine learning. For more information, see our original paper. Our updated paper showcases many of the recent advances in he-transformer.
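The core property is that arithmetic on ciphertexts mirrors arithmetic on the underlying plaintexts. A minimal sketch of this idea, using insecure "textbook" RSA (which happens to be multiplicatively homomorphic) rather than the CKKS scheme he-transformer actually uses:

```python
# Toy illustration of the homomorphic property: with unpadded "textbook"
# RSA, Enc(a) * Enc(b) decrypts to a * b. This is insecure and is NOT the
# scheme used by he-transformer (which uses CKKS via SEAL); it only shows
# what "computing on encrypted data" means.

n = 3233   # modulus (p = 61, q = 53)
e = 17     # public exponent
d = 2753   # private exponent: 17 * 2753 = 1 (mod 3120)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
product_cipher = (encrypt(a) * encrypt(b)) % n  # multiply ciphertexts only
assert decrypt(product_cipher) == a * b         # yet the product decrypts correctly
```

A party holding only the public key can compute the encrypted product without ever seeing a or b; schemes like CKKS extend this to both addition and multiplication on vectors of real numbers.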

This project is meant as a proof-of-concept to demonstrate the feasibility of HE on local machines. The goal is to measure performance of various HE schemes for deep learning. This is not intended to be a production-ready product, but rather a research tool.

Currently, we support the CKKS encryption scheme, implemented by the Simple Encrypted Arithmetic Library (SEAL) from Microsoft Research.
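CKKS is distinctive in that it operates on approximate real numbers rather than exact integers. A simplified sketch of its fixed-point encoding idea (our own simplification, not SEAL's implementation): values are scaled by a large factor and rounded to integers before encryption, and multiplying two encodings squares the scale, which is why CKKS needs a rescaling operation after each multiplication.

```python
# Simplified sketch of CKKS-style fixed-point encoding. Real CKKS encodes
# whole vectors into polynomial plaintexts; only the scaling idea is shown.

SCALE = 2 ** 20

def encode(x: float) -> int:
    return round(x * SCALE)

def decode(v: int, scale: int = SCALE) -> float:
    return v / scale

a, b = encode(1.5), encode(2.25)
print(decode(a + b))               # addition preserves the scale: 3.75
print(decode(a * b, SCALE ** 2))   # multiplication squares it: 3.375
```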

To help compute non-polynomial activations, we additionally integrate with the ABY multi-party computation library. See also the NDSS 2015 paper introducing ABY. For more details about our integration with ABY, please refer to our ARES 2020 paper.
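HE schemes like CKKS can only evaluate polynomials, so non-polynomial functions such as ReLU are delegated to multi-party computation. A minimal sketch of additive secret sharing, the basic building block behind MPC (ABY's actual protocols, with arithmetic, boolean, and Yao sharing plus conversions between them, are far more involved):

```python
# Minimal sketch of additive secret sharing: a secret is split into two
# random-looking shares, and parties compute on shares without learning
# the underlying values.
import random

P = 2 ** 61 - 1  # public prime modulus

def share(secret: int) -> tuple[int, int]:
    """Split a secret into two shares that sum to it mod P."""
    s0 = random.randrange(P)
    return s0, (secret - s0) % P

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % P

# Two parties each hold one share of x and y; neither sees x or y.
x0, x1 = share(30)
y0, y1 = share(12)

# Addition needs no communication: each party adds its shares locally.
z0, z1 = (x0 + y0) % P, (x1 + y1) % P
assert reconstruct(z0, z1) == 42
```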

We also integrate with the Intel® nGraph™ Compiler and runtime engine for TensorFlow to allow users to run inference on trained neural networks through TensorFlow.

Examples

The examples folder contains a deep learning example which depends on the Intel® nGraph™ Compiler and runtime engine for TensorFlow.

Building HE Transformer

Dependencies

  • Operating system: Ubuntu 16.04, Ubuntu 18.04.
  • CMake >= 3.12
  • Compiler: g++ >= 6.0 or clang >= 5.0 (g++ >= 8.4 when building with ABY)
  • OpenMP is strongly suggested, though not strictly necessary; you may experience slow runtimes without it
  • python3 and pip3
  • virtualenv v16.1.0
  • bazel v0.25.2

For a full list of dependencies, see the docker containers, which build he-transformer on a reference OS.

The remaining dependencies (such as nGraph and SEAL) are downloaded and built automatically during the build.

To install bazel

    wget https://github.com/bazelbuild/bazel/releases/download/0.25.2/bazel-0.25.2-installer-linux-x86_64.sh
    bash bazel-0.25.2-installer-linux-x86_64.sh --user

Add the bin path to your ~/.bashrc and source it so bazel is on your PATH:

 export PATH=$PATH:~/bin
 source ~/.bashrc

1. Build HE-Transformer

Before building, make sure you deactivate any active virtual environments (i.e. run deactivate)

git clone https://github.com/IntelAI/he-transformer.git
cd he-transformer
export HE_TRANSFORMER=$(pwd)
mkdir build
cd $HE_TRANSFORMER/build
cmake .. -DCMAKE_CXX_COMPILER=clang++-6.0

Note, you may need sudo permissions to install he_seal_backend to the default location. To set a custom installation prefix, add the -DCMAKE_INSTALL_PREFIX=~/my_install_prefix flag to the above cmake command.

See 1a and 1b for additional configuration options. To install, run the command below. Note: this may take several hours; to speed up compilation with multiple threads, call make -j install instead.

make install

1a. Multi-party computation (MPC) with garbled circuits (GC)

To enable an integration with an experimental multi-party computation backend using garbled circuits via ABY, call

cmake .. -DNGRAPH_HE_ABY_ENABLE=ON

See MP2ML for details on the implementation.

We would like to thank the ENCRYPTO group from TU Darmstadt, particularly Hossein Yalame and Daniel Demmler, for helping with the ABY implementation.

Note: this feature is experimental and may suffer from performance and memory issues. To use this feature, build the python bindings for the client (section 1c), then see section 3, run python examples.

1b. To build documentation

First install the additional required dependencies:

sudo apt-get install doxygen graphviz

Then add the following CMake flag

cd $HE_TRANSFORMER/build
cmake .. -DNGRAPH_HE_DOC_BUILD_ENABLE=ON

and call

make docs

to create doxygen documentation in $HE_TRANSFORMER/build/doc/doxygen.

1c. Python bindings for client

To build a client-server model with python bindings (recommended for running neural networks through TensorFlow):

cd $HE_TRANSFORMER/build
source external/venv-tf-py3/bin/activate
make install python_client

This will create python/dist/pyhe_client-*.whl. Install it using

pip install python/dist/pyhe_client-*.whl

To check the installation worked correctly, run

python3 -c "import pyhe_client"

This should run without errors.

2. Run C++ unit-tests

cd $HE_TRANSFORMER/build
# To run single HE_SEAL unit-test
./test/unit-test --gtest_filter="HE_SEAL.add_2_3_cipher_plain_real_unpacked_unpacked"
# To run all C++ unit-tests
./test/unit-test

3. Run python examples

See examples/README.md for examples of running he-transformer for deep learning inference on encrypted data.

Code formatting

Please run maint/apply-code-format.sh before submitting a pull request.

Publications describing the HE Transformer Implementation

  • Fabian Boemer, Yixing Lao, Rosario Cammarota, and Casimir Wierzynski. nGraph-HE: a graph compiler for deep learning on homomorphically encrypted data. In ACM International Conference on Computing Frontiers 2019. https://dl.acm.org/doi/10.1145/3310273.3323047
  • Fabian Boemer, Anamaria Costache, Rosario Cammarota, and Casimir Wierzynski. 2019. nGraph-HE2: A High-Throughput Framework for Neural Network Inference on Encrypted Data. In WAHC’19. https://dl.acm.org/doi/pdf/10.1145/3338469.3358944
  • Fabian Boemer, Rosario Cammarota, Daniel Demmler, Thomas Schneider, and Hossein Yalame. 2020. MP2ML: A Mixed-Protocol Machine Learning Framework for Private Inference. In ARES’20. https://doi.org/10.1145/3407023.3407045
