Intel Homomorphic Encryption (HE) Acceleration Library

Intel® HE Acceleration Library is an open-source library which provides efficient implementations of integer arithmetic on Galois fields. Such arithmetic is prevalent in cryptography, particularly in homomorphic encryption (HE) schemes. Intel HE Acceleration Library targets integer arithmetic with word-sized primes, typically 30-60 bits. Intel HE Acceleration Library provides an API for 64-bit unsigned integers and targets Intel CPUs. For more details on Intel HE Acceleration Library, see our whitepaper. For tips on best performance, see Performance.

Introduction

Many cryptographic applications, particularly homomorphic encryption (HE), rely on integer polynomial arithmetic in a finite field. HE, which enables computation on encrypted data, typically uses polynomials of degree N, where N is a power of two roughly in the range N=[2^{10}, 2^{17}]. The coefficients of these polynomials lie in a finite field defined by a word-sized prime q of up to ~62 bits. More precisely, the polynomials live in the ring Z_q[X]/(X^N + 1). That is, when adding or multiplying two polynomials, each coefficient of the result is reduced by the prime modulus q. When multiplying two polynomials, the resulting polynomial, of degree up to 2N - 2, is additionally reduced by taking the remainder when dividing by X^N + 1.
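
As a concrete illustration of this reduction (not code from the library), the following schoolbook sketch multiplies two coefficient vectors in Z_q[X]/(X^N + 1): each coefficient product is reduced modulo q, and any term of degree N or higher wraps around with a sign flip because X^N = -1 in this ring. It assumes a compiler supporting the unsigned __int128 extension (e.g. gcc or clang) for the intermediate products.

    #include <cstdint>
    #include <vector>

    // Schoolbook negacyclic multiplication in Z_q[X]/(X^N + 1).
    // Illustration only: quadratic time, no NTT.
    std::vector<uint64_t> NegacyclicMultiply(const std::vector<uint64_t>& a,
                                             const std::vector<uint64_t>& b,
                                             uint64_t q) {
      const size_t n = a.size();  // n = N; a and b each hold N coefficients
      std::vector<uint64_t> result(n, 0);
      for (size_t i = 0; i < n; ++i) {
        for (size_t j = 0; j < n; ++j) {
          // Coefficient product reduced modulo q (128-bit intermediate).
          uint64_t prod = static_cast<uint64_t>(
              (static_cast<unsigned __int128>(a[i]) * b[j]) % q);
          size_t k = i + j;
          if (k < n) {
            result[k] = (result[k] + prod) % q;  // contributes to the X^{i+j} term
          } else {
            // X^{i+j} = -X^{i+j-N}, so the wrapped term is subtracted mod q.
            result[k - n] = (result[k - n] + q - prod) % q;
          }
        }
      }
      return result;
    }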

The primary bottleneck in many HE applications is polynomial-polynomial multiplication in Z_q[X]/(X^N + 1). For efficient implementation, Intel HE Acceleration Library implements the negacyclic number-theoretic transform (NTT). To multiply two polynomials p_1(x), p_2(x) using the NTT, we perform the forward NTT (FwdNTT) on the two input polynomials, multiply the results element-wise modulo q, and then perform the inverse NTT (InvNTT) on the product.
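
A minimal sketch of this flow using the public API is shown below. It assumes the intel::hexl::NTT class and intel::hexl::EltwiseMultMod as declared in hexl/hexl.hpp, with the NTT constructed from the degree and modulus and with input/output modulus factors of 1; see the public headers for the authoritative signatures.

    #include <cstdint>
    #include <vector>

    #include "hexl/hexl.hpp"

    int main() {
      const uint64_t N = 1024;    // polynomial degree, a power of two
      const uint64_t q = 12289;   // word-sized prime with q = 1 mod 2N
      std::vector<uint64_t> p1(N, 1);       // coefficients of p_1(x)
      std::vector<uint64_t> p2(N, 2);       // coefficients of p_2(x)
      std::vector<uint64_t> product(N, 0);

      intel::hexl::NTT ntt(N, q);

      // Forward NTT on both inputs (in place).
      ntt.ComputeForward(p1.data(), p1.data(), 1, 1);
      ntt.ComputeForward(p2.data(), p2.data(), 1, 1);

      // Element-wise modular multiplication in the NTT domain.
      intel::hexl::EltwiseMultMod(product.data(), p1.data(), p2.data(), N, q, 1);

      // Inverse NTT recovers the coefficients of p_1(x) * p_2(x) in Z_q[X]/(X^N + 1).
      ntt.ComputeInverse(product.data(), product.data(), 1, 1);
      return 0;
    }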

Intel HE Acceleration Library implements the following functions:

  • The forward and inverse negacyclic number-theoretic transform (NTT)
  • Element-wise vector-vector modular multiplication
  • Element-wise vector-scalar modular multiplication with optional addition
  • Element-wise modular multiplication

For each function, the library provides one or several Intel(R) AVX-512 implementations, as well as a less performant, more readable native C++ implementation. Intel HE Acceleration Library automatically chooses the best implementation for the CPU's Intel(R) AVX-512 feature set. In particular, when the modulus q is less than 2^{50}, the AVX512IFMA instruction set available on Intel Ice Lake server and Ice Lake client processors provides a more efficient implementation.

For additional functionality, see the public headers, located in include/hexl.

Building Intel HE Acceleration Library

Intel HE Acceleration Library can be built in several ways. It has been uploaded to the Microsoft vcpkg C++ package manager, which supports Linux, macOS, and Windows builds. See the vcpkg repository for instructions to build Intel HE Acceleration Library with vcpkg, e.g. run

vcpkg install hexl

There may be some delay before the latest release ports appear in vcpkg. Intel HE Acceleration Library provides port files to build the latest version with vcpkg. For a static build, run

vcpkg install hexl --overlay-ports=/path/to/hexl/port/hexl --head

For a dynamic build, use the custom triplet file and run

vcpkg install hexl:hexl-dynamic-build --overlay-ports=/path/to/hexl/port/hexl --head --overlay-triplets=/path/to/hexl/port/hexl

For a detailed explanation, see the vcpkg instructions for building a port using overlays and for using a custom triplet.

Intel HE Acceleration Library also supports a build using the CMake build system. See below for the instructions to build Intel HE Acceleration Library from source using CMake.

Dependencies

We have tested Intel HE Acceleration Library on the following operating systems:

  • Ubuntu 20.04
  • macOS 10.15 Catalina
  • Microsoft Windows 10

Intel HE Acceleration Library requires the following dependencies:

Dependency Version
CMake >= 3.13 *
Compiler gcc >= 7.0, clang++ >= 5.0, MSVC >= 2019

* For Windows 10, you must check whether the version of CMake you have can generate the necessary Visual Studio project files. For example, only from CMake 3.14 onwards can MSVC 2019 project files be generated.

Compile-time options

In addition to the standard CMake build options, Intel HE Acceleration Library supports several compile-time flags to configure the build. For convenience, they are listed below:

  • HEXL_BENCHMARK (ON / OFF, default ON): set to ON to enable the benchmark suite via Google Benchmark
  • HEXL_COVERAGE (ON / OFF, default OFF): set to ON to enable a coverage report of the unit-tests
  • HEXL_SHARED_LIB (ON / OFF, default OFF): set to ON to build a shared library
  • HEXL_DOCS (ON / OFF, default OFF): set to ON to build the documentation
  • HEXL_TESTING (ON / OFF, default ON): set to ON to build the unit-tests
  • HEXL_TREAT_WARNING_AS_ERROR (ON / OFF, default OFF): set to ON to treat all warnings as errors
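
For example, to configure a build with the benchmark suite disabled and the shared library enabled, the flags can be combined in a single configure step (see Compiling Intel HE Acceleration Library below for the full build instructions):

cmake -S . -B build -DHEXL_BENCHMARK=OFF -DHEXL_SHARED_LIB=ON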

Compiling Intel HE Acceleration Library

To compile Intel HE Acceleration Library from source code, first clone the repository and change directories to where the source has been cloned.

Linux and Mac

The instructions to build Intel HE Acceleration Library are common to Linux and macOS.

Then, to configure the build, call

cmake -S . -B build

adding the desired compile-time options with a -D flag. For instance, to use a non-standard installation directory, configure the build with

cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/path/to/install

Or, to build Intel HE Acceleration Library as a shared library, call

cmake -S . -B build -DHEXL_SHARED_LIB=ON

Then, to build Intel HE Acceleration Library, call

cmake --build build

This will build Intel HE Acceleration Library in the build/hexl/lib/ directory.

To install Intel HE Acceleration Library to the installation directory, run

cmake --install build

Windows

To compile Intel HE Acceleration Library on Windows using Visual Studio in Release mode, configure the build via

cmake -S . -B build -G "Visual Studio 16 2019" -DCMAKE_BUILD_TYPE=Release

adding the desired compile-time options with a -D flag (see Compile-time options). For instance, to use a non-standard installation directory, configure the build with

cmake -S . -B build -G "Visual Studio 16 2019" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/path/to/install

To specify the desired build configuration, pass either --config Debug or --config Release to the build and install steps. For instance, to build Intel HE Acceleration Library in Release mode, call

cmake --build build --config Release

This will build Intel HE Acceleration Library in the build/hexl/lib/ or build/hexl/Release/lib directory.

To install Intel HE Acceleration Library to the installation directory, run

cmake --build build --target install --config Release

Performance

For best performance, we recommend using Intel HE Acceleration Library on a Linux system with the clang++-12 compiler. We also recommend using a processor with Intel AVX512DQ support, with best performance on processors supporting Intel AVX512-IFMA52. To determine if your processor supports AVX512-IFMA52, simply look for -- Setting HEXL_HAS_AVX512IFMA printed during the configure step.

See the table below for the bound on the modulus q that gives best performance for each instruction set.

Instruction Set Bound on modulus q
AVX512-DQ q < 2^30
AVX512-IFMA52 q < 2^50

Some speedup is still expected for moduli q > 2^30 using the AVX512-DQ instruction set.

Testing Intel HE Acceleration Library

To run a set of unit tests via Googletest, configure and build Intel HE Acceleration Library with -DHEXL_TESTING=ON (see Compile-time options). Then, run

cmake --build build --target unittest

The unit-test executable itself is located at build/test/unit-test on Linux and Mac, and at build\test\Release\unit-test.exe or build\test\Debug\unit-test.exe on Windows.
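
The unit-test executable can also be run directly. Googletest's standard --gtest_filter flag restricts which tests run; the filter pattern below is illustrative:

build/test/unit-test --gtest_filter="NTT*"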

Benchmarking Intel HE Acceleration Library

To run a set of benchmarks via Google benchmark, configure and build Intel HE Acceleration Library with -DHEXL_BENCHMARK=ON (see Compile-time options). Then, run

cmake --build build --target bench

On Windows, run

cmake --build build --target bench --config Release

The benchmark executable itself is located at build/benchmark/bench_hexl on Linux and Mac, and at build\benchmark\Debug\bench_hexl.exe or build\benchmark\Release\bench_hexl.exe on Windows.
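
The benchmark executable can likewise be run directly. Google Benchmark's standard --benchmark_filter flag selects benchmarks by regular expression; the pattern below is illustrative:

build/benchmark/bench_hexl --benchmark_filter=NTT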

Using Intel HE Acceleration Library

The example folder has an example of using Intel HE Acceleration Library in a third-party project.

Debugging

For optimal performance, Intel HE Acceleration Library does not perform input validation. In many cases the time required for the validation would be longer than the execution of the function itself. To debug Intel HE Acceleration Library, configure and build Intel HE Acceleration Library with -DCMAKE_BUILD_TYPE=Debug (see Compile-time options). This will generate a debug version of the library, e.g. libhexl_debug.a, that can be used to debug the execution. In Debug mode, Intel HE Acceleration Library will also link against Address Sanitizer.

Note, enabling CMAKE_BUILD_TYPE=Debug will result in a significant runtime overhead.

To enable verbose logging for the benchmarks or unit-tests in a Debug build, add the log level as a command-line argument, e.g. build/benchmark/bench_hexl --v=9. See easyloggingpp's documentation for more details.

Threading

Intel HE Acceleration Library is single-threaded and thread-safe.

Community Adoption

Intel HE Acceleration Library has been integrated into the following homomorphic encryption libraries:

See also the Intel Homomorphic Encryption Toolkit for example use cases using Intel HE Acceleration Library.

Please let us know if you are aware of any other uses of Intel HE Acceleration Library.

Documentation

Intel HE Acceleration Library supports documentation via Doxygen. See https://intel.github.io/hexl for the latest Doxygen documentation.

To build documentation, first install doxygen and graphviz, e.g.

sudo apt-get install doxygen graphviz

Then, configure Intel HE Acceleration Library with -DHEXL_DOCS=ON (see Compile-time options) and run

cmake --build build --target docs

To view the generated Doxygen documentation, open the generated docs/doxygen/html/index.html file in a web browser.

Contributing

Intel HE Acceleration Library welcomes external contributions. For more information on contributing, see CONTRIBUTING.md.

We encourage feedback and suggestions via GitHub Issues as well as discussion via GitHub Discussions.

Repository layout

Public headers reside in the hexl/include folder. Private headers, e.g. those containing Intel(R) AVX-512 code, should not be put in this folder.

Citing Intel HE Acceleration Library

To cite Intel HE Acceleration Library, please use the following BibTeX entry.

Version 1.2

    @misc{IntelHEXL,
        author={Boemer, Fabian and Kim, Sejun and Seifu, Gelila and de Souza, Fillipe DM and Gopal, Vinodh and others},
        title = {{I}ntel {HEXL} (release 1.2)},
        howpublished = {\url{https://github.com/intel/hexl}},
        month = sep,
        year = 2021,
        key = {Intel HEXL}
    }

Version 1.1

    @misc{IntelHEXL,
        author={Boemer, Fabian and Kim, Sejun and Seifu, Gelila and de Souza, Fillipe DM and Gopal, Vinodh and others},
        title = {{I}ntel {HEXL} (release 1.1)},
        howpublished = {\url{https://github.com/intel/hexl}},
        month = may,
        year = 2021,
        key = {Intel HEXL}
    }

Version 1.0

    @misc{IntelHEXL,
        author={Boemer, Fabian and Kim, Sejun and Seifu, Gelila and de Souza, Fillipe DM and Gopal, Vinodh and others},
        title = {{I}ntel {HEXL} (release 1.0)},
        howpublished = {\url{https://github.com/intel/hexl}},
        month = apr,
        year = 2021,
        key = {Intel HEXL}
    }

Contributors

The Intel contributors to this project, sorted by last name, are

In addition to the Intel contributors listed, we are also grateful for contributions to this project that are not reflected in the Git history:
