  • Stars: 954
  • Rank: 46,035 (Top 1.0%)
  • Language: Python
  • License: Apache License 2.0
  • Created: over 5 years ago
  • Updated: about 1 year ago

Repository Details

Intel(R) Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application

Intel(R) Extension for Scikit-learn*

Installation   |   Documentation   |   Examples   |   Support   |   FAQ   


With Intel(R) Extension for Scikit-learn you can accelerate your scikit-learn applications while keeping full conformance with all scikit-learn APIs and algorithms. It is a free software AI accelerator that delivers 10-100X acceleration across a variety of applications, and you do not even need to change your existing code!

How it works

Intel(R) Extension for Scikit-learn offers you a way to accelerate existing scikit-learn code. The acceleration is achieved through patching: replacing the stock scikit-learn algorithms with their optimized versions provided by the extension.

One way to patch scikit-learn is to modify your code: first, import the additional Python package (sklearnex) and enable the optimizations via sklearnex.patch_sklearn(); then import the scikit-learn estimators as usual:

  • Enable Intel CPU optimizations

    import numpy as np
    from sklearnex import patch_sklearn
    patch_sklearn()
    
    from sklearn.cluster import DBSCAN
    
    X = np.array([[1., 2.], [2., 2.], [2., 3.],
                [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
    clustering = DBSCAN(eps=3, min_samples=2).fit(X)
  • Enable Intel GPU optimizations

    import numpy as np
    import dpctl
    from sklearnex import patch_sklearn, config_context
    patch_sklearn()
    
    from sklearn.cluster import DBSCAN
    
    X = np.array([[1., 2.], [2., 2.], [2., 3.],
                [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
    with config_context(target_offload="gpu:0"):
        clustering = DBSCAN(eps=3, min_samples=2).fit(X)

👀 Read about other ways to patch scikit-learn and other methods for offloading to GPU devices. Check out available notebooks for more examples.
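
For example, the patch can also be applied from the command line without modifying the application, or limited to individual estimators and reverted later. A minimal sketch, assuming the python -m sklearnex entry point and the optional estimator-list argument of patch_sklearn() described in the documentation:

    # Run an unmodified script with the patch applied for its whole lifetime:
    #     python -m sklearnex my_application.py

    # Or, inside Python, patch only selected estimators and undo the patch later:
    from sklearnex import patch_sklearn, unpatch_sklearn

    patch_sklearn(["DBSCAN"])   # patch a single estimator
    # ... run scikit-learn code here ...
    unpatch_sklearn()           # restore stock scikit-learn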

This software acceleration is achieved through the use of vector instructions, IA hardware-specific memory optimizations, threading, and optimizations for all upcoming Intel platforms at launch time.

Supported Algorithms

❗ The patching only affects selected algorithms and their parameters.

You may still use algorithms and parameters that are not supported by Intel(R) Extension for Scikit-learn in your code, and you will not get an error if you do. When you use algorithms or parameters not supported by the extension, the package falls back to the original stock version of scikit-learn.
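
For illustration, a minimal sketch of this behavior; which parameter combinations are accelerated varies by release, so the metric below is only an assumed example of an unsupported case:

    import numpy as np
    from sklearnex import patch_sklearn
    patch_sklearn()

    from sklearn.cluster import DBSCAN

    X = np.array([[1., 2.], [2., 2.], [2., 3.],
                  [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)

    # If this parameter combination is covered by the extension, the optimized
    # version runs; otherwise the stock scikit-learn implementation is used,
    # silently and without errors.
    clustering = DBSCAN(eps=3, min_samples=2, metric="manhattan").fit(X)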

🚀 Acceleration

Configurations:

  • HW: c5.24xlarge AWS EC2 Instance using an Intel Xeon Platinum 8275CL with 2 sockets and 24 cores per socket
  • SW: scikit-learn version 0.24.2, scikit-learn-intelex version 2021.2.3, Python 3.8

Benchmarks code

🛠 Installation

System Requirements   |    Install via pip or conda   |   Build from sources

Intel(R) Extension for Scikit-learn is available on the Python Package Index and on Anaconda Cloud in the conda-forge and Intel channels. You can also build the extension from sources.

The extension is also available as a part of Intel® AI Analytics Toolkit (AI Kit). If you already have AI Kit installed, you do not need to install the extension.

Installation via the pip package manager is recommended by default:

pip install scikit-learn-intelex
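
Alternatively, the package can be installed with conda from the channels mentioned above; for example, assuming the conda-forge channel:

conda install scikit-learn-intelex -c conda-forge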

🔗 Important Links

👀 Follow us on Medium

We publish blogs on Medium, so follow us to learn tips and tricks for more efficient data analysis with the help of Intel(R) Extension for Scikit-learn. Here are our latest blogs:

❔ FAQ

[See answers to frequently asked questions]

❓ Are all algorithms affected by patching?

No. The patching only affects selected algorithms and their parameters.

❓ What happens if I use parameters not supported by the extension?

When unsupported parameters are used, the package falls back to the original stock version of scikit-learn. You will not get an error.

❓ What happens if I run algorithms not supported by the extension?

If you use algorithms for which no optimizations are available, their original versions from stock scikit-learn are used.

❓ Can I see which implementation of the algorithm is currently used?

Yes. To find out which implementation of the algorithm is currently used (Intel(R) Extension for Scikit-learn or original Scikit-learn), use the verbose mode.
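
A minimal sketch, assuming the logging-based verbose mode described in the documentation (the logger name and the exact messages may differ between releases):

    import logging
    import numpy as np

    # Verbose mode: raise the sklearnex logger to INFO before running estimators.
    logging.basicConfig(level=logging.INFO)
    logging.getLogger("sklearnex").setLevel(logging.INFO)

    from sklearnex import patch_sklearn
    patch_sklearn()

    from sklearn.cluster import DBSCAN

    X = np.array([[1., 2.], [2., 2.], [2., 3.],
                  [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)

    # The log output reports whether the accelerated or the stock
    # implementation handled the call.
    DBSCAN(eps=3, min_samples=2).fit(X)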

❓ How much faster is scikit-learn after the patching?

We compare the performance of Intel(R) Extension for Scikit-Learn to other frameworks in Machine Learning Benchmarks. Read our blogs on Medium if you are interested in the detailed comparison.

❓ What if the patching does not cover my scenario?

If the patching does not cover your scenario, submit an issue on GitHub with a description of what you would like to have.

💬 Support

Report issues, ask questions, and provide suggestions using:

You may reach out to project maintainers privately at [email protected]

oneAPI

Intel(R) Extension for Scikit-learn is part of oneAPI and Intel® AI Analytics Toolkit (AI Kit).

daal4py and oneDAL

The acceleration is achieved through the use of the Intel(R) oneAPI Data Analytics Library (oneDAL). Learn more:


⚠️ Intel(R) Extension for Scikit-learn contains the scikit-learn patching functionality that was originally available in the daal4py package. All future updates to the patches will be available only in Intel(R) Extension for Scikit-learn. We recommend using the scikit-learn-intelex package instead of daal4py. You can learn more about daal4py in the daal4py documentation.


More Repositories

1. hyperscan (C++, 4,478 stars): High-performance regular expression matching library
2. acat (C#, 3,191 stars): Assistive Context-Aware Toolkit (ACAT)
3. haxm (C, 3,029 stars): Intel® Hardware Accelerated Execution Manager (Intel® HAXM)
4. appframework (CSS, 2,435 stars): The definitive HTML5 mobile javascript framework
5. pcm (C++, 2,083 stars): Intel® Performance Counter Monitor (Intel® PCM)
6. neural-compressor (Python, 1,939 stars): SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
7. intel-extension-for-transformers (Python, 1,910 stars): Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms
8. intel-extension-for-pytorch (Python, 1,203 stars): A Python package for extending the official PyTorch that can easily obtain performance on Intel platforms
9. linux-sgx (C++, 1,180 stars): Intel SGX for Linux*
10. llvm (918 stars): Intel staging area for llvm.org contribution. Home for Intel LLVM-based projects.
11. nemu (C, 915 stars): ARCHIVED: Modern Hypervisor for the Cloud. See https://github.com/cloud-hypervisor/cloud-hypervisor instead
12. compute-runtime (C++, 912 stars): Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
13. caffe (C++, 845 stars): This fork of BVLC/Caffe is dedicated to improving performance of this deep learning framework when running on CPU, in particular Intel® Xeon processors.
14. isa-l (C, 816 stars): Intelligent Storage Acceleration Library
15. media-driver (C, 783 stars)
16. cve-bin-tool (Python, 721 stars): The CVE Binary Tool helps you determine if your system includes known vulnerabilities. You can scan binaries for over 200 common, vulnerable components (openssl, libpng, libxml2, expat and others), or if you know the components used, you can get a list of known vulnerabilities associated with an SBOM or a list of components and versions.
17. intel-cmt-cat (C, 630 stars): User space software for Intel(R) Resource Director Technology
18. fastuidraw (C++, 603 stars)
19. optimization-manual (Assembly, 602 stars): Contains the source code examples described in the "Intel® 64 and IA-32 Architectures Optimization Reference Manual"
20. libipt (C, 594 stars): An Intel(R) Processor Trace decoder library
21. libxcam (C++, 577 stars): libXCam is a project for extended camera (not limited to camera) features with a focus on image quality improvement and video analysis. Many features are supported in image pre-processing, image post-processing, and smart analysis. This library makes GPU/CPU/ISP work together to improve image quality. OpenCL is used to improve performance on different platforms.
22. clDNN (C++, 573 stars): Compute Library for Deep Neural Networks (clDNN)
23. libva (C, 558 stars): Libva is an implementation of VA-API (Video Acceleration API)
24. intel-graphics-compiler (C++, 503 stars)
25. wds (C++, 496 stars): Wireless Display Software for Linux OS (WDS)
26. thermal_daemon (C++, 485 stars): Thermal daemon for IA
27. x86-simd-sort (C++, 485 stars): C++ header file library for high performance SIMD based sorting algorithms for primitive datatypes
28. Intel-Linux-Processor-Microcode-Data-Files (466 stars)
29. gvt-linux (C, 463 stars)
30. kernel-fuzzer-for-xen-project (C, 441 stars): Kernel Fuzzer for Xen Project (KF/x) - Hypervisor-based fuzzing using Xen VM forking, VMI & AFL
31. tinycbor (C, 432 stars): Concise Binary Object Representation (CBOR) Library
32. openfl (Python, 427 stars): An open framework for Federated Learning
33. cc-oci-runtime (C, 415 stars): OCI (Open Containers Initiative) compatible runtime for Intel® Architecture
34. tinycrypt (C, 373 stars): A library of cryptographic algorithms with a focus on small, simple implementation
35. compile-time-init-build (C++, 372 stars): C++ library for composing modular firmware at compile-time
36. ARM_NEON_2_x86_SSE (C, 369 stars): A platform independent header allowing compilation of any C/C++ code containing ARM NEON intrinsic functions for x86 target systems, using SIMD up to SSE4 intrinsic functions
37. yarpgen (C++, 357 stars): Yet Another Random Program Generator
38. intel-device-plugins-for-kubernetes (Go, 356 stars): Collection of Intel device plugins for Kubernetes
39. QAT_Engine (C, 356 stars): Intel QuickAssist Technology (QAT) OpenSSL Engine (an OpenSSL Plug-In Engine) which provides cryptographic acceleration for both hardware and optimized software on Intel QuickAssist Technology enabled Intel platforms. https://developer.intel.com/quickassist
40. linux-sgx-driver (C, 334 stars): Intel SGX Linux* Driver
41. safestringlib (C, 328 stars)
42. xess (C, 313 stars)
43. idlf (C++, 311 stars): Intel® Deep Learning Framework
44. ad-rss-lib (C++, 298 stars): Library implementing the Responsibility Sensitive Safety model (RSS) for Autonomous Vehicles
45. intel-vaapi-driver (C, 289 stars): VA-API user mode driver for the Intel GEN Graphics family
46. ipp-crypto (C, 269 stars)
47. rohd (Dart, 256 stars): The Rapid Open Hardware Development (ROHD) framework is a framework for describing and verifying hardware in the Dart programming language. ROHD enables you to build and traverse a graph of connectivity between module objects using unrestricted software.
48. opencl-intercept-layer (C++, 255 stars): Intercept Layer for Debugging and Analyzing OpenCL Applications
49. FSP (C, 244 stars): Intel(R) Firmware Support Package (FSP)
50. dffml (Python, 241 stars): The easiest way to use Machine Learning. Mix and match underlying ML libraries and data set sources. Generate new datasets or modify existing ones with ease.
51. intel-ipsec-mb (C, 238 stars): Intel(R) Multi-Buffer Crypto for IPSec
52. userspace-cni-network-plugin (Go, 232 stars)
53. isa-l_crypto (Assembly, 232 stars)
54. confidential-computing-zoo (CMake, 229 stars): Confidential Computing Zoo provides confidential computing solutions based on Intel SGX, TDX, HEXL, and other technologies.
55. intel-extension-for-tensorflow (C++, 226 stars): Intel® Extension for TensorFlow*
56. bmap-tools (Python, 220 stars): BMAP Tools
57. ozone-wayland (C++, 214 stars): Wayland implementation for Chromium Ozone classes
58. intel-qs (C++, 202 stars): High-performance simulator of quantum circuits
59. SGXDataCenterAttestationPrimitives (C++, 202 stars)
60. intel-sgx-ssl (C, 197 stars): Intel® Software Guard Extensions SSL
61. msr-tools (C, 195 stars)
62. depth-camera-web-demo (JavaScript, 194 stars)
63. CPU-Manager-for-Kubernetes (Python, 190 stars): Kubernetes Core Manager for NFV workloads
64. rmd (Go, 189 stars)
65. asynch_mode_nginx (C, 186 stars)
66. hexl (C++, 181 stars): Intel® Homomorphic Encryption Acceleration Library accelerates modular arithmetic operations used in homomorphic encryption
67. ros_object_analytics (C++, 177 stars)
68. zephyr.js (C, 176 stars): JavaScript* Runtime for Zephyr* OS
69. generic-sensor-demos (HTML, 175 stars)
70. ipmctl (C, 172 stars)
71. sgx-ra-sample (C++, 171 stars)
72. lmbench (C, 171 stars)
73. cri-resource-manager (Go, 166 stars): Kubernetes Container Runtime Interface proxy service with hardware resource aware workload placement policies
74. virtual-storage-manager (Python, 164 stars)
75. PerfSpect (Python, 164 stars): System performance characterization tool based on Linux perf
76. systemc-compiler (C++, 155 stars): This tool translates synthesizable SystemC code to synthesizable SystemVerilog.
77. webml-polyfill (Python, 153 stars): Deprecated; the Web Neural Network Polyfill project has moved to https://github.com/webmachinelearning/webnn-polyfill
78. pmem-csi (Go, 151 stars): Persistent Memory Container Storage Interface Driver
79. libyami (C++, 148 stars): Yet Another Media Infrastructure. It is the core part of a hardware-accelerated media codec, improving your video experience on Linux-like platforms.
80. ros_openvino_toolkit (C++, 147 stars)
81. rib (JavaScript, 147 stars): Rapid Interface Builder (RIB) is a browser-based design tool for quickly prototyping and creating the user interface for web applications. Lay out your UI by dropping widgets onto a canvas. Run the UI in an interactive "Preview mode". Export the generated HTML and JavaScript. It's that simple!
82. ideep (C++, 145 stars): Intel® Optimization for Chainer*, a Chainer module providing a numpy-like API and DNN acceleration using MKL-DNN
83. libva-utils (C, 144 stars): A collection of tests for VA-API (Video Acceleration API)
84. gmmlib (C++, 141 stars)
85. platform-aware-scheduling (Go, 140 stars): Enabling Kubernetes to make pod placement decisions with platform intelligence
86. numatop (C, 139 stars): NumaTOP is an observation tool for runtime memory locality characterization and analysis of processes and threads running on a NUMA system
87. ros2_grasp_library (C++, 138 stars)
88. XBB (C++, 133 stars)
89. tdx-tools (C, 131 stars): Cloud Stack and Tools for Intel TDX (Trust Domain Extension)
90. ros2_intel_realsense (C++, 131 stars): This project is deprecated and no longer maintained. Please visit https://github.com/IntelRealSense/realsense-ros for the ROS2 wrapper.
91. linux-intel-lts (C, 131 stars)
92. CeTune (Python, 130 stars)
93. cm-compiler (C++, 130 stars)
94. pti-gpu (C++, 129 stars): Profiling Tools Interfaces for GPU (PTI for GPU) is a set of Getting Started documentation and a tools library to easily start performance analysis on Intel(R) Processor Graphics
95. fMBT (Python, 129 stars): Free Model Based tool
96. zlib (C, 128 stars)
97. ros_intel_movidius_ncs (C++, 126 stars)
98. mpi-benchmarks (C, 125 stars)
99. mOS (C, 124 stars)
100. sgx-software-enable (C, 122 stars)