Intel Neuromorphic DNS Challenge


[Figure: Intel N-DNS Challenge solution structure (solution_structure_2023-01-24)]

The Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge) is a contest to help neuromorphic and machine learning researchers create high-quality and low-power real-time audio denoising systems. The Intel N-DNS Challenge is inspired by the Microsoft DNS Challenge, and it re-uses the Microsoft DNS Challenge noisy and clean speech datasets. This repository contains the challenge information, code, and documentation to get started with the Intel N-DNS Challenge.

A solution to the Intel N-DNS Challenge consists of an audio encoder, a neuromorphic denoiser, and an audio decoder. Noisy speech is input to the encoder, which converts the audio waveform into a form suitable for processing in the neuromorphic denoiser. The neuromorphic denoiser takes this input and removes noise from the signal. Finally, the decoder converts the output of the neuromorphic denoiser into a clean output audio waveform. The Intel N-DNS Challenge consists of two tracks:

Track 1 (Algorithmic) aims to encourage algorithmic innovation that leads to a higher denoising performance while being efficient when implemented as a neuromorphic system. The encoder, decoder, and neuromorphic denoiser all run on CPU.

Track 2 (Loihi 2) aims to realize the algorithmic innovation in Track 1 on actual neuromorphic hardware and demonstrate a real-time denoising system. The encoder and decoder run on CPU and the neuromorphic denoiser runs on Loihi 2.
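
For illustration, a minimal sketch of this encoder-denoiser-decoder dataflow (the function and argument names here are hypothetical, not part of the challenge API):

    import numpy as np

    def denoise(noisy_waveform: np.ndarray, encoder, denoiser, decoder) -> np.ndarray:
        """Run one noisy waveform through an N-DNS solution pipeline (hypothetical API)."""
        features = encoder(noisy_waveform)    # waveform -> representation suitable for the denoiser
        clean_features = denoiser(features)   # neuromorphic denoiser removes noise from the signal
        return decoder(clean_features)        # representation -> clean output waveform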

Solutions submitted to the Intel N-DNS challenge are evaluated in terms of an audio quality metric (denoising task performance) and computational resource usage metrics, which measure the efficiency of the solution as a system; submissions also include source code and a short write-up. Solutions will be holistically considered (metrics, write-up, innovativeness, commercial relevance, etc.) by an Intel committee for a monetary prize (details below).

Please see our paper in Neuromorphic Computing and Engineering (also on arXiv) for a more detailed overview of the challenge.

How to participate?

Follow the registration instructions below to participate. An overview of the challenge timeline is shown below.

gantt
    title Neuromorphic DNS Challenge
    dateFormat  MM-DD-YYYY
    axisFormat  %m-%d-%Y

    Challenge start :milestone, s0, 03-16-2023, 0d

    section Track 1
    Track 1 solution development :active, t0, after s0, 155d
    Test Set 1 release           :milestone, after t0, 0d
    Track 1 submission           :t2, after t0, 30d
    Model freeze                 :crit, after t0, 30d
    Track 1 evaluation           :t3, after t2, 15d
    Track 1 winner announcement  :crit, milestone, after t3, 0d

    section Track 2
    Track 2 solution development :tt0, after t0, 182d
    Test Set 2 release           :milestone, after tt0, 0d
    Track 2 submission           :tt2, after tt0, 30d
    Model freeze                 :crit, after tt0, 30d
    Track 2 evaluation           :tt3, after tt2, 15d
    Challenge winner announcement :crit, milestone, after tt3, 0d

Important dates

| Phase | Date |
|---|---|
| Challenge start | Mar 16, 2023 |
| Test set 1 release | On or about Aug 18, 2023 |
| Track 1 submission deadline | On or about Sep 18, 2023 |
| Track 1 winner announcement | Oct 2, 2023 |
| Test set 2 release | On or about Jan 28, 2024 |
| Track 2 submission deadline | On or about Feb 28, 2024 |
| Track 2 winner announcement | Mar 14, 2024 |

Challenge dates are subject to change. Registered participants will be notified of any changes to the dates, or when "on or about" dates are finalized.

1. Registration

  1. Create your challenge GitHub repo (public or private) and grant access to the lava-nc-user user.
  2. Register for the challenge here.
  3. You will receive a registration confirmation.

Once registered, you will receive updates about the different phases of the challenge.

Participation in Track 2 requires Loihi system cloud access, which in turn requires an Intel Neuromorphic Research Collaboration agreement. Please see Join the INRC or send an email to [email protected]. This process can take a while, so it is recommended to initiate it as early as possible if you want to participate in Track 2.

2. Test Set 1 Release

The test set for Track 1 has been released, and we are currently in the Track 1 model freeze phase. The details on test set 1 can be found here.

  • Participants shall not change their model during this phase.
  • Participants shall evaluate their model on test set 1, measure all the necessary metrics on an Intel Core i5 quad-core machine clocked at 2.4 GHz or weaker, and submit their metrics to the test set metricsboard, along with a solution writeup.

Important: At least one validation metricsboard entry must have been submitted before the Track 1 model freeze phase. Metricsboard entries will be randomly verified.

3. Track 1 Winner

A committee of Intel employees will evaluate the Track 1 solutions to decide the winners, making a holistic evaluation including audio quality, computational resource usage, solution write-up quality, innovativeness, and commercial relevance.

Important: Intel reserves the right to consider and evaluate submissions at its discretion. Implementation and management of this challenge and associated prizes are subject to change at any time without notice to contest participants or winners and are at the complete discretion of Intel.

4. Test set 2 Release

Once test set 2 for Track 2 is released, we will enter the Track 2 model freeze phase. Details on test set 2 will be provided later.

  • Participants shall not change their model during this phase.
  • Participants shall evaluate their model on test set 2, measure all the necessary metrics on Loihi, and submit their metrics along with a solution writeup.

Important: At least one valid metricsboard entry must have been submitted before the Track 2 model freeze phase. Metricsboard entries will be randomly verified.

5. Track 2 Winner (Challenge Winner)

A committee of Intel employees will evaluate the Track 2 solutions to decide the winners, making a holistic evaluation including audio quality, computational resource usage, solution write-up quality, innovativeness, and commercial relevance.

Important: Intel reserves the right to consider and evaluate submissions at its discretion. Implementation and management of this challenge and associated prizes are subject to change at any time without notice to contest participants or winners and are at the complete discretion of Intel.

Prize

There will be two prizes awarded:

  • Track 1 winner: fifteen thousand dollars (US $15,000.00) or the equivalent in grant money to the winner of Track 1

and six months later,

  • Track 2 winner: forty thousand dollars (US $40,000.00) or the equivalent in grant money to the winner of Track 2.

These awards will be made based on the judging of the Intel committee. Where the winner is a resident of one of the countries named in the Intel N-DNS Challenge Rules and not a government employee, Intel can award the prize money directly to the winner. Where the winner is a government employee to whom Intel can administer academic grant funding (regardless of whether the winner resides in one of the countries named in the Intel N-DNS Challenge Rules), a research grant in the amount for the appropriate track will be awarded, in the researcher's name, to the university or institution that employs the researcher/government employee. Where the winner does not fall into either of the above categories, Intel will publicly recognize the winner, but the winner is not eligible to receive a prize. Limit of one prize per submission.

Important:

  • Researchers affiliated with universities worldwide, not restricted to the countries listed in the N-DNS Challenge Rules, are also eligible to receive prizes that Intel will administer. This includes, but is not limited to, government employees such as professors, research associates, postdoctoral research fellows, and research scientists employed by a state-funded university or research institution. Where possible, Intel will provide unrestricted gift funding to the awardee's department or group. However, universities in countries under U.S. embargo are not eligible to receive award funding.
  • Other individuals who do not fall into the above categories but wish to enter this Contest may do so. However, they are not eligible for any prize; they will be publicly recognized if they win. See Prizes under the N-DNS Challenge Rules for further details.
  • For avoidance of doubt, Intel has the sole discretion to determine the category of the entries to the N-DNS Award contest.

Solution Writeup

We also ask that challenge participants submit a short (one or two page) write-up that explains the thought process that went into developing their solution. Please include:

  • What worked, what did not work, and why certain strategies were chosen versus others. While audio quality and power are key metrics for evaluating solutions, the overarching goal of this challenge is to drive neuromorphic algorithm innovation, and challenge participant learnings are extremely valuable.
  • A clear table with the test set 1 evaluation metrics for your solution akin to the Table in the Metricsboard.
  • Brief instructions for how to train your model and run test set inference. (E.g., path to a training & inference script in your GitHub repository.)
  • Brief instructions on how to run inference in Lava for your model. (E.g., path to an example Python notebook with a basic Lava process diagram, like baseline_solution/sdnn_delays/lava_inference.ipynb.)

For your writeup, please use a single-column Word document or LaTeX template with 1-inch margins, single spacing, a reasonable font size (11 pt or 12 pt; a default font like Times New Roman), and up to two US-letter or A4 pages. Please submit a PDF. Please upload your writeup PDF to the top level of your GitHub repository with the filename writeup.pdf.

Please note that each team submits a single write-up. If a team is submitting multiple models to the Metricsboard, a single write-up should describe all models from that team. This write-up can be submitted directly to Intel to maintain privacy before the track deadline, but for the write-up to be considered in the holistic evaluation of the solution for the monetary prize, we require that it be shared publicly within 14 days after the test set evaluation deadline for each track. Naturally, however, we encourage participants to share their write-ups publicly at any time, to help inspire others' solutions.

Additionally, we plan to invite a select group of challenge participants to present their solutions at a future Intel Neuromorphic Research Community (INRC) forum, based on their algorithmic innovation and metricsboard results as judged by the Intel committee, to share their learnings and participate in a discussion on developing new and improved neuromorphic computing challenges.

Source code

Challenge participants must provide the source code used in the creation of their solution (model definition, final trained model, training scripts, inference scripts, etc.) under an MIT or BSD-3 license.

Challenge participant source code for Track 1 will be publicly released after the Track 1 winner is announced. Likewise for Track 2.

Install Instructions

pip install -r requirements.txt
python -c "import os; from distutils.sysconfig import get_python_lib; open(get_python_lib() + os.sep + 'ndns.pth', 'a').write(os.getcwd())"

Uninstall Instructions

python -c "import os; from distutils.sysconfig import get_python_lib; pth = get_python_lib() + os.sep + 'ndns.pth'; os.remove(pth) if os.path.exists(pth) else None;"

Dataset

1. Download steps

  • Edit microsoft_dns/download-dns-challenge-4.sh to point the desired download location and downloader
  • bash microsoft_dns/download-dns-challenge-4.sh
  • Extract all the *.tar.bz2 files.

2. Download verification

  • Download the SHA-1 checksums (dns4-datasets-files-sha1.csv.bz2) and extract them.
  • Run the following to verify dataset validity.
    import hashlib

    import pandas as pd

    def sha1_hash(file_name: str) -> str:
        """Return the SHA-1 hex digest of a file."""
        file_hash = hashlib.sha1()
        with open(file_name, 'rb') as f:
            file_hash.update(f.read())
        return file_hash.hexdigest()

    # Each row lists the size, SHA-1 checksum, and path of a dataset file (336494 files).
    sha1sums = pd.read_csv("dns4-datasets-files-sha1.csv.bz2", names=["size", "sha1", "path"])
    file_not_found = []
    for idx in range(len(sha1sums)):
        try:
            if sha1_hash(sha1sums['path'][idx]) != sha1sums['sha1'][idx]:
                print(sha1sums['path'][idx], 'is corrupted')
        except FileNotFoundError:
            file_not_found.append(sha1sums['path'][idx])

    # Log any files missing from the download.
    with open('missing.log', 'wt') as f:
        f.write('\n'.join(file_not_found))

3. Training/Validation data synthesis

  • Training dataset: python noisyspeech_synthesizer.py -root <your dataset folder>
  • Validation dataset: python noisyspeech_synthesizer.py -root <your dataset folder> -is_validation_set true

4. Testing data

  • The testing dataset for Track 1 can be downloaded by executing the download script ./test_set_1/download.sh
    • Note: the test set download uses Git Large File Storage (Git LFS). Make sure you have git-lfs installed (git lfs install).

    • The download script will print out further commands to
      1. verify the dataset files and
      2. extract the audio data. The default extraction folder is data/MicrosoftDNS_4_ICASSP/test_set_1/
  • Testing data with statistics similar to the validation dataset generated by the script above will be made available towards the end of Track 2 as well.

Dataloader

from audio_dataloader import DNSAudio

root = '<your dataset folder>'  # dataset root used by noisyspeech_synthesizer.py

train_set = DNSAudio(root=root + 'training_set/')
validation_set = DNSAudio(root=root + 'validation_set/')
test_set_1 = DNSAudio(root=root + 'test_set_1/')
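
Individual samples can then be indexed directly; a sketch, assuming each item yields the noisy waveform, clean waveform, noise, and file metadata (verify the exact return signature in audio_dataloader.py):

    noisy, clean, noise, metadata = train_set[0]  # assumed return order; check audio_dataloader.py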

Baseline Solution

The baseline solution is described in the Intel N-DNS Challenge paper.

The code for training and running the baseline solution can be found in this directory: baseline_solution/sdnn_delays.

The training script baseline_solution/sdnn_delays/train_sdnn.py is run as follows:

python train_sdnn.py # + optional arguments

Evaluation Metrics

The N-DNS solution will be evaluated using several metrics:

  1. SI-SNR of the solution
  2. SI-SNRi of the solution (improvement against both noisy data and encode+decode processing).
  3. DNSMOS quality of the solution (overall, signal, background)
  4. Latency of the solution (encode & decode latency + data buffer latency + DNS network latency)
  5. Power of the N-DNS network (proxy for Track 1)
  6. Power Delay Product (PDP) of the N-DNS solution (proxy for Track 1)
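
For reference on the last two metrics: the power delay product is, as its name suggests, the product of the power metric and the total solution latency ($\text{PDP} = \text{Power} \times \text{latency}_\text{total}$), so a lower PDP indicates a solution that denoises both quickly and efficiently.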

SI-SNR

This repo provides an SI-SNR module which can be used to evaluate the SI-SNR and SI-SNRi metrics.

$\displaystyle\text{SI-SNR} = 10\ \log_{10}\frac{\Vert s_\text{target}\Vert ^2}{\Vert e_\text{noise}\Vert ^2}$

where
$s = \text{zero-mean target signal}$
$\hat{s} = \text{zero-mean estimate signal}$
$s_\text{target} = \displaystyle\frac{\langle\hat s, s\rangle\, s}{\Vert s \Vert ^2}$
$e_\text{noise} = \hat s - s_\text{target}$
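
A minimal NumPy sketch of the formula above (for illustration only; the repo's snr module is the reference implementation):

    import numpy as np

    def si_snr_ref(target: np.ndarray, estimate: np.ndarray) -> float:
        """Scale-invariant SNR in dB, per the formula above."""
        s = target - target.mean()        # zero-mean target signal
        s_hat = estimate - estimate.mean()  # zero-mean estimate signal
        # Project the estimate onto the target direction
        s_target = (np.dot(s_hat, s) / np.dot(s, s)) * s
        e_noise = s_hat - s_target
        return 10 * np.log10(np.dot(s_target, s_target) / np.dot(e_noise, e_noise))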

  • In Code Evaluation
    from snr import si_snr
    score = si_snr(clean, noisy)

DNSMOS (MOS)

This repo provides a DNSMOS module which is wrapped from the Microsoft DNS Challenge. The resulting array is a DNSMOS score (overall, signal, background). It also supports batched evaluation.

  • In Code Evaluation
    from dnsmos import DNSMOS
    dnsmos = DNSMOS()
    quality = dnsmos(noisy)  # It is in order [ovrl, sig, bak]

Other metrics are specific to the N-DNS solution system. For reference, a detailed walkthrough of the evaluation of the baseline solution is described in baseline_solution/sdnn_delays/evaluate_network.ipynb.

Please refer to the Intel N-DNS Challenge paper for more details about the metrics.

Metricsboard

The evaluation metrics for participant solutions will be listed below and updated at regular intervals.

Submitting to the metricsboard will help you measure the progress of your solution against other participating teams. Early submissions are encouraged.

To submit to the metricsboard, please create a .yml file with contents akin to the table below in the top level of the GitHub repository that you share with Intel so that we can import your metrics and update them on the public metricsboard. Please use example_metricsboard_writeout.py as an example of how to generate a valid .yml file with standard key names. For the Track 1 validation set, name the .yml file metricsboard_track_1_validation.yml. For the Track 1 test set, name the .yml file metricsboard_track_1_test.yml.
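
For illustration, a sketch of writing such a file (the key names and values below are hypothetical, with numbers borrowed from the baseline SDNN row; use example_metricsboard_writeout.py for the standard keys):

    import yaml  # pip install pyyaml

    # Hypothetical entry; see example_metricsboard_writeout.py for the standard key names.
    entry = {
        'team': 'Team xyz',
        'model': 'my_model',
        'date': '2023-08-18',
        'SI-SNR': 12.50,
        'MOS_ovrl': 2.71,
        'latency_total_ms': 32.030,
        'power_proxy_Mops_per_s': 14.54,
        'PDP_proxy_Mops': 0.46,
        'params_x10^3': 525,
        'size_KB': 465,
    }

    with open('metricsboard_track_1_validation.yml', 'w') as f:
        yaml.dump([entry], f, sort_keys=False)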

Track 1 (Validation Set)

| Entry | $\text{SI-SNR}$ (dB) | $\text{SI-SNRi}$ data (dB) | $\text{SI-SNRi}$ enc+dec (dB) | $\text{MOS}$ (ovrl) | $\text{MOS}$ (sig) | $\text{MOS}$ (bak) | $\text{latency}$ enc+dec (ms) | $\text{latency}$ total (ms) | $\text{Power proxy}$ (M-Ops/s) | $\text{PDP proxy}$ (M-Ops) | $\text{Params}$ ($\times 10^3$) | $\text{Size}$ (KB) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Team xyz (mm/dd/yyyy) | | | | | | | | | | | | |
| Clairaudience (ALIF 2023-07-26) | 13.68 | 6.79 | 6.79 | 0.35 | 0.06 | 0.95 | 0.04 | 16.04 | 14.60 | 0.23 | 1,580.00 | 6,320.00 |
| Clairaudience (model_L 2023-07-27) | 14.51 | 7.62 | 7.62 | 0.61 | 0.21 | 1.31 | 0.04 | 8.04 | 74.10 | 0.60 | 1,289.00 | 5,156.00 |
| Clairaudience (model_M 2023-07-26) | 14.50 | 7.61 | 7.61 | 0.62 | 0.22 | 1.31 | 0.04 | 8.04 | 53.60 | 0.43 | 954.00 | 3,816.00 |
| Clairaudience (model_S 2023-07-25) | 13.67 | 6.78 | 6.78 | 0.55 | 0.15 | 1.27 | 0.04 | 8.04 | 29.00 | 0.23 | 512.00 | 2,048.00 |
| Clairaudience (model_XL 2023-07-27) | 14.93 | 8.04 | 8.04 | 0.65 | 0.25 | 1.32 | 0.04 | 8.04 | 55.91 | 0.45 | 1,798.00 | 7,192.00 |
| NECOTIS (PSNN - K3 2023-08-03) | 12.40 | 5.03 | 5.03 | 2.65 | 2.91 | 3.94 | 0.06 | 32.06 | 56.00 | 1.80 | 723.71 | 2,827.00 |
| NECOTIS (PSNN - With binary input spike encoding 2023-07-27) | 13.22 | 5.85 | 5.85 | 2.85 | 3.26 | 3.72 | 0.06 | 32.06 | 88.66 | 2.84 | 1,512.19 | 5,907.00 |
| NECOTIS (PSNN 2023-07-27) | 14.02 | 6.64 | 6.64 | 2.88 | 3.25 | 3.78 | 0.00 | 32.00 | 92.86 | 2.97 | 1,512.19 | 5,907.00 |
| NECOTIS (SRNN-256 2023-07-27) | 11.03 | 3.66 | 3.66 | 2.75 | 3.17 | 3.61 | 0.00 | 32.00 | 0.20 | 0.01 | 459.78 | 1,796.00 |
| NoiCE (Spiking Conv 2023-07-27) | 13.15 | 5.53 | 5.53 | 2.80 | 3.22 | 3.64 | 0.08 | 32.08 | 6,110.78 | 194.87 | 2,100.22 | 8,209.00 |
| Phase 3 Physics (Conv SDNN solution, 21 training epochs 2023-08-04) | 13.11 | 5.52 | 5.52 | 2.79 | 3.18 | 3.71 | 0.12 | 32.12 | 52.50 | 1.69 | 497.00 | 1,900.00 |
| SPANDEX (50% Sparsity SDNN 2023-08-18) | 12.33 | 7.58 | 7.58 | 2.70 | 3.19 | 3.46 | 0.01 | 32.01 | 9.37 | 0.30 | 215.00 | 356.00 |
| SPANDEX (75% Sparsity SDNN 2023-08-18) | 11.90 | 7.58 | 7.58 | 2.69 | 3.25 | 3.30 | 0.01 | 32.01 | 6.04 | 0.19 | 108.00 | 182.00 |
| Siliron (ARG-ABS SDNN solution 2023-08-18) | 9.16 | 1.60 | 1.60 | 2.57 | 3.22 | 3.02 | 0.01 | 8.03 | 1.21 | 0.09 | 33.00 | 77.20 |
| XTeam (CTDNN_LARGE 2023-08-03) | 15.55 | 9.14 | 9.14 | 3.11 | 3.42 | 3.98 | 0.05 | 32.06 | 262.87 | 2.12 | 1,901.82 | 7,607.00 |
| XTeam (CTDNN_LAVADL 2023-08-15) | 14.00 | 7.59 | 7.59 | 3.02 | 3.38 | 3.84 | 0.00 | 32.00 | 61.37 | 0.49 | 904.80 | 3,619.18 |
| XTeam (CTDNN_MIDDLE 2023-08-03) | 14.47 | 8.06 | 8.06 | 2.99 | 3.36 | 3.83 | 0.05 | 32.67 | 224.64 | 1.95 | 1,605.50 | 6,422.00 |
| XTeam (XNN 2023-08-04) | 11.59 | 5.18 | 5.18 | 2.79 | 3.30 | 3.45 | 0.00 | 32.00 | 82.08 | 0.66 | 3,676.17 | 14,704.00 |
| jiaxingdns (spikingdns 2023-08-18) | 14.11 | 6.49 | 6.49 | 2.77 | 3.16 | 3.65 | 0.01 | 8.01 | 793.00 | | | |
| Microsoft NsNet2 (02/20/2023) | 11.89 | 4.26 | 4.26 | 2.95 | 3.27 | 3.94 | 0.024 | 20.024 | 136.13 | 2.72 | 2,681 | 10,500 |
| Intel proprietary DNS (02/28/2023) | 12.71 | 5.09 | 5.09 | 3.09 | 3.35 | 4.08 | 0.030 | 32.030 | - | - | 1,901 | 3,802 |
| Baseline SDNN solution (02/20/2023) | 12.50 | 4.88 | 4.88 | 2.71 | 3.21 | 3.46 | 0.030 | 32.030 | 14.54 | 0.46 | 525 | 465 |
| Validation set | 7.62 | - | - | 2.45 | 3.19 | 2.72 | - | - | - | - | - | - |

Track 2

| Entry | $\text{SI-SNR}$ (dB) | $\text{SI-SNRi}$ data (dB) | $\text{SI-SNRi}$ enc+dec (dB) | $\text{MOS}$ (ovrl) | $\text{MOS}$ (sig) | $\text{MOS}$ (bak) | $\text{latency}$ enc+dec (ms) | $\text{latency}$ total (ms) | $\text{Power}$ (W) | $\text{PDP}$ (Ws) | $\text{Cores}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Team xyz (mm/dd/yyyy) | | | | | | | | | | | |

Note:

  • An Intel committee will determine the challenge winner using a holistic evaluation (not one particular metric). We encourage challenge participants to strive for top performance in all metrics.
  • Metrics shall be taken as submitted by the participants. There will be a verification process during the contest winner evaluation.

For any additional clarifications, please refer to the challenge FAQ or Rules, ask questions in the discussions, or email us at [email protected].
