
Fuzzing Embedded Systems using Hardware Breakpoints

GDBFuzz: Debugger-Driven Fuzzing

This is the companion code for the paper 'Fuzzing Embedded Systems using Debugger Interfaces'. A preprint of the paper is available at https://publications.cispa.saarland/3950/. The code allows users to reproduce and extend the results reported in the paper. Please cite the paper when reporting, reproducing, or extending the results.

Folder structure

.
    ├── benchmark               # Scripts to build Google's fuzzer test suite and run experiments
    ├── dependencies            # Contains a Makefile to install dependencies for GDBFuzz
    ├── evaluation              # Raw experiment data, presented in the paper
    ├── example_firmware        # Embedded example applications, used for the evaluation
    ├── example_programs        # Contains a compiled example program and configs to test GDBFuzz
    ├── src                     # Contains the implementation of GDBFuzz
    ├── Dockerfile              # For creating a Docker image with all GDBFuzz dependencies installed
    ├── LICENSE                 # License
    ├── Makefile                # Makefile for creating the Docker image or installing GDBFuzz locally
    └── README.md               # This README file

Purpose of the project

The idea of GDBFuzz is to leverage the hardware breakpoints of microcontrollers as feedback for coverage-guided fuzzing. To this end, GDB is used as a generic interface to enable broad applicability. For binary analysis of the firmware, Ghidra is used. The code contains a benchmark setup for evaluating the method. Additionally, example firmware files are included.

Getting Started

GDBFuzz enables coverage-guided fuzzing for embedded systems but - for evaluation purposes - can also fuzz arbitrary user applications. For fuzzing on microcontrollers, we recommend a local installation of GDBFuzz so that fuzz data can be sent to the device under test reliably.

Local installation

GDBFuzz has been tested on Ubuntu 20.04 LTS and Raspberry Pi OS (32-bit). Prerequisites are Java and Python 3. First, create a new virtual environment and install all dependencies:

virtualenv .venv
source .venv/bin/activate
make
chmod a+x ./src/GDBFuzz/main.py

Run locally on an example program

GDBFuzz reads settings from a config file with the following keys:

[SUT]
# Path to the binary file of the SUT.
# This can, for example, be an .elf file or a .bin file.
binary_file_path = <path>

# Address of the root node of the CFG.
# Breakpoints are placed at nodes of this CFG.
# e.g. 'LLVMFuzzerTestOneInput' or 'main'
entrypoint = <entrypoint>

# Number of inputs that must be executed without a breakpoint hit until
# breakpoints are rotated.
until_rotate_breakpoints = <number>


# Maximum number of breakpoints that can be placed at any given time.
max_breakpoints = <number>

# Blacklist functions that shall be ignored.
# ignore_functions is a space separated list of function names e.g. 'malloc free'.
ignore_functions = <space separated list>

# One of {Hardware, QEMU, SUTRunsOnHost}
# Hardware: An external component starts a gdb server and GDBFuzz can connect to this gdb server.
# QEMU: GDBFuzz starts QEMU. QEMU emulates binary_file_path and starts gdbserver.
# SUTRunsOnHost: GDBFuzz starts the target program within GDB.
target_mode = <mode>

# Set this to False if you want to start ghidra, analyze the SUT,
# and start the ghidra bridge server manually.
start_ghidra = True


# Space separated list of addresses where software breakpoints (for error
# handling code) are set. Execution of those is considered a crash.
# Example: software_breakpoint_addresses = 0x123 0x432
software_breakpoint_addresses = 


# Whether all triggered software breakpoints are considered crashes
consider_sw_breakpoint_as_error = False

[SUTConnection]
# The class 'SUT_connection_class' in file 'SUT_connection_path' implements
# how inputs are sent to the SUT.
# Inputs can, for example, be sent over Wi-Fi, Serial, Bluetooth, ...
# This class must inherit from ./connections/SUTConnection.py.
# See ./connections/SUTConnection.py for more information.
SUT_connection_file = FIFOConnection.py

[GDB]
path_to_gdb = gdb-multiarch
# Written as address:port
gdb_server_address = localhost:4242

[Fuzzer]
# In Bytes
maximum_input_length = 100000
# In seconds
single_run_timeout = 20
# In seconds
total_runtime = 3600

# Optional
# Path to a directory where each file contains one seed. If you don't want to
# use seeds, leave the value empty.
seeds_directory = 

[BreakpointStrategy]
# Strategies to choose basic blocks are located in 
# 'src/GDBFuzz/breakpoint_strategies/'
# For the paper we use the following strategies
# 'RandomBasicBlockStrategy.py' - Randomly choosing unreached basic blocks
# 'RandomBasicBlockNoDomStrategy.py' - Like previous, but doesn't use dominance relations to derive transitively reached nodes.
# 'RandomBasicBlockNoCorpusStrategy.py' - Like first, but prevents growing the input corpus and therefore behaves like blackbox fuzzing with coverage measurement.
# 'BlackboxStrategy.py', - Doesn't set any breakpoints
breakpoint_strategy_file = RandomBasicBlockStrategy.py

[Dependencies]
path_to_qemu = dependencies/qemu/build/x86_64-linux-user/qemu-x86_64
path_to_ghidra = dependencies/ghidra


[LogsAndVisualizations]
# One of {DEBUG, INFO, WARNING, ERROR, CRITICAL}
loglevel = INFO

# Path to a directory where output files (e.g. graphs, logfiles) are stored.
output_directory = ./output

# If set to True, an MQTT client sends UI elements (e.g. graphs)
enable_UI = False
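
As a quick sanity check, a config like the one above can be loaded with Python's standard configparser. The sketch below is a minimal illustration (not part of GDBFuzz itself) that reads such a file and validates the target_mode key against the three modes listed above.

```python
import configparser

# The three target modes documented in the [SUT] section above.
VALID_TARGET_MODES = {"Hardware", "QEMU", "SUTRunsOnHost"}

def load_gdbfuzz_config(path: str) -> configparser.ConfigParser:
    """Read a GDBFuzz-style .cfg file and perform a few basic checks."""
    config = configparser.ConfigParser()
    if not config.read(path):
        raise FileNotFoundError(path)
    mode = config["SUT"]["target_mode"]
    if mode not in VALID_TARGET_MODES:
        raise ValueError(f"unknown target_mode: {mode}")
    return config
```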

An example config file is located in ./example_programs/, together with an example program that was compiled using our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common/. Start fuzzing for one hour with the following commands:

chmod a+x ./example_programs/json-2017-02-12
./src/GDBFuzz/main.py --config ./example_programs/fuzz_json.cfg

We first see output from Ghidra analyzing the binary executable, and subsequently messages whenever breakpoints are relocated or hit.

Fuzzing Output

Depending on the output_directory specified in the config file, there should now be a folder trial-0 with the following structure:

.
    ├── corpus            # A folder that contains the input corpus.
    ├── crashes           # A folder that contains crashing inputs - if any.
    ├── cfg               # The control flow graph as adjacency list.
    ├── fuzzer_stats      # Statistics of the fuzzing campaign.
    ├── plot_data         # Table showing at which relative time in the fuzzing campaign which basic block was reached.
    └── reverse_cfg       # The reverse control flow graph.

Using Ghidra in GUI mode

By setting start_ghidra = False in the config file, GDBFuzz connects to a Ghidra instance running in GUI mode. In this case, the ghidra_bridge plugin needs to be started manually from Ghidra's script manager. During fuzzing, reached program blocks are highlighted in green.

GDBFuzz on Linux user programs

For fuzzing Linux user applications, GDBFuzz leverages the standard LLVMFuzzerTestOneInput entrypoint that is used by almost all fuzzers, such as AFL, AFL++, and libFuzzer. benchmark/benchSUTs/GDBFuzz_wrapper/common contains a wrapper that compiles any compliant fuzz harness into a standalone program that fetches input via a named pipe at /tmp/fromGDBFuzz. This makes it possible to simulate an embedded device that consumes data via a well-defined input interface, and therefore to run GDBFuzz on any application. For convenience, we created a script in benchmark/benchSUTs that compiles all programs from our evaluation with our wrapper, as explained later.

NOTE: GDBFuzz is not intended to fuzz Linux user applications; use AFL++ or another fuzzer for that purpose. The wrapper exists only for evaluation purposes, to enable running benchmarks and comparisons at scale!
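
To illustrate the named-pipe interface, the sketch below pushes one input into the wrapper's FIFO. It is purely illustrative and assumes the wrapper reads raw bytes from the pipe; the exact framing is defined by GDBFuzz's connection classes.

```python
import os

def send_fuzz_input(data: bytes, fifo_path: str = "/tmp/fromGDBFuzz") -> None:
    """Write one fuzz input to the wrapper's named pipe.

    open() blocks until the wrapper has opened the read end of the FIFO.
    """
    with open(fifo_path, "wb") as fifo:
        fifo.write(data)
```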

Install and run in a Docker container

The general effectiveness of our approach is shown in a large-scale benchmark deployed as Docker containers. Build the Docker image with:

make dockerimage

To run the above experiment in the Docker container (for one hour, as specified in the config file), map the example_programs and output folders as volumes and start GDBFuzz as follows:

chmod a+x ./example_programs/json-2017-02-12
docker run -it --env CONFIG_FILE=/example_programs/fuzz_json_docker_qemu.cfg -v $(pwd)/example_programs:/example_programs -v $(pwd)/output:/output gdbfuzz:1.0

An output folder should appear in the current working directory with the structure explained above.

Detailed Instructions

Our evaluation is split into two parts.

  1. GDBFuzz in its intended setup, directly on the hardware.
  2. GDBFuzz in an emulated environment to allow independent analysis and comparison of the results.

GDBFuzz can work with any GDB server and therefore most debug probes for microcontrollers.

GDBFuzz vs. Blackbox (RQ1)

Regarding RQ1 from the paper, we execute GDBFuzz on different microcontrollers with different firmware located in example_firmware. For each experiment, we run GDBFuzz with the RandomBasicBlock and with the RandomBasicBlockNoCorpus strategy. The latter behaves like fuzzing without feedback, but we can still measure the achieved coverage. To answer RQ1, we compare the coverage achieved by the RandomBasicBlock and the RandomBasicBlockNoCorpus strategies. The respective config files are in the corresponding subfolders, and we now explain how to set up fuzzing on the four development boards.

GDBFuzz on STM32 B-L4S5I-IOT01A board

GDBFuzz requires access to a GDB Server. In this case the B-L4S5I-IOT01A and its on-board debugger are used. This on-board debugger sets up a GDB server via the 'st-util' program, and enables access to this GDB server via localhost:4242.

  • Install the STLINK driver link
  • Connect MCU board and PC via USB (on MCU board, connect to the USB connector that is labeled as 'USB STLINK')
sudo apt-get install stlink-tools gdb-multiarch

Build and flash a firmware for the STM32 B-L4S5I-IOT01A, for example the arduinojson project.

Prerequisite: Install platformio (pio)

cd ./example_firmware/stm32_disco_arduinojson/
pio run --target upload

For your information: PlatformIO stores an .elf file of the SUT at ./example_firmware/stm32_disco_arduinojson/.pio/build/disco_l4s5i_iot01a/firmware.elf. This .elf file is also used later in the user configuration for Ghidra.

Start a new terminal, and run the following to start a GDB server:

st-util

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller, which forwards this data via serial to the SUT. In our case, /dev/ttyACM0 is the USB device of the microcontroller board. If your system assigned another device to the board, change /dev/ttyACM0 in the config file to your device.

./src/GDBFuzz/main.py --config ./example_firmware/stm32_disco_arduinojson/fuzz_serial_json.cfg

Fuzzer statistics and logs are in the ./output/... directory.
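
For reference, sending a test input to the board over the serial device can be sketched in plain Python as below. This is a simplified illustration; GDBFuzz's own SUTConnection classes handle the real transfer, and a production setup would also configure the baud rate (e.g. via termios or the pyserial package).

```python
import os

def send_over_serial(device: str, data: bytes) -> None:
    """Write raw bytes to a serial device node such as /dev/ttyACM0.

    O_NOCTTY prevents the device from becoming the controlling terminal.
    """
    fd = os.open(device, os.O_WRONLY | os.O_NOCTTY)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```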

GDBFuzz on the CY8CKIT-062-WiFi-BT board

Install pyocd:

pip install --upgrade pip 'mbed-ls>=1.7.1' 'pyocd>=0.16'

Make sure that 'KitProg v3' is on the device and put the board into 'Arm DAPLink' mode by pressing the appropriate button. Start the GDB server:

pyocd gdbserver --persist

Flash a firmware and start fuzzing, e.g. with:

gdb-multiarch
    target remote :3333
    load ./example_firmware/CY8CKIT_json/mtb-example-psoc6-uart-transmit-receive.elf
    monitor reset
./src/GDBFuzz/main.py --config ./example_firmware/CY8CKIT_json/fuzz_serial_json.cfg

GDBFuzz on ESP32 and Segger J-Link

Build and flash a firmware for the ESP32, for instance the arduinojson example with platformio.

cd ./example_firmware/esp32_arduinojson/
pio run --target upload

Add the following line to the OpenOCD config file for the J-Link debugger (jlink.cfg):

adapter speed 10000

Start a new terminal, and run the following to start the GDB Server:

get_idf
openocd -f interface/jlink.cfg -f target/esp32.cfg -c "telnet_port 7777" -c "gdb_port 8888"

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller, which forwards this data via serial to the SUT. In our case, /dev/ttyUSB0 is the USB device of the microcontroller board. If your system assigned another device to the board, change /dev/ttyUSB0 in the config file to your device.

./src/GDBFuzz/main.py --config ./example_firmware/esp32_arduinojson/fuzz_serial.cfg

Fuzzer statistics and logs are in the ./output/... directory.

GDBFuzz on MSP430F5529LP

Install TI MSP430 GCC from https://www.ti.com/tool/MSP430-GCC-OPENSOURCE

Start the GDB server:

./gdb_agent_console libmsp430.so

or (more stable) build mspdebug from https://github.com/dlbeer/mspdebug/ and use:

until mspdebug --fet-skip-close --force-reset tilib "opt gdb_loop True" gdb ; do sleep 1 ; done

Ghidra fails to analyze binaries for the TI MSP430 controller out of the box. To fix this, we import the file in the Ghidra GUI, choose MSP430X as the architecture, and skip the auto-analysis. Next, we open the 'Symbol Table', sort the symbols by name, and delete all symbols with names like $C$L*. Now the auto-analysis can be executed. After the analysis, start the Ghidra bridge from the Ghidra GUI manually and then start GDBFuzz:

./src/GDBFuzz/main.py --config ./example_firmware/msp430_arduinojson/fuzz_serial.cfg

USB Fuzzing

To access USB devices as a non-root user with pyusb, we add appropriate rules to udev. Paste the following line into /etc/udev/rules.d/50-myusb.rules:

SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", GROUP="usbusers", MODE="666"

Reload udev:

sudo udevadm control --reload
sudo udevadm trigger
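
After reloading udev, you can check that the rule took effect by inspecting the permissions of the device node. The helper below is a small illustration (the vendor/product IDs above are placeholders for your device, and the device path is an example):

```python
import os
import stat

def node_accessible_by_others(path: str) -> bool:
    """True if the device node is readable and writable by non-root users,
    as granted by MODE="666" in the udev rule above."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IROTH) and bool(mode & stat.S_IWOTH)
```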

Compare against Fuzzware (RQ2)

In RQ2 from the paper, we compare GDBFuzz against the emulation-based approach Fuzzware. First, we execute GDBFuzz and Fuzzware as described previously on the shipped firmware files. For each GDBFuzz experiment, we create a file with the valid basic blocks from the control flow graph files as follows:

cut -d " " -f1 ./cfg > valid_bbs.txt

Now we can replay the coverage against the Fuzzware results:

fuzzware genstats --valid-bb-file valid_bbs.txt
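
The cut invocation above can equally be done in Python. This sketch extracts the first space-separated field (the basic-block address) from each non-empty line of the cfg file:

```python
def extract_valid_bbs(cfg_path: str, out_path: str) -> int:
    """Equivalent of `cut -d " " -f1 ./cfg > valid_bbs.txt`.

    Keeps only the first space-separated field of every non-empty line
    and returns the number of basic blocks written.
    """
    count = 0
    with open(cfg_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.strip():
                dst.write(line.split(" ")[0].rstrip() + "\n")
                count += 1
    return count
```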

Finding Bugs (RQ3)

When crashing or hanging inputs are found, they are stored in the crashes folder. During the evaluation, we found the following three bugs:

  1. An infinite loop in the STM32 USB device stack, caused by counting a uint8_t index variable up to an attacker-controllable uint32_t variable within a for loop.
  2. A buffer overflow in the Cypress JSON parser, caused by missing length checks on a fixed size internal buffer.
  3. A null pointer dereference in the Cypress JSON parser, caused by missing validation checks.
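
Bug 1 can be reproduced conceptually in a few lines: a uint8_t loop counter wraps to 0 at 256, so comparing it against an attacker-controllable 32-bit bound above 255 yields an infinite loop. The following sketch models the wraparound in Python and is purely illustrative, not the actual STM32 code:

```python
def loop_terminates(bound: int, max_iters: int = 1000) -> bool:
    """Models `for (uint8_t i = 0; i < bound; i++) { ... }` from bug 1.

    The 8-bit counter wraps to 0 after 255, so it can never reach an
    attacker-controlled bound larger than 255: the loop runs forever.
    """
    i = 0
    for _ in range(max_iters):
        if i >= bound:
            return True
        i = (i + 1) & 0xFF  # emulate uint8_t wraparound
    return False
```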

GDBFuzz on a Raspberry Pi 4a (8 GB)

GDBFuzz can also run on a Raspberry Pi host with slight modifications:

  1. Ghidra must be modified so that it runs on a 32-bit OS. In the file ./dependencies/ghidra/support/launch.sh:125, the JAVA_HOME variable must therefore be hardcoded, e.g. to JAVA_HOME="/usr/lib/jvm/default-java".

  2. STLink must be at version >= 1.7 to work properly -> build it from source.

GDBFuzz on other boards

To fuzz software on other boards, GDBFuzz requires:

  1. A microcontroller with hardware breakpoints and a GDB-compliant debug probe.
  2. The firmware file.
  3. A running GDB server and a suitable GDB application.
  4. An entry point where fuzzing should start, e.g. a parser function or an address.
  5. An input interface (see src/GDBFuzz/connections) that triggers execution of the code at the entry point, e.g. a serial connection.

All these properties need to be specified in the config file.

Run the full Benchmark (RQ4 - 8)

For RQs 4 - 8, we run a large-scale benchmark. First, build the Docker image as described previously and compile the applications from Google's Fuzzer Test Suite with our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common:

cd ./benchmark/benchSUTs
chmod a+x setup_benchmark_SUTs.py
make dockerbenchmarkimage

Next, adapt the benchmark settings in benchmark/scripts/benchmark.py and benchmark/scripts/benchmark_aflpp.py to your needs (especially number_of_cores, trials, and seconds_per_trial) and start the benchmark with:

cd ./benchmark/scripts
./benchmark.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
./benchmark_aflpp.py $(pwd)/../benchSUTs/SUTs/ SUTs.json

A folder appears in ./benchmark/scripts that contains plot files (coverage over time), fuzzer statistics files, and control flow graph files for each experiment, as in evaluation/fuzzer_test_suite_qemu_runs.

[Optional] Install Visualization and Visualization Example

GDBFuzz has an optional feature that plots the control flow graph of covered nodes. It is disabled by default. You can enable it by following the instructions in this section and setting enable_UI to True in the user configuration.

On the host:

Install

sudo apt-get install graphviz

Install a recent version of Node.js, for example via Option 2 from here (use Option 2, not Option 1). This installs both node and npm. For reference, our version numbers are as follows (but newer versions should work too):

➜ node --version
v16.9.1
➜ npm --version
7.21.1

Install web UI dependencies:

cd ./src/webui
npm install

Install the mosquitto MQTT broker, e.g. see here.

Update the mosquitto broker config: Replace the file /etc/mosquitto/conf.d/mosquitto.conf with the following content:

listener 1883
allow_anonymous true

listener 9001
protocol websockets

Restart the mosquitto broker:

sudo service mosquitto restart

Check that the mosquitto broker is running:

sudo service mosquitto status

The output should include the text 'Active: active (running)'.

Start the web UI:

cd ./src/webui
npm start

Your web browser should open automatically on 'http://localhost:3000/'.

Start GDBFuzz with a user config file in which enable_UI is set to True. You can use the Docker container and the arduinojson SUT from above.

Nodes shown in blue are covered; white nodes are not. We only show uncovered nodes whose parent is covered (drawing the complete control flow graph would take too much time if the graph is large).

License

GDBFuzz is open-sourced under the AGPL-3.0 license. See the LICENSE file for details.

For a list of other open source components included in GDBFuzz, see the file 3rd-party-licenses.txt.
