rpg_trajectory_evaluation

Toolbox for quantitative trajectory evaluation of VO/VIO.

This repository implements commonly used trajectory evaluation methods for visual(-inertial) odometry. Specifically, it includes

  • Different trajectory alignment methods (rigid-body, similarity and yaw-only rotation)
  • Commonly used error metrics: Absolute Trajectory Error (ATE) and Relative/Odometry Error (RE)

The relative error is implemented in the same way as in the KITTI benchmark, since that is the most widely used variant.
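
For reference, the absolute trajectory error on positions is commonly computed as the RMSE of the position differences after alignment; this is the standard formulation, stated here for convenience rather than quoted from the code:

\mathrm{ATE}_{\mathrm{pos}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert \mathbf{p}_i^{\mathrm{gt}} - \mathbf{T}\,\hat{\mathbf{p}}_i \right\rVert^2}

where \hat{\mathbf{p}}_i are the estimated positions, \mathbf{p}_i^{\mathrm{gt}} the groundtruth positions, and \mathbf{T} the alignment transformation.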

Since trajectory evaluation involves many details, the toolbox is designed for ease of use. It can analyze a single trajectory estimate, as well as compare different algorithms on many datasets (e.g., as in this paper and in the IROS 2019 VIO competition) with one command. The user only needs to provide the groundtruths and estimates in the desired format and specify the trajectory alignment method. The toolbox generates (almost) paper-ready plots and tables. In addition, the evaluation can be easily customized.

If you use this code in an academic context, please cite the following paper:

Zichao Zhang, Davide Scaramuzza: A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry, IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), 2018. PDF

@InProceedings{Zhang18iros,
  author = {Zhang, Zichao and Scaramuzza, Davide},
  title = {A Tutorial on Quantitative Trajectory Evaluation for Visual(-Inertial) Odometry},
  booktitle = {IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS)},
  year = {2018}
}

  1. Install
  2. Prepare the Data
  3. Run the Evaluation
  4. Utilities
  5. Customization: Trajectory class
  6. Credits

Install

The package is written in Python and tested on Ubuntu 16.04 and 18.04. Currently only Python 2 is supported. The package can be used as a ROS package as well as a standalone tool. To use it as a ROS package, simply clone it into your workspace. It depends only on catkin_simple to build.

Dependencies: you will need to install the following:

Prepare the Data

Each trajectory estimate (e.g., output of a visual-inertial odometry algorithm) to evaluate is organized as a self-contained folder. Each folder needs to contain at least two text files specifying the groundtruth and estimated poses with timestamps.

  • stamped_groundtruth.txt: groundtruth poses with timestamps
  • stamped_traj_estimate.txt: estimated poses with timestamps
  • (optional) eval_cfg.yaml: specify evaluation parameters
  • (optional) start_end_time.yaml: specify the start and end time (in seconds) for analysis.

For analyzing results from N runs, the estimate files should have suffixes 0 to N-1, as shown in the example below. You can see the folders under results for examples. These files contain all the essential information needed to reproduce quantitative trajectory evaluation results with the toolbox.
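
For example, a result folder for two runs might be organized as follows (the folder name is illustrative):

laptop_vio_mono_MH_05
├── eval_cfg.yaml
├── stamped_groundtruth.txt
├── stamped_traj_estimate0.txt
└── stamped_traj_estimate1.txt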

Poses

The groundtruth (stamped_groundtruth.txt) and estimated poses (stamped_traj_estimate.txt) are specified in the following format

# timestamp tx ty tz qx qy qz qw
1.403636580013555527e+09 1.258278699999999979e-02 -1.561510199999999963e-03 -4.015300900000000339e-02 -5.131151899999999988e-02 -8.092916900000000080e-01 8.562779200000000248e-04 5.851609599999999523e-01
......

Note that the file is space-separated, and the quaternion has the w component last. The timestamps are in seconds and are used to establish temporal correspondences.
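
As a quick sanity check, such a file can be read with numpy; a minimal sketch, assuming the file is complete and well-formed:

import numpy as np

# lines starting with '#' are skipped by default
data = np.loadtxt('stamped_traj_estimate.txt')
stamps = data[:, 0]         # timestamps in seconds
positions = data[:, 1:4]    # tx, ty, tz
quaternions = data[:, 4:8]  # qx, qy, qz, qw (w component last)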

There are some scripts under scripts/dataset_tools to help you convert your data format (EuRoC style, ROS bag) to the above format. See the corresponding section below for details.

Evaluation parameters

Currently eval_cfg.yaml specifies two parameters for trajectory alignment (used in absolute errors):

  • align_type:
    • sim3: a similarity transformation (for vision-only monocular case)
    • se3: a rigid body transformation (for vision-only stereo case)
    • posyaw: a translation plus a rotation around gravity (for visual-inertial case)
    • none: do not align the trajectory
  • align_num_frames: the number of poses (starting from the beginning) that will be used in the trajectory alignment. -1 means all poses will be used.

If this file does not exist, trajectory alignment will be done using sim3 and all the poses.
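
For example, an eval_cfg.yaml for a visual-inertial estimate, aligning with yaw-only rotation and using all poses, could read (a minimal sketch based on the two parameters above):

align_type: posyaw
align_num_frames: -1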

Start and end times

start_end_time.yaml can specify the following (according to groundtruth time):

  • start_time_sec: only poses after this time will be used for analysis
  • end_time_sec: only poses before this time will be used for analysis

If this file does not exist, the analysis will be done for all the poses in stamped_traj_estimate.txt.
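
For example (the values below are illustrative):

start_time_sec: 10.0
end_time_sec: 100.0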

Run the Evaluation

We can run the evaluation on a single estimate result or for multiple algorithms and datasets.

Single trajectory estimate

As a ROS package, run

rosrun rpg_trajectory_evaluation analyze_trajectory_single.py <result_folder>

or as a standalone package, run

python2 scripts/analyze_trajectory_single.py <result_folder>

<result_folder> should contain the groundtruth, trajectory estimate and optionally the evaluation configuration as mentioned above.

Output

After the evaluation is done, you will find two folders under <result_folder>:

  • saved_results/traj_est: text files that contain the statistics of the different errors
    • absolute_err_statistics_<align_type>_<align_frames>.yaml: the statistics of the absolute error using the specified alignment.
    • relative_error_statistics_<len>.yaml: the statistics of the relative error calculated using the sub-trajectories of length <len>.
    • cached_rel_err.pickle: since the relative error is time-consuming to compute, we cache the relative errors for different sub-trajectory lengths so that they can be reused next time.
  • plots: plots of absolute errors, relative (odometry) errors and the trajectories.

For multiple trials, the result for trial n will have the corresponding suffix, and the statistics of multiple trials will be summarized in files with the name mt_*.
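
The statistics files are plain YAML and thus easy to consume in your own scripts; a minimal sketch (the file name is illustrative, here for sim3 alignment using all frames):

import yaml

with open('saved_results/traj_est/absolute_err_statistics_sim3_-1.yaml') as f:
    stats = yaml.safe_load(f)
print(stats)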

As an example, after executing:

python2 scripts/analyze_trajectory_single.py results/euroc_mono_stereo/laptop/vio_mono/laptop_vio_mono_MH_05

you will find the following in plots:

(Example plots: top-view trajectory, absolute error, and relative error.)

Parameters

  • --recalculate_errors: remove the error cache file mentioned above and re-calculate everything. Default: False.
  • --png: save plots as png instead of pdf. Default: False.
  • --mul_trials: analyze n runs. For n > 1, the estimate files should end with a number suffix (e.g., stamped_traj_estimate0.txt). Default: None.

Advanced: Different estimation types

Sometimes a SLAM algorithm outputs different types of trajectories, such as real-time poses and optimized keyframe poses (e.g., from a pose graph or bundle adjustment). By specifying the estimation type (at the end of the command line), you can ask the script to analyze different files, for example

  • --est_type traj_est: analyze stamped_traj_estimate.txt
  • --est_type pose_graph: analyze stamped_pose_graph_estimate.txt
  • --est_type traj_est pose_graph: analyze both. In this case you can find the results in corresponding sub-directories in saved_results and plots.

The mapping from the est_type to file names (i.e., stamped_*.txt) is defined in scripts/fn_constants.py. This is also used for analyzing multiple trajectories. You can find an example in results/euroc_vislam_mono for comparing real-time poses and bundle adjustment estimates.
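
A sketch of the kind of mapping defined there (the dictionary name is assumed; the file-name stems follow from the options above; see scripts/fn_constants.py for the actual definition):

# hypothetical illustration of the est_type -> file name mapping
est_type_to_fn = {
    'traj_est': 'stamped_traj_estimate',
    'pose_graph': 'stamped_pose_graph_estimate',
}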

Multiple trajectory estimates

Similar to the case of single trajectory evaluation, for ROS, run

rosrun rpg_trajectory_evaluation analyze_trajectories.py \
  euroc_vislam_mono.yaml --output_dir=./results/euroc_vislam_mono --results_dir=./results/euroc_vislam_mono --platform laptop --odometry_error_per_dataset --plot_trajectories --rmse_table --rmse_boxplot --mul_trials=10

otherwise, run

python2 scripts/analyze_trajectories.py \
  euroc_vislam_mono.yaml --output_dir=./results/euroc_vislam_mono --results_dir=./results/euroc_vislam_mono --platform laptop --odometry_error_per_dataset --plot_trajectories --rmse_table --rmse_boxplot --mul_trials=10

These commands will look for the <platform> folder under results_dir and analyze the algorithm and dataset combinations specified in the configuration file, as described below.

Datasets organization

The datasets under results are organized as

<platform>
├── <alg1>
│   ├── <platform>_<alg1>_<dataset1>
│   ├── <platform>_<alg1>_<dataset2>
│   └── ......
├── <alg2>
│   ├── <platform>_<alg2>_<dataset1>
│   ├── <platform>_<alg2>_<dataset2>
│   └── ......
......

Each sub-folder has the same format as described above. You need to specify the algorithms and datasets to analyze for the script analyze_trajectories.py. We use a configuration file under scripts/analyze_trajectories_config to specify the details. For example, in euroc_vio_mono_stereo.yaml

Datasets:
  MH_01:        ---> dataset name
    label: MH01 ---> plot label for the dataset
  MH_03:
    label: MH03
  MH_05:
    label: MH05
  V2_01:
    label: V201
  V2_02:
    label: V202
  V2_03:
    label: V203
Algorithms:
  vio_mono:         ---> algorithm name
    fn: traj_est    ---> estimation type to find the correct file name
    label: vio_mono ---> plot label for the algorithm
  vio_stereo: 
    fn: traj_est
    label: vio_stereo
RelDistances: []   ---> used to specify the sub-trajectory lengths in the relative error, see below.
RelDistancePercentages: []

will analyze the following folders

├── vio_mono
│   ├── laptop_vio_mono_MH_01
│   ├── laptop_vio_mono_MH_03
│   ├── laptop_vio_mono_MH_05
│   ├── laptop_vio_mono_V2_01
│   ├── laptop_vio_mono_V2_02
│   └── laptop_vio_mono_V2_03
└── vio_stereo
    ├── laptop_vio_stereo_MH_01
    ├── laptop_vio_stereo_MH_03
    ├── laptop_vio_stereo_MH_05
    ├── laptop_vio_stereo_V2_01
    ├── laptop_vio_stereo_V2_02
    └── laptop_vio_stereo_V2_03

Specifying sub-trajectory lengths for relative pose errors

The relative pose error is calculated over sub-trajectories of different lengths, which can be specified by the following fields in the configuration file:

  • RelDistances: a list of lengths that will be used for all the datasets (see the example after this list).
  • RelDistancePercentages: the lengths will be selected independently for each dataset, as certain percentages of its total length. This is overridden by RelDistances. If neither is specified, the default percentages (see src/trajectory.py) will be used.
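
For example, to evaluate the relative error over fixed sub-trajectory lengths (the values are illustrative, in the unit of the trajectory, typically meters):

RelDistances: [10.0, 40.0, 90.0]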

Output

The evaluation process will generate the saved_results folder inside each result folder, the same as the evaluation of a single trajectory estimate. In addition, it will generate plots and text files under the output folder comparing the different algorithms:

  • Under the <platform>_<dataset>_results folder, you can find the trajectory top/side views and the boxplots of the relative pose errors on this dataset.
  • all_translation/rotation_rmse.pdf: boxplots of the RMSE over multiple runs for all datasets.
  • <platform>_translation_rmse<algorithm_dataset_string>.txt: LaTeX table summarizing the RMSE of all configurations. <algorithm_dataset_string> is an identifier generated from the algorithms and datasets evaluated. The values in the table are formatted as mean, median (min, max) over the errors of multiple trials.

The tables can be readily used in LaTeX files.

Several example plots (from analyze_trajectories_config/euroc_vislam_mono.yaml) comparing the performance of different algorithms:

(Example plots: overall top-view trajectory and overall relative error on MH_03.)

Parameters

Configuration:

  • config: the configuration file under scripts/analyze_trajectories_config to use.

Paths:

  • --results_dir: the folder where the <platform> folder will be found. Default: the results folder in the toolbox folder.
  • --output_dir: the folder for all the plots and text files. Default: the results folder in the toolbox folder.
  • --platform: the results folder to look for under <results_dir>. Default: laptop.

Analysis options:

  • --mul_trials: how many trials to analyze. Default: None. If some algorithm-dataset configuration has fewer runs, only the available ones will be considered.

  • --odometry_error_per_dataset: whether to compute the relative error for each dataset. Default: False.

  • --overall_odometry_error: whether to compute and compare the overall odometry error on all datasets for different algorithms. Default: False.

  • --rmse_table: whether to generate the table of translation RMSE (absolute error). Default: False.

    • --rmse_table_median_only: by default, the --rmse_table option saves the median/mean/min/max of multiple runs. Use this option to save only the median.
    • --rmse_boxplot: whether to plot a boxplot of the RMSE for different datasets (only valid for analyzing results from multiple trials). Default: False.
  • --plot_trajectories: whether to plot trajectories. Default: False. By default, many plots are generated, and some of them can be turned off by the following options.

    • --no_plot_side: do not plot side views
    • --no_plot_aligned: do not plot the alignment connection between the groundtruth and estimate
    • --no_plot_traj_per_alg: do not generate plots for each algorithm

Misc:

  • --recalculate_errors: whether to clear the cache and recalculate everything. Default: False.
  • --png: save plots as png instead of pdf. Default: False.
  • --dpi: allow saving at a higher dpi. Default: 300.
  • --no_sort_names: do not sort the names of datasets and algorithms (using the order in the configuration file) when plotting/writing the results. Default: names will be sorted.

Utilities

Dataset tools

Under scripts/dataset_tools, we provide several scripts to prepare your dataset for analysis. Specifically:

  • asl_groundtruth_to_pose.py: convert EuRoC-style format to the format used in this toolbox.
  • bag_to_pose.py: extract PoseStamped/PoseWithCovarianceStamped messages from a ROS bag to the desired format.
  • transform_trajectory.py: transform a pose file of our format by a given transformation; useful for applying a hand-eye calibration to the groundtruth/estimate before analysis.

Misc scripts

Under scripts, there are also some convenience scripts:

  • recursive_clean_results_dir.py: remove all saved_results directories recursively.
  • change_eval_cfg_recursive.py: recursively change the evaluation parameters within a given result folder.

Customization

Most of the error computation is done via the class Trajectory (src/rpg_trajectory_evaluation/trajectory.py). If you would like to customize your evaluation, you can use this class directly. The API of this class is quite simple:

# init with the result folder.
# You can also specify the sub-trajectory lengths and alignment parameters in the initialization.
traj = Trajectory(result_dir)

# compute the absolute error
traj.compute_absolute_error()

# compute the relative errors at `subtraj_lengths`.
traj.compute_relative_errors(subtraj_lengths)

# compute the relative error at sub-trajectory lengths computed from the whole trajectory length.
traj.compute_relative_errors()

# save the relative error to `cached_rel_err.pickle`
traj.cache_current_error()

# write the error statistics to yaml files
traj.write_errors_to_yaml()

# static method to remove the cached error from a result folder
Trajectory.remove_cached_error(result_dir)

After the error is computed, the absolute and relative errors are stored in the trajectory object and can be accessed afterwards:

  • absolute error: traj.abs_errors. A dictionary that stores the absolute errors of all poses as well as their statistics. See the Trajectory.compute_absolute_error function for details.

  • relative error: traj.rel_errors. A dictionary that stores the relative errors for different distances (using the distance as the key). See the Trajectory.compute_relative_error_at_subtraj_len function for details.

With this interface, it should be easy to access all the computed errors for customized analysis.
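
For instance, a small sketch (the result path is illustrative, and the import assumes the src directory is on your PYTHONPATH) to compute the absolute error and inspect which entries are stored:

from rpg_trajectory_evaluation.trajectory import Trajectory

traj = Trajectory('results/euroc_mono_stereo/laptop/vio_mono/laptop_vio_mono_MH_05')
traj.compute_absolute_error()

# list the stored entries before relying on specific keys
for key in sorted(traj.abs_errors.keys()):
    print(key)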

Credits

See package.xml for the list of authors that have contributed to this toolbox.

It might happen that some open-source code is incorporated into the toolbox but we missed the license/copyright information. If you recognize such a situation, please open an issue.

Acknowledgements

The development of this toolbox was supported by the Swiss National Science Foundation (SNSF) through the National Centre of Competence in Research (NCCR) Robotics, the SNSF-ERC Starting Grant, and the DARPA FLA program.
