
EVO: Event-based Visual Odometry

Implementation of EVO (RA-L'17)

Credit, License, and Patent

Citation

This code implements the event-based visual odometry pipeline described in the paper EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real-time by Henri Rebecq, Timo Horstschaefer, Guillermo Gallego and Davide Scaramuzza.

If you use any of this code, please cite the following publications:

@Article{RebecqEVO,
  author        = {Rebecq, Henri and Horstschaefer, Timo and Gallego, Guillermo and Scaramuzza, Davide},
  journal       = {IEEE Robotics and Automation Letters},
  title         = {EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time},
  year          = {2017},
  volume        = {2},
  number        = {2},
  pages         = {593--600},
  doi           = {10.1109/LRA.2016.2645143}
}
@InProceedings{Gehrig_2020_CVPR,
  author = {Daniel Gehrig and Mathias Gehrig and Javier Hidalgo-Carri\'o and Davide Scaramuzza},
  title = {Video to Events: Recycling Video Datasets for Event Cameras},
  booktitle = {{IEEE} Conf. Comput. Vis. Pattern Recog. (CVPR)},
  month = {June},
  year = {2020}
}

The code comes with a test dataset. If you don't have an event camera and want to run the code on more data, you can, for instance, convert existing video datasets into events using the Video to Events approach cited above.

Patent & License

  • The proposed EVO method is patented.

      H. Rebecq, G. Gallego, D. Scaramuzza
      Simultaneous Localization and Mapping with an Event Camera
      Pub. No.: WO/2018/037079.  International Application No.: PCT/EP2017/071331
    
  • The license is available in the repository.

Acknowledgements

The open sourcing was curated by Antonio Terpin and Daniel Gehrig.

Table of contents

  1. Getting started
  2. Examples
  3. Running live
  4. Further improvements
  5. Additional resources on Event Cameras

Note that this is research code; any fitness for a particular purpose is disclaimed.

Getting started

This software depends on ROS. Installation instructions can be found on the ROS wiki. We have tested this software on Ubuntu 18.04 with ROS Melodic.
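For reference, on Ubuntu 18.04 the usual ROS Melodic installation (after setting up the ROS apt sources and keys as described on the ROS wiki) is:

sudo apt install ros-melodic-desktop-full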

  1. Create and initialize a new catkin workspace if needed

     mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/
     catkin config \
         --init --mkdirs --extend /opt/ros/melodic \
         --merge-devel --cmake-args \
         -DCMAKE_BUILD_TYPE=Release
    
  2. Clone this repository

     cd src/ && git clone git@github.com:uzh-rpg/rpg_dvs_evo_open.git
    
  3. Clone (and fix) dependencies

     ./rpg_dvs_evo_open/install.sh [ros-version] # [ros-version]: melodic, ..
    

    Substitute [ros-version] with your actual ROS distribution. For instance:

     ./rpg_dvs_evo_open/install.sh melodic
    

    Make sure to install ROS and the commonly used packages (such as rviz, rqt, ..).

    The above commands do the following:

    • First, we install the required packages.

    • Second, we clone the repositories evo relies on.

    Note that the above commands also install the dependencies required to run the pipeline live. If you do not need them, you can comment out the unnecessary packages (the davis driver) in the dependencies.yaml file, e.g. as sketched below.
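    For instance, a minimal sketch of disabling the live-only driver before running install.sh (that the driver entries mention "davis" is an assumption; inspect the file first):

     # Comment out every line mentioning davis in dependencies.yaml (backup kept as .bak)
     sed -i.bak '/davis/s/^/# /' rpg_dvs_evo_open/dependencies.yaml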

    Please refer to the rpg_dvs_ros repository if there are issues with the driver installation (required only for running live).

  4. Build the packages

     catkin build dvs_tracking
    

Building everything might take some time... this might be the right time for a coffee! :)

... and do not forget to source the workspace afterwards!

source ../devel/setup.bash
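Putting it all together, a condensed setup for ROS Melodic (the same commands as above, in order) looks like:

mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/
catkin config --init --mkdirs --extend /opt/ros/melodic \
    --merge-devel --cmake-args -DCMAKE_BUILD_TYPE=Release
cd src/
git clone git@github.com:uzh-rpg/rpg_dvs_evo_open.git
./rpg_dvs_evo_open/install.sh melodic
catkin build dvs_tracking
source ../devel/setup.bash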

Examples

EVO: Event based Visual Odometry

Example                    Launch file          Rosbag
Multi-keyframe sequence    flyingroom.launch    538 MB
Desk sequence              desk.launch          145 MB

To download the rosbags, you can use the following command:

wget [rosbag link] -O /path/to/download/file.bag

For instance, the following will download the multi-keyframe sequence rosbag:

wget http://rpg.ifi.uzh.ch/data/EVO/code_examples/evo_flyingroom.bag -O /tmp/evo_flyingroom.bag
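As a quick sanity check before launching, you can inspect the downloaded bag's topics and duration:

rosbag info /tmp/evo_flyingroom.bag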

To run the pipeline from a rosbag, first start the pipeline as

roslaunch dvs_tracking [launch-file] auto_trigger:=true

where the launch file, fine-tuned for each example, is listed in the table above. The most interesting and most repeatable one (due to its bootstrapping sequence; see further improvements) is the multi-keyframe sequence.

Then, once everything is loaded, run the rosbag as

rosbag play -r 0.7 [rosbag-file]

For instance,

rosbag play -r 0.7 /tmp/evo_flyingroom.bag

Note that we set the auto_trigger parameter to true. You can also set it to false and follow the instructions for running the pipeline live.

If anything fails, just try again (give it a couple of chances!), and make sure to follow the above instructions exactly.
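For reference, a complete run of the multi-keyframe example uses two terminals, with only commands already shown above:

# Terminal 1: start the pipeline and wait until everything is loaded
roslaunch dvs_tracking flyingroom.launch auto_trigger:=true

# Terminal 2: play the rosbag at a reduced rate
rosbag play -r 0.7 /tmp/evo_flyingroom.bag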

The further improvements section outlines what can go wrong when running the code. To improve reliability when playing the rosbags (running live is easier), consider using -r 0.7 to reduce the playback rate. This helps especially when the hardware is not powerful enough.

For instance,

rosbag play -r 0.7 /tmp/evo_flyingroom.bag

Alternatively, setting bootstrap_image_topic:=/dvs/image_raw will bootstrap from traditional frames and later switch to events only. This is currently the most reliable way to bootstrap:

roslaunch dvs_tracking [launch-file] bootstrap_image_topic:=/dvs/image_raw auto_trigger:=true
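For the multi-keyframe example, this would be:

roslaunch dvs_tracking flyingroom.launch bootstrap_image_topic:=/dvs/image_raw auto_trigger:=true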

Running live

Run

The procedure is analogous to the one explained in the examples:

  1. Run the ros core on the first terminal.

     roscore
    
  2. On a second terminal, launch the event camera driver.

     rosrun davis_ros_driver davis_ros_driver
    
  3. On another terminal, launch the pipeline, disabling the auto-triggering.

     roslaunch dvs_tracking live.launch auto_trigger:=false camera_name:=[calibration filename] events_topic:=[events topic]
    

    For instance:

     roslaunch dvs_tracking live.launch auto_trigger:=false camera_name:=DAVIS-ijrr events_topic:=/dvs/events
    

    If your calibration files are named my_camera.yaml, then use camera_name:=my_camera; make sure both calibration files share the same name. If your sensor outputs events under the topic /my_sensor/events, then use events_topic:=/my_sensor/events, as in the sketch below.
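    A sketch with these placeholder names (my_camera and /my_sensor/events are assumptions; substitute your own):

     roslaunch dvs_tracking live.launch auto_trigger:=false \
         camera_name:=my_camera events_topic:=/my_sensor/events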

    For the SVO-based bootstrapper, proceed with step 4. For the fronto-planar bootstrapper, go directly to step 5; how to set it up is explained in the live.launch file.

    If you want to bootstrap from traditional frames, you can use the command:

     roslaunch dvs_tracking live.launch bootstrap_image_topic:=[topic raw frames] auto_trigger:=false
    

    For instance bootstrap_image_topic:=/dvs/image_raw.

    In this case, it makes sense to use only the SVO-based bootstrapping. This option is recommended for debugging/improving/extending the rest of the EVO pipeline without worrying about the quality of the bootstrapping.

  4. You should see two rqt GUIs; one is the SVO GUI. Reset and start the pipeline until it tracks decently well. Make sure to set the namespace to svo.

  5. Press the Bootstrap button in the EVO GUI. This will automatically trigger the pipeline.

    Alternatively, it is also possible to trigger one module at a time:

    • Press the Start/Reset button in rqt_evo. Perform a circle (or more), and then press Update. This will trigger a map creation.
    • If the map looks correct, press Switch to tracking to start tracking with EVO. If not, reiterate the map creation.
    • As the camera moves out of the current map, the latter will be automatically updated if the Map-Expansion is enabled. You may disable Map-Expansion to track high-speed motions using the current map (single keyframe tracking).
    • The scene should have enough texture, and the motions should resemble those in the provided examples.
  6. If anything fails, just press Ctrl+C and restart the live node ;)

Some remarks:

  • The calibration file paths are built as $(find dvs_tracking)/parameters/calib/ncamera/$(arg camera_name).yaml and $(find dvs_tracking)/parameters/calib/$(arg camera_name).yaml, where camera_name is passed as an argument to the launch file. You can also set a default in live.launch.
  • If your sensor provides frames under a topic /my_sensor/image_raw, and you want to bootstrap from the traditional frames, you can use bootstrap_image_topic:=/my_sensor/image_raw.

To run the pipeline live, first adjust the template live.launch to your sensor and scene, following the steps below. Further customization, such as which bootstrapper to use, is explained in the launch file itself.

Calibration

Make sure you have updated calibration files for your event camera, in the dvs_tracking/parameters/calib folder.

Make sure your .yaml files have the same format as the provided ones (single camera format, multiple cameras format).

Note that there are two calibration files in two different formats for the same sensor: the single camera format goes in the dvs_tracking/parameters/calib folder, and the multiple cameras format in the dvs_tracking/parameters/calib/ncamera folder.
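For a hypothetical camera named my_camera, the layout would be:

# Both files share the camera name; only the format differs.
dvs_tracking/parameters/calib/my_camera.yaml            # single camera format
dvs_tracking/parameters/calib/ncamera/my_camera.yaml    # multiple cameras format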

See the additional resources section for further references on calibration.

Tuning

Adjust the parameters in the launch file dvs_tracking/launch/template.launch.

Tuning is crucial for good performance of the pipeline, in particular the min_depth and max_depth parameters of the mapping node and the parameters of the bootstrap node; the sketch below shows where to find them.
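A quick way to locate these parameters for editing (a sketch, assuming a sourced workspace):

grep -nE 'min_depth|max_depth' $(rospack find dvs_tracking)/launch/template.launch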

An explanation of all the parameters for each module of the pipeline can be found in the Wiki.

The main parameters can be found in the template launch file, and are explained contextually. We still invite you to have a look at the Wiki, to discover further interesting features ;)

The Wiki documents the parameters of each module:

  • Global parameters
  • Bootstrapping
  • Mapping
  • Tracking

If you are not using the fronto-planar bootstrapper, you might need to tune SVO.

Note that this might not be needed: you can test the SVO tuning by bootstrapping from traditional frames:

roslaunch dvs_tracking live.launch bootstrap_image_topic:=[topic of raw frames] auto_trigger:=[true/false]

For instance, bootstrap_image_topic:=/dvs/image_raw.
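A minimal sketch of such a test run with manual triggering:

roslaunch dvs_tracking live.launch bootstrap_image_topic:=/dvs/image_raw auto_trigger:=false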

Further improvements

In the following we outline the main problems currently known and possible remedies. You are very welcome to contribute to this pipeline to make it even better!

What can go wrong, and the corresponding TODOs:

  • Randomness due to the OS scheduler; unreliable rosbags. TODO: implement a rosbag data provider pattern and ensure correct consumption of events. The pipeline currently uses multiple nodes; switching to nodelets or to a single node could improve the repeatability of rosbag runs.
  • Robustness. TODO: for bootstrapping, catch whenever SVO does not converge and trigger an automatic restart (what a human operator would eventually do manually); for tracking, catch whenever the tracker diverges and re-initialize. Currently we have two parameters to predict this situation, namely min_map_size and min_n_keypoints.
  • Bootstrapping robustness. Currently we have two working ways to bootstrap the pipeline from events: from SVO (feeding it event frames) and with a fronto-planar assumption. Reducing the required assumptions and making them more reliable would allow better bootstrapping, narrowing the gap to bootstrapping from traditional frames (bootstrap_image_topic:=/dvs/image_raw).

Additional resources on Event Cameras

See the event-based_vision_resources repository for a curated list of resources on event cameras.
