• Stars: 124
• Rank: 288,207 (top 6%)
• Language: Python
• License: MIT License
• Created: over 2 years ago
• Updated: over 1 year ago


Repository Details

Developing and Comparing Vision-based Algorithms for Agile Flight

DodgeDrone: Vision-based Agile Drone Flight (ICRA 2022 Competition)


Would you like to push the boundaries of drone navigation? Then participate in the DodgeDrone competition! You will get the chance to develop perception and control algorithms to navigate a drone in both static and dynamic environments. Competing in the challenge will deepen your expertise in computer vision and control and boost your research. You can find more information on the competition website.

This codebase provides the following functionalities:

  1. A simple high-level API to evaluate your navigation policy in the Robot Operating System (ROS). This is completely independent of how you develop your algorithm.
  2. Training utilities to use reinforcement learning for the task of high-speed obstacle avoidance.

All evaluation during the competition will be performed using the same ROS evaluation pipeline, but on previously unseen environments and obstacle configurations.

Submission

  • 06 May 2022: Submission is open. Please submit your version of the file user_code.py, with all needed dependencies, by email to loquercio AT berkeley DOT edu. Please use the subject ICRA 2022 Competition: Team Name. If you have specific dependencies, please provide instructions on how to install them. Feel free to switch from Python to C++ if you want.

Further Details

  • We will only evaluate on the warehouse environment with sphere obstacles.
  • If you're using vision, you are free to use any sensor you like (depth, optical flow, RGB). The code has to run in real time on a desktop with a 16-thread Intel Core i7-6900K and an NVIDIA Titan Xp.
  • If you're using vision, feel free to optimize the camera parameters for performance (e.g. field of view).
  • We will have two rankings, one for vision-based and one for state-based approaches. The top three teams in each category will qualify for the finals.

Updates

  • 02 May 2022: Fixed a bug in the vision racing environment when computing the reward function. No need to update if you are not using RL or if you have changed the reward formulation. Related to issue #65.

  • 27 March 2022: Fixed a static object rendering issue. Please download the new Unity standalone using this link. Also, git pull the project.

Flight API

This library contains the core of our testing API. It will be used for evaluating all submitted policies. The API is completely independent of how you build your navigation system: you can either use our reinforcement learning interface (more on this below) or plug in your favourite navigation system.

Prerequisites

Before continuing, make sure gcc and g++ are at version 9.3.0. You can check this by typing gcc --version and g++ --version in a terminal. Follow this guide if your compiler is not compatible.

In addition, make sure to have ROS installed. Follow this guide and install ROS Noetic if you don't already have it.

Installation (for ROS User)

We only support Ubuntu 20.04 with ROS Noetic. Other setups are likely to work as well but are not actively supported.

Start by creating a new catkin workspace.

cd     # or wherever you'd like to install this code
export ROS_VERSION=noetic
export CATKIN_WS=./icra22_competition_ws
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --merge-devel
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-fdiagnostics-color

cd src
git clone [email protected]:uzh-rpg/agile_flight.git
cd agile_flight

Run setup_ros.bash in the main folder of this repository; it will ask for sudo permissions. Then build the packages.

./setup_ros.bash

catkin build

Installation (for Python User)

If you want to develop algorithms using only Python, especially reinforcement learning, you need to install our library as a Python package.

Make sure that you have Anaconda installed; this is highly recommended.

Run setup_py.bash in the main folder of this repository; it will ask for sudo permissions.

./setup_py.bash

Task

The task is to control a simulated quadrotor to fly through obstacle-dense environments. The environment contains both static and dynamic obstacles. You can specify which difficulty level and environment you want to load for testing your algorithm; the yaml configuration is located in this file. The goal is to fly 60 m in the positive x-direction as fast as possible, without colliding with obstacles or exiting a predefined bounding box. The parameters of the goal location and the bounding box can be found here.

environment:
  level: "medium" # three difficulty level for obstacle configurations [easy, medium, hard]
  env_folder: "environment_0" # configurations for dynamic and static obstacles, environment number are between [0 - 100]

unity:
  scene_id: 0 # 0 warehouse, 1 garage, 2 natureforest, 3 wasteland
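
If you script your experiments, this configuration can also be edited programmatically. Below is a minimal sketch using PyYAML; the path config.yaml is a placeholder for wherever the file lives in your checkout:

import yaml

CONFIG_PATH = "config.yaml"  # placeholder; point this at the actual config file

with open(CONFIG_PATH) as f:
    config = yaml.safe_load(f)

# Select the hard obstacle configuration in the warehouse scene.
config["environment"]["level"] = "hard"
config["environment"]["env_folder"] = "environment_1"
config["unity"]["scene_id"] = 0  # 0 = warehouse

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(config, f, default_flow_style=False)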

Usage

The usage of this code base entails two main aspects: writing your algorithm and testing it in the simulator.

Writing your algorithm:

To facilitate coding of your algorithms, we provide a simple code structure for you: just edit the file envtest/ros/user_code.py. This file contains two functions, compute_command_vision_based and compute_command_state_based. In the vision-based case, you will get the current image and the state of the quadrotor. In the state-based case, you will get the metric distances to obstacles and the state of the quadrotor. We strongly recommend starting with the state-based version; it is going to be much easier than working with pixels!

Depending on the part of the competition you are interested in, adapt the corresponding function. To immediately see something moving, both functions currently publish a command to fly straight forward, of course without avoiding any obstacles. Note that we provide three different control modes for you, ordered by increasing level of abstraction: commanding individual single-rotor thrusts (SRT), specifying mass-normalized collective thrust and bodyrates (CTBR), and outputting linear velocity commands and yawrate (LINVEL). The choice of control modality is up to you. Overall, the lower-level you go, the more difficult it will be to maintain stability, but the more agile your drone will be. A minimal sketch of a state-based policy is shown below.
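
Everything in the following sketch is illustrative: the observation format, the obstacle format, and the returned command container are placeholders rather than the repository's actual classes, so check envtest/ros/user_code.py and its imports for the real interface.

import numpy as np

def compute_command_state_based(state, obstacles):
    # Illustrative LINVEL-style command: fly forward at 3 m/s by default.
    velocity = np.array([3.0, 0.0, 0.0])
    yawrate = 0.0

    # Hypothetical obstacle format: a list of positions (x, y, z) relative
    # to the drone, with x pointing forward.
    nearest = min(obstacles, key=np.linalg.norm, default=None)
    if nearest is not None and nearest[0] > 0.0 and np.linalg.norm(nearest) < 3.0:
        # Obstacle close and ahead: add a lateral component away from it.
        velocity[1] = -np.sign(nearest[1]) * 2.0

    # Placeholder return value; the repository defines its own command
    # class supporting the SRT, CTBR, and LINVEL modes described above.
    return {"mode": "LINVEL", "velocity": velocity, "yawrate": yawrate}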

Testing your approach in the simulator:

Make sure you have completed the installation of the flight API before continuing. To use the competition software, three steps are required:

  1. Start the simulator

    roslaunch envsim visionenv_sim.launch render:=True
    # Using the GUI, press Arm & Start to take off.
    python evaluation_node.py
    

    The evaluation node comes with a config file; there, plotting of the results can be disabled if you do not want plots.

  2. Start your user code. This code will generate control commands based on the sensory observations. You can toggle vision-based operation by providing the argument --vision_based.

    cd envtest/ros
    python run_competition.py [--vision_based]
    
  3. Tell your code to start! Until you publish this message, your code will run but the commands will not be executed. We use this to ensure a fair comparison between approaches, since code startup times can vary, especially for learning-based approaches.

    rostopic pub /kingfisher/start_navigation std_msgs/Empty "{}" -1
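
    Equivalently, you can publish the start message from a small Python script. A minimal rospy sketch (the topic and message type are taken from the command above):

    import rospy
    from std_msgs.msg import Empty

    rospy.init_node("start_navigation_trigger")
    pub = rospy.Publisher("/kingfisher/start_navigation", Empty, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect before sending
    pub.publish(Empty())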
    

If you want to perform steps 1-3 automatically, you can use the launch_evaluation.bash N script provided in this folder. It will automatically perform N rollouts and then create an evaluation.yaml file which summarizes the rollout statistics.
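
After a batch of rollouts, you may want to post-process evaluation.yaml. Since its exact keys are not documented here, the minimal sketch below simply loads the file and prints whatever statistics were written:

import yaml

with open("evaluation.yaml") as f:
    results = yaml.safe_load(f)

# Print whatever per-rollout statistics the evaluation wrote.
for key, value in results.items():
    print(f"{key}: {value}")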

Using Reinforcement Learning (Optional)

We provide an easy interface for training your navigation policy using reinforcement learning. While this is not required for the competition, it could make your job easier if you plan on using RL.

Follow this guide to learn more about how to use the training code and for tips on developing reinforcement learning algorithms.
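
For a rough picture of what training could look like once the simulator is wrapped in the standard Gym interface, here is a sketch using stable-baselines3. The environment ID is hypothetical; the repository's actual training utilities and their interface are described in the guide linked above.

import gym
from stable_baselines3 import PPO

# Hypothetical Gym-style environment ID standing in for the repository's
# own obstacle-avoidance training environment.
env = gym.make("ObstacleAvoidance-v0")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo_obstacle_avoidance")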

More Repositories

  1. event-based_vision_resources (2,212 stars)
  2. rpg_svo (C++, 2,013 stars): Semi-direct Visual Odometry
  3. rpg_svo_pro_open (C++, 1,125 stars)
  4. rpg_trajectory_evaluation (Python, 852 stars): Toolbox for quantitative trajectory evaluation of VO/VIO
  5. flightmare (C++, 756 stars): An Open Flexible Quadrotor Simulator
  6. agile_autonomy (C++, 577 stars): Code associated with the paper "Learning High-Speed Flight in the Wild"
  7. rpg_timelens (Python, 566 stars): Repository relating to the CVPR21 paper "TimeLens: Event-based Video Frame Interpolation"
  8. rpg_quadrotor_control (C++, 494 stars): Quadrotor control framework developed by the Robotics and Perception Group
  9. rpg_open_remode (C++, 480 stars): Implementation of REMODE (REgularized MOnocular Depth Estimation), as described in the paper
  10. rpg_esim (C, 476 stars): ESIM: an Open Event Camera Simulator
  11. agilicious (TeX, 424 stars): Agile flight done right!
  12. vilib (C++, 399 stars): CUDA Visual Library by RPG
  13. rpg_public_dronet (Python, 395 stars): Code for the paper "DroNet: Learning to Fly by Driving"
  14. high_mpc (C, 317 stars): Policy Search for Model Predictive Control with Application to Agile Drone Flight
  15. rpg_dvs_ros (C++, 293 stars): ROS packages for DVS
  16. rpg_e2vid (Python, 275 stars): Code for the paper "High Speed and High Dynamic Range Video with an Event Camera" (T-PAMI, 2019)
  17. dslam_open (MATLAB, 270 stars): Public code for "Data-Efficient Decentralized Visual SLAM"
  18. rpg_svo_example (C++, 268 stars): Example node to use the SVO installation
  19. rpg_mpc (C, 248 stars): Model Predictive Control for Quadrotors with extension to Perception-Aware MPC
  20. rpg_vid2e (Python, 235 stars): Open-source implementation of CVPR 2020 "Video to Events: Recycling Video Dataset for Event Cameras"
  21. netvlad_tf_open (Python, 225 stars): TensorFlow port of https://github.com/Relja/netvlad
  22. rpg_ultimate_slam_open (C++, 225 stars): Open-source code for "Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios" (RA-L 2018)
  23. deep_drone_acrobatics (Python, 178 stars): Code for the project Deep Drone Acrobatics
  24. rpg_information_field (C++, 170 stars): Information Field for Perception-aware Planning
  25. data_driven_mpc (Python, 165 stars)
  26. rpg_vision-based_slam (C++, 163 stars): Code of the paper "Continuous-Time vs. Discrete-Time Vision-based SLAM: A Comparative Study" (RA-L 2022)
  27. vimo (C++, 162 stars): Visual-Inertial Model-based State and External Forces Estimator
  28. rpg_dvs_evo_open (C++, 160 stars): Implementation of EVO (RA-L 17)
  29. deep_ev_tracker (Python, 143 stars): Repository relating to "Data-driven Feature Tracking for Event Cameras" (CVPR 2023, Award Candidate)
  30. fault_tolerant_control (C++, 139 stars): Vision-based quadrotor fault-tolerant flight controller
  31. rpg_event_representation_learning (Python, 135 stars): Repo for learning event representations
  32. rpg_emvs (C++, 129 stars): Code for the paper "EMVS: Event-based Multi-View Stereo" (IJCV, 2018)
  33. rpg_monocular_pose_estimator (C++, 128 stars): A monocular pose estimation system based on infrared LEDs
  34. rpg_eklt (C++, 126 stars): Code for the paper "EKLT: Asynchronous, Photometric Feature Tracking using Events and Frames" (IJCV'19)
  35. e2calib (Python, 118 stars): CVPRW 2021: How to calibrate your event camera
  36. rpg_vikit (C++, 110 stars): Vision-Kit provides some tools for your vision/robotics project
  37. rpg_asynet (Python, 105 stars): Code for the paper "Event-based Asynchronous Sparse Convolutional Networks" (ECCV, 2020)
  38. rpg_e2depth (Python, 105 stars): Code for the paper "Learning Monocular Dense Depth from Events" (3DV20)
  39. rpg_ig_active_reconstruction (C++, 103 stars): Active 3D reconstruction library described in "An Information Gain Formulation for Active Volumetric 3D Reconstruction" (Isler et al., ICRA 2016) and "A comparison of volumetric information gain metrics for active 3D object reconstruction" (Delmerico et al., Autonomous Robots, 2017)
  40. fast (C++, 102 stars): FAST corner detector by Edward Rosten
  41. deep_uncertainty_estimation (Python, 102 stars): Code for the framework described in "A General Framework for Uncertainty Estimation in Deep Learning" (Loquercio, Segù, Scaramuzza; RA-L 2020)
  42. aegnn (Python, 101 stars)
  43. rpg_corner_events (C++, 101 stars): Fast Event-based Corner Detection
  44. snn_angular_velocity (Python, 98 stars): Event-Based Angular Velocity Regression with Spiking Networks
  45. DSEC (Python, 96 stars)
  46. eds-buildconf (Shell, 94 stars): Build bootstrapping for Event-aided Direct Sparse Odometry (EDS)
  47. IROS2019-FPV-VIO-Competition (93 stars): FPV Drone Racing VIO competition
  48. rpg_davis_simulator (Python, 92 stars): Simulate a DAVIS camera from synthetic Blender scenes
  49. E-RAFT (Python, 82 stars)
  50. sim2real_drone_racing (C++, 77 stars): A Framework for Zero-Shot Sim2Real Drone Racing
  51. learned_inertial_model_odometry (Python, 75 stars): Code of the paper "Learned Inertial Odometry for Autonomous Drone Racing" (RA-L 2023)
  52. rpg_ramnet (Python, 75 stars): Code and datasets for the paper "Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction" (RA-L, 2021)
  53. imips_open (Python, 73 stars): Matching Features Without Descriptors: Implicitly Matched Interest Points
  54. rpg_feature_tracking_analysis (Python, 72 stars): Package for performing analysis on event-based feature trackers
  55. rpg_svo_pro_gps (C++, 71 stars): SVO Pro with GPS
  56. sb_min_time_quadrotor_planning (C++, 61 stars): Code for the project Minimum-Time Quadrotor Waypoint Flight in Cluttered Environments
  57. mh_autotune (Python, 55 stars): AutoTune: Controller Tuning for High-Speed Flight
  58. rpg_image_reconstruction_from_events (MATLAB, 52 stars)
  59. event-based_object_catching_anymal (C++, 48 stars): Code for "Event-based Agile Object Catching with a Quadrupedal Robot" (Forrai et al., ICRA'23)
  60. RVT (Python, 48 stars): Implementation of "Recurrent Vision Transformers for Object Detection with Event Cameras" (CVPR 2023)
  61. ess (Python, 47 stars): Repository relating to "ESS: Learning Event-based Semantic Segmentation from Still Images" (ECCV, 2022)
  62. colmap_utils (Python, 46 stars): Python scripts and functions to work with COLMAP
  63. rpg_youbot_torque_control (C, 46 stars): Torque Control for the KUKA youBot Arm
  64. rpg_time_optimal (Python, 46 stars): Time-Optimal Planning for Quadrotor Waypoint Flight
  65. rpg_blender_omni_camera (42 stars): Patch for adding an omnidirectional camera model into Blender (Cycles)
  66. rpg_vi_cov_transformation (Python, 40 stars): Covariance Transformation for Visual-inertial Systems
  67. line_tracking_with_event_cameras (C++, 37 stars)
  68. sips2_open (Python, 35 stars): Succinct Interest Points from Unsupervised Inlierness Probability Learning
  69. cl_initial_buffer (Python, 33 stars): Repository relating to "Contrastive Initial State Buffer for Reinforcement Learning" (ICRA, 2024)
  70. uzh_fpv_open (Python, 32 stars): Repo to accompany the UZH FPV dataset
  71. rpg_ev-transfer (Python, 31 stars): Open-source implementation of RA-L 2022 "Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation"
  72. ESL (Python, 30 stars): ESL: Event-based Structured Light
  73. IROS2020-FPV-VIO-Competition (29 stars): FPV Drone Racing VIO Competition
  74. flightmare_unity (C#, 27 stars)
  75. authorship_attribution (Python, 27 stars)
  76. rpg_event_lifetime (MATLAB, 27 stars): MATLAB Implementation of Event Lifetime Estimation
  77. slam-eds (C++, 25 stars): Events-aided Sparse Odometry: the library for the direct approach using events and frames
  78. fast_neon (C++, 19 stars): Fast detector with NEON accelerations
  79. direct_event_camera_tracker (C++, 17 stars): Open-source code for the ICRA'19 paper by Bryner et al.
  80. timelens-pp (15 stars): Dataset download page for the BS-ERGB dataset introduced in Time Lens++ (CVPR'22)
  81. ICRA2020-FPV-VIO-Competition (12 stars): FPV Drone Racing VIO competition
  82. rpg_quadrotor_common (C++, 11 stars): Common functionality for rpg_quadrotor_control
  83. flymation (C#, 8 stars): Flexible Animation for Flying Robots
  84. ze_oss (C++, 7 stars): RPG fork of ze_oss
  85. slam-orogen-eds (C++, 6 stars): Event-aided Direct Sparse Odometry: full system in a Rock Task component
  86. cvpr18_event_steering_angle (Python, 5 stars): Repository of the CVPR18 paper "Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars"
  87. rpg_mpl_ros (C++, 4 stars)
  88. dsec-det (Python, 4 stars): Code for assembling and visualizing DSEC data for the detection task
  89. esfp (Python, 3 stars): ESfP: Event-based Shape from Polarization (CVPR 2023)
  90. VAPAR (Python, 3 stars)
  91. rpg_single_board_io (C++, 3 stars): GPIO and ADC functionality for single board computers
  92. assimp_catkin (CMake, 1 star): A catkin wrapper for assimp
  93. aruco_catkin (CMake, 1 star): Catkinization of https://sourceforge.net/projects/aruco/
  94. dodgedrone_simulation (C++, 1 star)
  95. power_line_tracking_with_event_cameras (Python, 1 star)
  96. pangolin_catkin (CMake, 1 star)
  97. dlib_catkin (CMake, 1 star): Catkin wrapper for https://github.com/dorian3d/DLib