  • Stars: 951
  • Rank: 48,061 (Top 1.0 %)
  • Language: C++
  • License: GNU General Public License v3.0
  • Created: over 5 years ago
  • Updated: about 3 years ago


Repository Details

Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments

Known Issues

  • If polyhedrons can't be visualized properly in Rviz, please delete the Display Type PolyhedronArray from the display menu, then manually add PolyhedronArray again and select the topic in its Topic drop-down list.

  • If using Ubuntu 18.04 with ROS Melodic, you may get "error: expected constructor, destructor, or type conversion before ‘(’ token PLUGINLIB_DECLARE_CLASS(router, RouterNode, RouterNode, nodelet::Nodelet);" during compilation. Follow issue#34 to fix it.

What's New

  • We have released all packages for conducting real-world experiments; please see the experiment branch.

  • We now provide a new interface for controlling the drone directly with the keyboard. Check it in the following Human Interface section.

Teach-Repeat-Replan (Autonomous Drone Race)

Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments

Teach-Repeat-Replan is a complete and robust system that enables autonomous drone racing. It contains all components needed for aggressive UAV flight in complex environments. It is built upon the classical robotics teach-and-repeat framework, which is widely adopted in infrastructure inspection, aerial transportation, and search-and-rescue. Our system can capture a user's intention for a flight mission, convert an arbitrarily jerky teaching trajectory into a repeating trajectory that is guaranteed to be smooth and safe, and generate safe local re-plans to avoid unmapped or moving obstacles during the flight.

Video Links: Video1 Video2
Video Links (for Mainland China): Video1 Video2

Authors / Maintainers: Fei Gao, Boyu Zhou, and Xin Zhou.

Other Contributors: Luqi Wang, Kaixuan Wang.

Fei Gao and Xin Zhou are now with the Fast Lab, Zhejiang University.

Other authors are with the HKUST Aerial Robotics Group.

Sub-modules integrated into our system include:

  • Planner: flight corridor generation, global spatial-temporal planning, local online re-planning
  • Perception: global deformable surfel mapping, local online ESDF mapping
  • Localization: global pose graph optimization, local visual-inertial fusion
  • Controller: geometric controller on SE(3)

Architecture:

Our system can be applied to situations where the user has a rough route in mind but cannot pilot the drone ideally. For example, in drone racing or aerial filming, a beginner-level pilot cannot control the drone to finish the race safely or capture aerial video smoothly without months of training. With our system, the human pilot can virtually control the drone with his/her naive operations, and our system then automatically generates a very efficient repeating trajectory and executes it autonomously.

Our system can also be used for normal autonomous navigation, as in our previous works shown in video1 and video2. For these applications, the drone can autonomously fly in complex environments using only onboard sensing and planning.

Related Papers

If you use Teach-Repeat-Replan or its sub-modules for your application or research, please star this repo and cite our related papers (bib).

Simulation or Real-World

To use the Teach-Repeat-Replan system in the real world, check out the experiment branch. Compared to the master branch, experiment has modified versions of dense-surfel-mapping and stereo-VINS plus an onboard controller, but no simulator. To test the proposed system in simulation, the master branch is enough.
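For example, a minimal way to fetch that branch directly (a sketch, using the same repository URL as in the build section below; adjust the workspace path to your setup):

  cd ~/your_catkin_ws/src
  git clone -b experiment https://github.com/HKUST-Aerial-Robotics/Teach-Repeat-Replan.git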

1. Prerequisites

1.1 Ubuntu and ROS

Our software is developed and tested on Ubuntu 16.04 with ROS Kinetic. ROS can be installed here: ROS Installation.

1.2 Convex Solvers

We use Mosek for conic programming. To use Mosek, you should request a free Personal Academic License here. Then create a folder named 'mosek' in your home directory and put your license file in it. All header and library files are already included in this repo, so you don't need to download Mosek again.
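The license placement might look like this (a minimal sketch; mosek.lic is Mosek's usual license file name, and the download path is only an example):

  mkdir -p ~/mosek
  cp ~/Downloads/mosek.lic ~/mosek/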

We use OOQP for quadratic programming.

  1. Get a copy of MA27 from the HSL Archive. Select the Personal Licence (which allows use without redistribution) and fill in the information form; you will receive a download link by e-mail. Then un-zip MA27, follow the README inside, and install it on your Ubuntu system.

If you are new to Ubuntu, or don't want to follow its README, just type these 3 commands in MA27's folder:

./configure
make 
sudo make install

  2. Manually un-zip the OOQP.zip package in the installation folder of this repo and install it on your Ubuntu system, following the INSTALL document inside OOQP.

As above, you can just type these 3 commands in OOQP's folder:

./configure
make 
sudo make install

NOTE: After compiling MA27, you will get a static library file named libma27.a in its /src folder. When you then compile OOQP, the original OOQP searches for libma27.a in its own top folder. However, in this repo, I have modified OOQP's configure file so that it searches for libma27.a in your Ubuntu system. So:

Case 1 - If you download OOQP yourself (from OOQP's website), you have to copy libma27.a into OOQP's folder before you compile OOQP, otherwise you will get a compile error.

Case 2 - If you use the OOQP included in this repo, just follow the commands above without any other considerations.
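For Case 1, the copy step might look like the following (illustrative paths; use wherever you un-zipped MA27 and OOQP):

  # copy the static library built in MA27's src folder into OOQP's top folder
  cp /path/to/MA27/src/libma27.a /path/to/OOQP/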

1.3 Some Tools

To install the following dependencies, you can run the auto-install script:

  ./install_tools.sh

Then run

  ./config_gcc.sh

to finish the configuration.

Or, you can manually install them one by one:

  sudo apt-get install ros-kinetic-joy
  sudo apt-get install libnlopt-dev
  sudo apt-get install libf2c2-dev
  sudo apt-get install libarmadillo-dev 
  sudo apt-get install glpk-utils libglpk-dev
  sudo apt-get install libcdd-dev

  sudo add-apt-repository ppa:ubuntu-toolchain-r/test
  sudo apt-get update
  sudo apt-get install gcc-7 g++-7
  sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 60 --slave /usr/bin/g++ g++ /usr/bin/g++-5
  sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 50 --slave /usr/bin/g++ g++ /usr/bin/g++-7

The simulator requires C++17, which needs gcc 7 to compile. When you run catkin_make, the simulator automatically selects gcc 7 as its compiler, but this does not change your default compiler (gcc 4.8 / gcc 5).
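If you want to double-check or switch the active compiler later, the standard Ubuntu tools are enough (a generic check, not specific to this repo):

  gcc --version                          # show the currently selected gcc
  sudo update-alternatives --config gcc  # interactively switch between gcc-5 and gcc-7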

2. Use GPU or Not

Two packages in this repo, local_sensing (in the folder local_replanner) and polyhedron_generator, come in two versions: GPU and CPU. By default, the CPU version is used. To turn on CUDA and exploit your GPU, change

set(ENABLE_CUDA false)

in the CMakeLists.txt of these two packages to

set(ENABLE_CUDA true)

Please also remember to change the 'arch' and 'code' flags in the line

    set(CUDA_NVCC_FLAGS 
      -gencode arch=compute_61,code=sm_61;
    ) 

in CMakeLists.txt if you encounter a compile error caused by the particular Nvidia graphics card you use. You can check the right code here.
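For example, on a Turing-generation card such as an RTX 2080 (compute capability 7.5; this value is only an illustration, look up the code for your own card at the link above), the line would become:

    set(CUDA_NVCC_FLAGS 
      -gencode arch=compute_75,code=sm_75;
    ) 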

local_sensing simulates the onboard sensors. With ENABLE_CUDA true, it mimics the depth measured by stereo cameras and renders depth images on the GPU. With ENABLE_CUDA false, it publishes point clouds without ray-casting. Our local mapping module automatically selects depth images or point clouds as its input.

polyhedron_generator is used to find the free convex polyhedrons that form the flight corridor while teaching. With ENABLE_CUDA on, it can run much faster (depending on the resolution and your graphics card) than with ENABLE_CUDA off.

To install CUDA, please go to CUDA Toolkit.

3. Build on ROS

I suggest creating a new, empty workspace. Then clone the repository into your workspace and run catkin_make:

  cd ~/your_catkin_ws/src
  git clone https://github.com/HKUST-Aerial-Robotics/Teach-Repeat-Replan.git
  cd ../
  catkin_make -j1
  source ~/your_catkin_ws/devel/setup.bash

4. Run Teach-Repeat-Replan

4.1 Human Interface

You can use either a keyboard or a joystick to control the drone.

4.1.1 Keyboard

For the keyboard, you should first install pygame:

sudo apt-get install python-pygame

Then start the Python script key2joy.py in this repo.

python key2joy.py

Note: when key2joy runs, it displays a window named pygame window. You have to keep this window active in order to input your control commands from the keyboard.

4.1.2 Joystick

For a joystick, we use a Betop gamepad, which can be bought on Taobao in mainland China, to control the drone virtually in simulation.

Actually, any USB joystick is fine, but its buttons may need to be re-mapped in simulation/simulator.launch.
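If you are not sure how your joystick's buttons are numbered, you can inspect them before editing simulation/simulator.launch. A quick check using the ros-kinetic-joy package installed above (the device path may differ on your machine, and roscore must be running):

  ls /dev/input/js*     # confirm the joystick device exists
  rosrun joy joy_node   # publishes sensor_msgs/Joy messages on /joy
  rostopic echo /joy    # watch which buttons/axes change as you press them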

4.2 Teaching and Repeating

The whole system is launched by

./trr_simulation.sh

Then you will see a drone model in Rviz. Pilot the drone with your joystick/keyboard to fly around the complex environment, and you will see polyhedrons generated one by one, as:

If you go back while flying, the looping polyhedrons are deleted from the corridor:

When you feel the teaching is enough, press the start button on your joystick (or press m in keyboard mode). Global spatial-temporal planning is then conducted and the drone starts tracking the generated repeating trajectory:

After the flight, press the back button on the joystick (or press n in keyboard mode); the drone goes back to the manual control state and all visualization is cleared. You can then start another teaching session.

4.3 Re-planning

In simulation, re-planning is triggered when collisions are detected within a horizon. We maintain a local ESDF map, which is built very efficiently during the flight, to detect collisions and provide gradient information for local trajectory optimization. The re-planning is done in a sliding-window fashion; details can be found in the paper, video, or wiki.

In the following video, the green curves are re-planned trajectories and the blue one is the global trajectory.

Note: if you use local_sensing with ENABLE_CUDA false, re-planning may not be triggered during repeating, because in this mode the sensor acquisition is assumed to be perfect. We will fix this as soon as possible. With ENABLE_CUDA true, measurement errors in the depth images normally trigger re-plans.

5. Acknowledgements

We use Sikang Liu's tool to visualize the polyhedrons and quickHull to find the V-representation of a convex polyhedron. We use Mosek, OOQP, and NLopt to solve different problems in planning.

6. Licence

The source code is released under the GPLv3 license.

7. Maintenance

We are still working on extending the proposed system and improving code reliability.

For any technical issues, please contact Fei GAO [email protected] or Boyu ZHOU [email protected].

For commercial inquiries, please contact Shaojie SHEN [email protected].

More Repositories

1. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator (C++, 4,967 stars)
2. VINS-Fusion: An optimization-based multi-sensor state estimator (C++, 3,181 stars)
3. Fast-Planner: A Robust and Efficient Trajectory Planner for Quadrotors (C++, 2,433 stars)
4. A-LOAM: Advanced implementation of LOAM (C++, 1,957 stars)
5. VINS-Mobile: Monocular Visual-Inertial State Estimator on Mobile Phones (C++, 1,269 stars)
6. GVINS: Tightly coupled GNSS-Visual-Inertial system for locally smooth and globally consistent state estimation in complex environments (C++, 882 stars)
7. FUEL: An Efficient Framework for Fast UAV Exploration (C++, 744 stars)
8. Stereo-RCNN: Code for 'Stereo R-CNN based 3D Object Detection for Autonomous Driving' (CVPR 2019) (Python, 690 stars)
9. DenseSurfelMapping: The open-source version of the ICRA 2019 submission "Real-time Scalable Dense Surfel Mapping" (C++, 661 stars)
10. FIESTA: Fast Incremental Euclidean Distance Fields for Online Motion Planning of Aerial Robots (C++, 617 stars)
11. EPSILON (C++, 493 stars)
12. ESVO: Implementation of "Event-based Stereo Visual Odometry" (C++, 408 stars)
13. Btraj: Bezier Trajectory Generation for Autonomous Quadrotor, ICRA 2018 (C++, 407 stars)
14. grad_traj_optimization: Gradient-Based Online Safe Trajectory Generator (C++, 363 stars)
15. MonoLaneMapping: Online Monocular Lane Mapping Using Catmull-Rom Spline (IROS 2023) (Python, 349 stars)
16. open_quadtree_mapping: A monocular dense mapping system corresponding to IROS 2018 "Quadtree-accelerated Real-time Monocular Dense Mapping" (Cuda, 347 stars)
17. MVDepthNet: PyTorch implementation for the 3DV 2018 paper "MVDepthNet: real-time multiview depth estimation neural network" (Python, 305 stars)
18. D2SLAM: Decentralized and Distributed Collaborative Visual-inertial SLAM System for Aerial Swarm (Jupyter Notebook, 277 stars)
19. OmniNxt: [IROS 2024 Oral] A Fully Open-source and Compact Aerial Robot with Omnidirectional Visual Perception (255 stars)
20. G3Reg: A fast and robust global registration library for outdoor LiDAR point clouds (C++, 200 stars)
21. GVINS-Dataset: A dataset containing synchronized visual, inertial and GNSS raw measurements (C++, 197 stars)
22. Nxt-FC: Mini PX4 for UAV Group (Shell, 187 stars)
23. Omni-swarm: A Decentralized Omnidirectional Visual-Inertial-UWB State Estimation System for Aerial Swarm (Jupyter Notebook, 179 stars)
24. spatiotemporal_semantic_corridor: Implementation of the paper "Safe Trajectory Generation For Complex Urban Environments Using Spatio-temporal Semantic Corridor" (C++, 160 stars)
25. PredRecon: [ICRA 2023] A Prediction-boosted Planner for Fast and High-quality Autonomous Aerial Reconstruction (C++, 156 stars)
26. FC-Planner: [ICRA 2024 Best UAV Paper Award Finalist] An Efficient Global Planner for Aerial Coverage (C++, 155 stars)
27. eudm_planner: Implementation of the paper "Efficient Uncertainty-aware Decision-making for Automated Driving Using Guided Branching" (C++, 139 stars)
28. mockamap: a simple map generator based on ROS (C++, 133 stars)
29. DSP: Trajectory Prediction with Graph-based Dual-scale Context Fusion (Python, 132 stars)
30. pointcloudTraj: Trajectory generation on point clouds (C++, 128 stars)
31. Pagor: Pyramid Semantic Graph-based Global Point Cloud Registration with Low Overlap (IROS 2023) (C++, 127 stars)
32. Flow-Motion-Depth: Project page of the paper "Flow-Motion and Depth Network for Monocular Stereo and Beyond" (Python, 114 stars)
33. gnss_comm: Basic definitions and utility functions for GNSS raw measurement processing (C++, 111 stars)
34. SIMPL: A Simple and Efficient Multi-agent Motion Prediction Baseline for Autonomous Driving (Python, 107 stars)
35. VINS-kidnap: a place recognition system for VINS-Fusion (105 stars)
36. ublox_driver: A driver for the u-blox receiver (ZED-F9P) with ROS support (C++, 102 stars)
37. TopoTraj: A robust UAV local planner based on the ICRA 2020 paper "Robust Real-time UAV Replanning Using Guided Gradient-based Optimization and Topological Paths" (90 stars)
38. TimeOptimizer: Optimal Time Allocation for Quadrotor Trajectory Generation (C++, 83 stars)
39. AutoTrans: A Complete Planning and Control Framework for Autonomous UAV Payload Transportation (C++, 76 stars)
40. LiDAR-Registration-Benchmark: LiDAR-based 3D global registration benchmark (Python, 75 stars)
41. Pinhole-Fisheye-Mapping (70 stars)
42. UniQuad: A Unified and Versatile Quadrotor Platform Series for UAV Research and Application (67 stars)
43. IMPACTOR: Impact-Aware Planning and Control for Aerial Robots with Suspended Payloads (C, 67 stars)
44. SLABIM: An open-sourced SLAM dataset that couples with BIM (Building Information Modeling) (Python, 66 stars)
45. HKUST-ELEC5660-Introduction-to-Aerial-Robotics: Repo for HKUST ELEC5660 Course Notes & Lab Tutorial & Project Docker (C++, 57 stars)
46. EMSGC: Implementation of the paper "Event-based Motion Segmentation with Spatio-Temporal Graph Cuts" (C++, 56 stars)
47. VINS-Fisheye: Fisheye version of VINS-Fusion (C++, 52 stars)
48. GeometricPretraining: Code base for the paper "Geometric Pretraining for Monocular Depth Estimation", currently under review; the preprint will be available when it is ready (49 stars)
49. APACE: Agile and Perception-aware Trajectory Generation for Quadrotor Flights (ICRA 2024) (C++, 37 stars)
50. plan_utils: Some useful packages for running planning simulations (Makefile, 29 stars)
51. edge_alignment: Clone of https://github.com/mpkuse/edge_alignment (C++, 26 stars)
52. mockasimulator (C++, 21 stars)
53. probabilistic_mapping: Probabilistic Dense Mapping (C++, 19 stars)
54. swarm_gcs: Ground Station Software for aerial robots (JavaScript, 18 stars)
55. stTraj: Spatial-temporal Trajectory Planning for UAV Teach-and-Repeat (15 stars)
56. MASSTAR: A Multi-modal Large-scale Scene Dataset and A Versatile Toolchain for Scene Prediction (13 stars)
57. SLIM (7 stars)
58. mockacam: Camera package of mocka WFB (C++, 3 stars)
59. RI_Mocap: Mocap device driver of RI (C++, 1 star)