
Data-Driven MPC for Quadrotors

This repo contains the code associated with our paper Data-Driven MPC for Quadrotors.


Citing

If you use this code in an academic context, please cite the following publication:

Paper: Data-Driven MPC for Quadrotors

Video: YouTube

@article{torrente2021data,
  title={Data-Driven MPC for Quadrotors},
  author={Torrente, Guillem and Kaufmann, Elia and Foehn, Philipp and Scaramuzza, Davide},
  journal={IEEE Robotics and Automation Letters},
  year={2021}
}

License

Copyright (C) 2020-2021 Guillem Torrente, Elia Kaufmann, Philipp Foehn, Davide Scaramuzza, Robotics and Perception Group, University of Zurich

This is research code; expect it to change often, and any fitness for a particular purpose is disclaimed. For a commercial license, please contact Davide Scaramuzza.

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.

This work depends on the ACADOS Toolkit, developed by the Optimization in Engineering Center (OPTEC) under the supervision of Moritz Diehl. Licensing details can be found on the ACADOS GitHub. It is released under the BSD license.

Installation

Minimal Requirements

The code was tested with Ubuntu 18.04, Python 3.6, and ROS Melodic. We additionally provide Python 3.8 support, tested with ROS Noetic on Ubuntu 20.04, in the branch python3.8_support. Other OS and ROS versions may work but are not supported.

Recommended: Create a Python virtual environment for this package:

sudo pip3 install virtualenv
cd <PATH_TO_VENV_DIRECTORY>
virtualenv gp_mpc_venv --python=/usr/bin/python3.6
source gp_mpc_venv/bin/activate

Installation of acados and its Python interface:

Additional Requirements

The code that runs in the Gazebo Simulation environment builds on rpg_quadrotor_control. You may skip this step if you intend to use only the Simplified Simulation.
Otherwise, create a catkin workspace following these installation instructions.
After these steps, you should have all the ROS packages required to run the RPG Quadrotor simulation in the Gazebo Simulation as well.

Initial setup

  1. Source Python virtual environment if created.

    source <path_to_gp_mpc_venv>/bin/activate
    
  2. Clone this repository into your catkin workspace.

    cd <CATKIN_WS_DIR>
    git clone https://github.com/uzh-rpg/data_driven_mpc.git
    
  3. Install the rest of required Python libraries:

    cd data_driven_mpc
    python setup.py install
    
  4. Build the catkin workspace:

    cd <CATKIN_WS_DIR>
    catkin build
    source devel/setup.bash
    

Running the package in Simulation

We provide instructions for using this package in two different simulators, which in the paper we call Simplified Simulation and Gazebo Simulation. While the Simplified Simulation is a lightweight Python simulator, the Gazebo Simulation builds on the well-known RotorS extension.

First, make sure to add the main directory of this package to your Python path. Also activate the virtual environment, if created.

export PYTHONPATH=$PYTHONPATH:<CATKIN_WS_DIR>/src/data_driven_mpc/ros_gp_mpc

First steps

To verify the correct installation of the package, first execute a test flight in the Simplified Simulation.

roscd ros_gp_mpc
python src/experiments/trajectory_test.py

After the simulation finishes, a correct installation should produce a result very similar to the following (mean optimization time may vary).

:::::::::::::: SIMULATION SETUP ::::::::::::::

Simulation: Applied disturbances: 
{"noisy": true, "drag": true, "payload": false, "motor_noise": true}

Model: No regression model loaded

Reference: Executed trajectory `loop` with a peak axial velocity of 8 m/s, and a maximum speed of 8.273 m/s

::::::::::::: SIMULATION RESULTS :::::::::::::

Mean optimization time: 1.488 ms
Tracking RMSE: 0.2410 m
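The tracking RMSE reported above is the root mean square of the Euclidean position error between the reference and the executed trajectory. A minimal sketch of that computation (function and variable names here are illustrative, not the package's implementation):

```python
import math

def tracking_rmse(ref, actual):
    """Root-mean-square of the Euclidean position error over a trajectory.

    ref, actual: lists of (x, y, z) positions sampled at the same times.
    """
    sq_errors = [
        (rx - ax) ** 2 + (ry - ay) ** 2 + (rz - az) ** 2
        for (rx, ry, rz), (ax, ay, az) in zip(ref, actual)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Toy example: a constant 0.1 m offset along x gives an RMSE of 0.1 m.
ref = [(float(t), 0.0, 1.0) for t in range(5)]
actual = [(t + 0.1, 0.0, 1.0) for t in range(5)]
print(round(tracking_rmse(ref, actual), 3))  # → 0.1
```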

Further details

You may edit the configuration variables for the Simplified Simulator in the file config/configuration_parameters.py for better visualization. Within the class SimpleSimConfig:

# Set to True to show a real-time Matplotlib animation of the experiments for the Simplified Simulator. Execution 
# will be slower if the GUI is turned on. Note: setting to True may require some further library installation work.
custom_sim_gui = True

# Set to True to display a plot describing the trajectory tracking results after the execution.
result_plots = True

Also, note that this configuration file defines the disturbance settings of the Simplified Simulation. Setting all of them to False reproduces the Ideal scenario (as we call it in our paper), where the MPC has perfect knowledge of the quadrotor dynamics and therefore yields a much lower tracking error:

# Choice of disturbances modeled in our Simplified Simulator. For more details about the parameters used refer to 
# the script: src/quad_mpc/quad_3d.py.
simulation_disturbances = {
    "noisy": True,                       # Thrust and torque gaussian noises
    "drag": True,                        # 2nd order polynomial aerodynamic drag effect
    "payload": False,                    # Payload force in the Z axis
    "motor_noise": True                  # Asymmetric voltage noise in the motors
}
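For example, the Ideal scenario described above corresponds to disabling every entry (a sketch of the edited configuration, mirroring the dictionary shown here):

```python
# Disabling all modeled disturbances reproduces the "Ideal" scenario,
# where the MPC has perfect knowledge of the quadrotor dynamics.
simulation_disturbances = {
    "noisy": False,          # no thrust and torque gaussian noises
    "drag": False,           # no aerodynamic drag effect
    "payload": False,        # no payload force in the Z axis
    "motor_noise": False     # no asymmetric voltage noise in the motors
}
```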

You may also vary the peak velocity and acceleration of the reference trajectory, or use the lemniscate trajectory instead of the circle (loop) one. All of these options can be specified as script arguments. Further information can be displayed by typing:

python src/experiments/trajectory_test.py --help

Model Fitting Tutorial

Next, we collect a dataset for fitting GP and RDRv models in the Simplified Simulator. This procedure will be very similar for the Gazebo Simulator (explained later).

Data collection

First, run the following script to collect a few minutes of flight samples.

python src/experiments/point_tracking_and_record.py --recording --dataset_name simplified_sim_dataset --simulation_time 300

After the simulation ends, you can verify that the collected data now appears in the directory ros_gp_mpc/data/simplified_sim_dataset. We can use this data to fit our regression models.

Fitting a GP model

First, edit the following variables of the configuration file config/configuration_parameters.py (class ModelFitConfig) so that the training script is pointed to the desired dataset. For redundancy, in order to identify the correct data file, we require specifying both the name of the dataset and the parameters used while acquiring the data. In other words, you must input the simulator options used while running the previous Python script. If you did not modify these variables earlier, you don't need to change anything this time, as the default setting will work:

    # ## Dataset loading ## #
    ds_name = "simplified_sim_dataset"
    ds_metadata = {
        "noisy": True,
        "drag": True,
        "payload": False,
        "motor_noise": True
    }

In our approach, we train three independent GPs, one for each velocity dimension v_x, v_y, v_z, so we run the GP training script three times. To indicate that the model must map v_x to the acceleration correction a_x (and similarly for y and z), run the following commands. Indices 7, 8, 9 correspond to v_x, v_y, v_z respectively in our data structures. The arguments --x and --y specify the X and Y variables of the regression problem. We assign a name to the model for future referencing, e.g. simple_sim_gp:

python src/model_fitting/gp_fitting.py --n_points 20 --model_name simple_sim_gp --x 7 --y 7
python src/model_fitting/gp_fitting.py --n_points 20 --model_name simple_sim_gp --x 8 --y 8
python src/model_fitting/gp_fitting.py --n_points 20 --model_name simple_sim_gp --x 9 --y 9
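Conceptually, each of these commands fits a one-dimensional GP that maps a velocity component to an acceleration correction. A self-contained sketch of such a regression with an RBF kernel (all names and toy data here are illustrative, not the package's implementation):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, variance=1.0):
    """Squared-exponential kernel matrix k(a_i, b_j) for 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """GP posterior mean at x_test given noisy 1-D observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    return K_s @ np.linalg.solve(K, y_train)

# Toy data: a drag-like acceleration error that grows with velocity,
# standing in for the (v_x -> a_x) pairs extracted from the dataset.
v = np.linspace(0.0, 8.0, 40)     # velocity samples (e.g. index 7 = v_x)
a_err = -0.05 * v * np.abs(v)     # acceleration correction (e.g. a_x)
a_pred = gp_posterior_mean(v, a_err, v)
print(np.max(np.abs(a_pred - a_err)) < 0.1)  # close fit on training inputs
```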

The models will be saved under the directory ros_gp_mpc/results/model_fitting/<git_hash>/.

You can visualize the performance of the combined three models using the visualization script. Make sure to input the correct model version (git hash) and model name.

python src/model_fitting/gp_visualization.py --model_name simple_sim_gp --model_version <git_hash>

Fitting an RDRv model

Similarly, we train the RDRv model with the following one-line command. This script trains all three dimensions simultaneously and provides a plot of the fitting result. The model is similarly saved under the directory ros_gp_mpc/results/model_fitting/<git_hash>/ with the given name (e.g.: simple_sim_rdrv).

python src/model_fitting/rdrv_fitting.py --model_name simple_sim_rdrv --x 7 8 9
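RDRv fits a linear drag model; in its simplest one-dimensional form, this reduces to a least-squares fit of a drag coefficient per velocity axis. An illustrative sketch under that assumption (not the package's implementation):

```python
def fit_linear_drag(v_samples, a_err_samples):
    """Least-squares fit of k in the linear model a_err = -k * v."""
    num = sum(v * a for v, a in zip(v_samples, a_err_samples))
    den = sum(v * v for v in v_samples)
    return -num / den

# Toy data generated from a_err = -0.3 * v: the fit recovers k = 0.3.
v = [0.5 * i for i in range(1, 9)]
a_err = [-0.3 * vi for vi in v]
k = fit_linear_drag(v, a_err)
print(round(k, 6))  # → 0.3
```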

Model comparison experiment

To compare the trained models, we provide an automatic script for the Simplified Simulation. Running the following command will compare the specified models against the "Ideal" and the "Nominal" scenarios by default, and produce several result plots in the directory results/images/. Using the --fast argument will run the script faster with fewer velocity samples.

python src/experiments/comparative_experiment.py --model_version <git_hash_1 git_hash_2 ...> --model_name <name_1 name_2 ...> --model_type <type_1 type_2> --fast

For example:

python src/experiments/comparative_experiment.py --model_version 42b8650b 42b8650b --model_name simple_sim_gp simple_sim_rdrv --model_type gp rdrv --fast


Gazebo simulator

In this section, we demonstrate how to use our repository on the Gazebo Simulator.

Preparing the simulation environment

First, follow the installation guide for the rpg_quadrotor_control package if you didn't do it previously.

Then, run an empty-world simulation and enable the command override function. Due to the increased computational demand of running the Gazebo simulator in parallel with the controller, the following launch file runs the Gazebo simulator at 50% speed:

roslaunch ros_gp_mpc quadrotor_empty_world.launch enable_command_feedthrough:=True

Finally, click Connect and Arm Bridge on the RPG Quadrotor GUI.

Dataset recording, model fitting & evaluation in Gazebo

Run the following script to execute several random trajectories on the Gazebo Simulator and compile a dataset of the measured errors.

roslaunch ros_gp_mpc gp_mpc_wrapper.launch recording:=True dataset_name:=gazebo_dataset environment:=gazebo flight_mode:=random n_seeds:=10

Leave the script running until it outputs the following message:

[INFO] [1612101145.957326, 230.510000]: No more references will be received

Update the ModelFitConfig class in the config/configuration_parameters.py file to point the training scripts to the new dataset:

# ## Dataset loading ## #
ds_name = "gazebo_dataset"
ds_metadata = {
    "gazebo": "default",
}

Train a new GP (or RDRv) model as before:

python src/model_fitting/gp_fitting.py --n_points 20 --model_name gazebo_sim_gp --x 7 --y 7
python src/model_fitting/gp_fitting.py --n_points 20 --model_name gazebo_sim_gp --x 8 --y 8
python src/model_fitting/gp_fitting.py --n_points 20 --model_name gazebo_sim_gp --x 9 --y 9

We don't provide an automatic script for comparing models in the Gazebo environment. However, you can do it manually by following these steps:

Run a Circle trajectory without correction:

Run the following launch file. Set plot:=True to display a plot of the trajectory that will be executed beforehand, and of the tracking performance after finishing the run. Close the plot and the tracking will start automatically:

roslaunch ros_gp_mpc gp_mpc_wrapper.launch environment:=gazebo flight_mode:=loop plot:=True

This should result in an average tracking error of 0.2 m when the maximum reference axial velocity is 10 m/s. Note that the drone does not reach this velocity because of the aerodynamic effects modeled in Gazebo:

[INFO] [1607523334.300131, 925.860000]: Tracking complete. Total RMSE: 0.20400 m. Max axial vel: 9.415. Mean optimization time: 7.938 ms
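If you want to collect such results programmatically across runs, the log line can be parsed; a sketch, assuming the exact message format shown in the example output above:

```python
import re

# Pattern matching the "Tracking complete" summary line printed by the node.
LOG_PATTERN = re.compile(
    r"Total RMSE: (?P<rmse>[\d.]+) m\. "
    r"Max axial vel: (?P<vel>[\d.]+)\. "
    r"Mean optimization time: (?P<opt>[\d.]+) ms"
)

line = ("[INFO] [1607523334.300131, 925.860000]: Tracking complete. "
        "Total RMSE: 0.20400 m. Max axial vel: 9.415. "
        "Mean optimization time: 7.938 ms")

m = LOG_PATTERN.search(line)
print(float(m.group("rmse")), float(m.group("vel")), float(m.group("opt")))
# → 0.204 9.415 7.938
```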

Run a Circle trajectory with correction

roslaunch ros_gp_mpc gp_mpc_wrapper.launch environment:=gazebo flight_mode:=loop plot:=True model_version:=<git_hash> model_name:=gazebo_sim_gp model_type:=gp

This will improve the tracking performance by around 50%, resulting in an average tracking error of 0.1 m at the same speed:

[INFO] [1607523961.356929, 1239.280000]: Tracking complete. Total RMSE: 0.09487 m. Max axial vel: 9.714. Mean optimization time: 11.019 ms

Final notes

  • Trajectory types

    The user can also run a lemniscate trajectory by setting flight_mode:=lemniscate. It is also possible to edit the reference trajectories of the circle and lemniscate by modifying the file config/circle_and_lemniscate_options.yaml.

  • Thrust level control

    Even though the MPC model operates at thrust-level control, the ROS node currently sends total thrust + body rate commands. To switch to single thrust-level control, edit the following line in the MPC ROS interface file:

    From (body rate control):

    next_control.control_mode = 2
    

    Instead switch to (thrust level control):

    next_control.control_mode = 4
    
