  • Stars: 444
  • Rank: 98,300 (top 2%)
  • Language: C++
  • Created: almost 10 years ago
  • Updated: almost 6 years ago


OpenChisel

An open-source version of the Chisel chunked TSDF library. It contains two packages:

open_chisel

open_chisel is an implementation of a generic truncated signed distance field (TSDF) 3D mapping library, based on the Chisel mapping framework originally developed for Google's Project Tango. It is a complete rewrite of the original (proprietary) mapping system. open_chisel stores the volume in spatially hashed chunks, inspired by the work of Niessner et al., making it more memory-efficient than fixed-grid mapping approaches and faster than octree-based approaches. A technical description of how it works can be found in our RSS 2015 paper.
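
The chunked, spatially hashed layout can be pictured as a hash map from integer chunk coordinates to fixed-size blocks of voxels that are allocated only when a depth measurement first touches them. The following is a purely conceptual C++ sketch of that idea, not open_chisel's actual data structures or API:

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

// Conceptual sketch of a chunked, spatially hashed TSDF volume.
// Names and layout are illustrative only; open_chisel's real classes differ.
struct Voxel
{
    float sdf = 99999.0f;  // truncated signed distance to the nearest surface
    float weight = 0.0f;   // accumulated integration weight
};

struct ChunkID
{
    int x, y, z;
    bool operator==(const ChunkID& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct ChunkIDHash
{
    std::size_t operator()(const ChunkID& id) const
    {
        // Simple spatial hash mixing the three integer chunk coordinates.
        return std::hash<std::int64_t>()(static_cast<std::int64_t>(id.x) * 73856093 ^
                                         static_cast<std::int64_t>(id.y) * 19349663 ^
                                         static_cast<std::int64_t>(id.z) * 83492791);
    }
};

class ChunkedTSDF
{
public:
    ChunkedTSDF(int chunkSize, float voxelRes) : chunkSize_(chunkSize), voxelRes_(voxelRes) {}

    // Chunks are allocated lazily, only when a measurement first touches them.
    // That lazy allocation is what makes the representation memory-efficient
    // compared to a fixed grid, while lookups stay O(1) unlike an octree.
    Voxel& voxelAt(float wx, float wy, float wz)
    {
        const float chunkEdge = chunkSize_ * voxelRes_;
        const ChunkID id{static_cast<int>(std::floor(wx / chunkEdge)),
                         static_cast<int>(std::floor(wy / chunkEdge)),
                         static_cast<int>(std::floor(wz / chunkEdge))};
        std::vector<Voxel>& chunk = chunks_[id];
        if (chunk.empty())
            chunk.resize(static_cast<std::size_t>(chunkSize_) * chunkSize_ * chunkSize_);

        const int vx = static_cast<int>(std::floor(wx / voxelRes_)) - id.x * chunkSize_;
        const int vy = static_cast<int>(std::floor(wy / voxelRes_)) - id.y * chunkSize_;
        const int vz = static_cast<int>(std::floor(wz / voxelRes_)) - id.z * chunkSize_;
        return chunk[vx + chunkSize_ * (vy + chunkSize_ * vz)];
    }

private:
    int chunkSize_;
    float voxelRes_;
    std::unordered_map<ChunkID, std::vector<Voxel>, ChunkIDHash> chunks_;
};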

This reference implementation does not include any pose estimation, so the pose of the sensor must be provided by an external source. It also avoids GPU computing entirely, which makes it suitable for limited hardware platforms, and it does not contain any system for rendering or displaying the resulting 3D reconstruction. It has been tested on Ubuntu 14.04 with ROS Hydro and Indigo.

API Usage

Check the chisel_ros package source for an example of how to use the API; in particular, the ChiselServer class makes use of the open_chisel API.
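
As a rough orientation, the loop ChiselServer runs looks like the sketch below. All type and method names here are placeholders (assumptions), not the verbatim open_chisel API; consult the ChiselServer source in chisel_ros for the authoritative calls.

// Hypothetical usage sketch, NOT the verbatim open_chisel API.
// Type and method names are placeholders modeled on what ChiselServer does.
#include <Eigen/Geometry>

namespace sketch
{
struct TsdfMap;            // stands in for the chunked TSDF map class
struct DepthIntegrator;    // stands in for the projection integrator
struct PinholeIntrinsics;  // stands in for the depth camera model
struct DepthFrame;         // stands in for a floating-point depth image

void integrateOneFrame(TsdfMap& map,
                       DepthIntegrator& integrator,
                       const PinholeIntrinsics& camera,
                       const DepthFrame& depth,
                       const Eigen::Affine3f& sensorPoseInWorld)
{
    // The pose must come from an external source (TF, SLAM, motion capture, ...)
    // because open_chisel performs no pose estimation of its own.
    //
    // Conceptually the server then does the equivalent of:
    //   map.IntegrateDepthScan(integrator, depth, sensorPoseInWorld, camera);
    //   map.UpdateMeshes();
    // i.e. project the depth image into the affected chunks, update their TSDF
    // values (carving free space if enabled), and re-mesh only changed chunks.
}
}  // namespace sketch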

Dependencies

Compilation note: For speed, it is essential to compile open_chisel with optimization. You will need to add the flag -DCMAKE_BUILD_TYPE=Release to your catkin_make command when building.

chisel_ros

chisel_ros is a wrapper around open_chisel that interfaces with ROS-based depth and color sensors. The main class it provides is ChiselServer, which subscribes to depth images, color images, TF frames, and camera intrinsics.

Note: you will also need the chisel_msgs messages package to build chisel_ros.
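
To make the wiring concrete, here is a minimal, hypothetical roscpp skeleton showing the same kinds of inputs a node like ChiselServer consumes (standard ROS API, not ChiselServer's actual code; the topic names match the remappings in the example launch file further below):

// Minimal sketch of the inputs a ChiselServer-like node subscribes to.
// Illustrative roscpp code, not ChiselServer's actual implementation.
#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/Image.h>
#include <tf/transform_listener.h>

void depthCallback(const sensor_msgs::ImageConstPtr& msg)          { /* integrate depth */ }
void depthInfoCallback(const sensor_msgs::CameraInfoConstPtr& msg) { /* store depth intrinsics */ }
void colorCallback(const sensor_msgs::ImageConstPtr& msg)          { /* colorize voxels */ }
void colorInfoCallback(const sensor_msgs::CameraInfoConstPtr& msg) { /* store color intrinsics */ }

int main(int argc, char** argv)
{
    ros::init(argc, argv, "chisel_input_sketch");
    ros::NodeHandle nh;

    // Topic names correspond to the remappings in the example launch file.
    ros::Subscriber depth     = nh.subscribe("/depth_image", 10, depthCallback);
    ros::Subscriber depthInfo = nh.subscribe("/depth_camera_info", 10, depthInfoCallback);
    ros::Subscriber color     = nh.subscribe("/color_image", 10, colorCallback);
    ros::Subscriber colorInfo = nh.subscribe("/color_camera_info", 10, colorInfoCallback);

    // Sensor poses are not estimated by open_chisel; they are looked up from TF,
    // e.g. the transform from the fixed frame to the depth optical frame.
    tf::TransformListener tfListener;

    ros::spin();
    return 0;
}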

Supported ROS image types:

Depth Images

  • 32 bit floating point mono in meters (32FC1)
  • 16 bit unsigned integers in millimeters (16UC1); a conversion sketch follows these lists

Color Images

  • BGR8
  • BGRA8
  • Mono8
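
The two depth encodings above differ only in type and units: 16UC1 values need a factor-of-1000 conversion to reach the metric 32FC1 form. A minimal cv_bridge/OpenCV sketch of normalizing an incoming depth image (illustrative only, not chisel_ros's internal conversion code):

#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/core.hpp>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>

// Convert an incoming depth image to a 32-bit float image in meters.
cv::Mat depthToMeters(const sensor_msgs::ImageConstPtr& msg)
{
    cv_bridge::CvImageConstPtr cv = cv_bridge::toCvShare(msg);
    cv::Mat meters;
    if (msg->encoding == sensor_msgs::image_encodings::TYPE_16UC1)
        cv->image.convertTo(meters, CV_32FC1, 0.001);  // millimeters -> meters
    else
        cv->image.convertTo(meters, CV_32FC1);         // already 32FC1 in meters
    return meters;
}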

Dependencies

  • Eigen
  • C++11
  • catkin (ros-hydro or ros-indigo or higher)
  • PCL 1.8 compiled with C++11 enabled
  • ROS OpenCV cv_bridge

A note on PCL

Unfortunately, PCL 1.7.x (the standard PCL included in current versions of ROS) does not work with C++11. This project makes use of C++11, so in order to use Chisel you will have to download PCL 1.8 from source and compile it with C++11 enabled.

  1. Download PCL 1.8 from here: https://github.com/PointCloudLibrary/pcl
  2. Modify line 112 of CMakeLists.txt in PCL to say SET(CMAKE_CXX_FLAGS "-Wall -std=c++11 ...
  3. Build and install PCL 1.8
  4. Download pcl_ros from here: https://github.com/ros-perception/perception_pcl
  5. Change the PCL dependency to PCL 1.8 in the find_package call of its CMakeLists.txt
  6. Compile pcl_ros.
  7. Rebuild Chisel

If PCL does not gain C++11 support by default soon, we may remove the C++11 requirement from OpenChisel and use Boost instead.

Launching chisel_ros Server

Once built, the chisel_ros server can be launched by using a launch file. There's an example launch file located at chisel_ros/launch/launch_kinect_local.launch. Modify the parameters as necessary to connect to your camera and TF frame.

<launch>
    <!-- Use a different machine name to connect to a different ROS server-->
    <machine name="local" address="localhost" default="true"/>
    <!-- The chisel server node-->
    <node name="Chisel" pkg="chisel_ros" type="ChiselNode" output="screen"> 
        <!-- Size of the TSDF chunks in number of voxels -->
        <param name="chunk_size_x" value="16"/>
        <param name="chunk_size_y" value="16"/>
        <param name="chunk_size_z" value="16"/>
        <!-- The distance away from the surface (in cm) we are willing to reconstruct -->
        <param name="truncation_scale" value="10.0"/>
        <!-- Whether to use voxel carving. If set to true, space near the sensor will be
             carved away, allowing for moving objects and other inconsistencies to disappear -->
        <param name="use_voxel_carving" value="true"/>
        <!-- When true, the mesh will get colorized by the color image.-->
        <param name="use_color" value="false"/>
        <!-- The distance from the surface (in meters) which will get carved away when
             inconsistencies are detected (see use_voxel_carving)-->
        <param name="carving_dist_m" value="0.05"/>
        <!-- The size of each TSDF voxel in meters-->
        <param name="voxel_resolution_m" value="0.025"/>
        <!-- The maximum distance (in meters) that will be constructed. Use lower values
             for close-up reconstructions and to save on memory. -->
        <param name="far_plane_dist" value="1.5"/>
        <!-- Name of the TF frame corresponding to the fixed (world) frame -->
        <param name="base_transform" value="/base_link"/>
        <!-- Name of the TF frame associated with the depth image. Z points forward, Y down, and X right -->
        <param name="depth_image_transform" value="/camera_depth_optical_frame"/>
        <!-- Name of the TF frame associated with the color image -->
        <param name="color_image_transform" value="/camera_rgb_optical_frame"/>
        <!-- Mode to use for reconstruction. There are two modes: DepthImage and PointCloud.
             Only use PointCloud if no depth image is available. It is *much* slower-->
        <param name="fusion_mode" value="DepthImage"/>
    
        <!-- Name of the depth image to use for reconstruction -->
        <remap from="/depth_image" to="/camera/depth/image"/>
        <!-- Name of the CameraInfo (intrinsic calibration) topic for the depth image. -->
        <remap from="/depth_camera_info" to="/camera/depth/camera_info"/>
        <!-- Name of the color image topic -->
        <remap from="/color_image" to="/camera/color/image"/>
        <!-- Name of the color camera's CameraInfo topic -->
        <remap from="/color_camera_info" to="/camera/color/camera_info"/>
        
        <!-- Name of a point cloud to use for reconstruction. Only use this if no depth image is available -->
        <remap from="/camera/depth_registered/points" to="/camera/depth/points"/>
    </node>
</launch>
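
The chunk and voxel parameters above determine the physical extent of each chunk: every chunk spans chunk_size * voxel_resolution_m meters per side. A small, purely illustrative C++ sanity check of that arithmetic (not part of chisel_ros):

#include <cstdio>

// Quick sanity check of the launch-file parameters above.
int main()
{
    const int   chunk_size         = 16;      // voxels per chunk edge
    const float voxel_resolution_m = 0.025f;  // meters per voxel
    const float far_plane_dist     = 1.5f;    // meters

    const float chunk_edge_m    = chunk_size * voxel_resolution_m;        // 0.4 m
    const int   voxels_per_chunk = chunk_size * chunk_size * chunk_size;  // 4096

    std::printf("chunk edge: %.2f m, voxels per chunk: %d\n",
                chunk_edge_m, voxels_per_chunk);
    std::printf("chunks needed to span the far plane: %.1f\n",
                far_plane_dist / chunk_edge_m);                           // ~3.8
    return 0;
}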

Then, launch the server with roslaunch chisel_ros <your_launchfile>.launch. You should see output indicating that open_chisel is receiving depth images. You can then visualize the results in RViz.

Run rosrun rviz rviz to open the RViz visualizer, then add a Marker display subscribed to the topic /Chisel/full_mesh. This topic shows the mesh reconstructed by Chisel.

Services

chisel_ros provides several ROS services that you can use to interact with the reconstruction in real time (a client sketch follows the list). These are:

  • Reset -- Deletes all TSDF data and restarts the reconstruction from scratch.
  • TogglePaused -- Pauses/unpauses reconstruction.
  • SaveMesh -- Saves a PLY mesh file of the entire scene to the desired location.
  • GetAllChunks -- Returns a list of all of the voxel data in the scene. Each chunk is stored as a separate entity with its data stored in a byte array.
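
The services can be called from the command line or programmatically. Below is a hedged C++ sketch of calling SaveMesh, where the service type, topic name (/Chisel/SaveMesh), and request field name are assumptions to be checked against the chisel_msgs .srv definitions:

// Hypothetical example of invoking the SaveMesh service from C++.
// The service type, topic name, and field names are assumptions; check the
// chisel_msgs package and the running server for the actual definitions.
#include <ros/ros.h>
#include <chisel_msgs/SaveMeshService.h>  // assumed service header

int main(int argc, char** argv)
{
    ros::init(argc, argv, "save_mesh_client");
    ros::NodeHandle nh;

    ros::ServiceClient client =
        nh.serviceClient<chisel_msgs::SaveMeshService>("/Chisel/SaveMesh");

    chisel_msgs::SaveMeshService srv;
    srv.request.file_name = "/tmp/scene.ply";  // assumed request field

    if (client.call(srv))
        ROS_INFO("Saved mesh to /tmp/scene.ply");
    else
        ROS_ERROR("SaveMesh service call failed");
    return 0;
}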

More Repositories

  1. aikido (C++, 214 stars): Artificial Intelligence for Kinematics, Dynamics, and Optimization
  2. prpy (Python, 58 stars): Python utilities used by the Personal Robotics Laboratory.
  3. apriltags (PostScript, 39 stars): ROS wrapper for the Apriltags visual fiducial tracker
  4. or_ompl (C++, 26 stars): OpenRAVE bindings for OMPL motion planning algorithms.
  5. kinfu_ros (C++, 26 stars): kinfu_ros is a version of kinfu_remake for generic ROS depth cameras
  6. apriltags_rgbd_node (Python, 25 stars): more accurate apriltag detection using the depth sensor
  7. appl (C++, 20 stars): Mirror of the Approximate POMDP Planning (APPL) C++ toolkit for POMDP planning.
  8. or_urdf (C++, 19 stars): OpenRAVE plugin for loading URDF and SRDF files.
  9. tsr (Python, 17 stars): Python Library for using Task Space Regions
  10. or_cdchomp (C++, 15 stars): OpenRAVE plugin that implements the CHOMP trajectory optimizer.
  11. dartpy (C++, 15 stars): 🎯 🐍 Python bindings for the Dynamic Animation and Robotics Toolkit
  12. chimera (C++, 13 stars): 🐍 A CLI tool for generating Boost.Python/pybind11 bindings from C/C++
  13. LRA-star (C++, 13 stars): Lazy Receding Horizon A*
  14. collaborative_manipulation_corpus (Python, 12 stars): A Corpus of Natural Language Instructions for Collaborative Manipulation
  15. CCIL (Python, 12 stars): Code release and project site for "CCIL: Continuity-based Data Augmentation for Corrective Imitation Learning"
  16. k2_bridge (C#, 10 stars): Server application for Kinect for Windows v2
  17. openvr_ros_bridge (Terra, 10 stars): Publish from openvr/windows to ROS over rosbridge
  18. boost_numpy_eigen (C++, 10 stars): 🐍 Python bindings for conversion between numpy and Eigen objects
  19. lego (Jupyter Notebook, 9 stars): LEGO : Leveraging Experience with Graph Oracles
  20. gazetracking (CMake, 8 stars): Eye gaze tracking using Pupil Labs head-mounted tracker.
  21. libada (C++, 8 stars): C++ library for simulating and running ADA based on DART and AIKIDO
  22. pr_assets (Python, 7 stars): OpenRAVE data used by the Personal Robotics Lab at CMU.
  23. linemod (C++, 6 stars)
  24. lemur (C++, 6 stars): Lazily Evaluated Marginal Utility Roadmaps
  25. pr_behavior_tree (Python, 6 stars): A simple python behavior tree library based on coroutines
  26. or_parabolicsmoother (C++, 6 stars): An OpenRAVE Plugin for Parabolic Smoothing
  27. herb_description (Python, 5 stars): URDF and SRDF descriptions of HERB.
  28. chisel_msgs (CMake, 5 stars): Breaking out chisel ROS messages into their own package to be used remotely
  29. or_rviz (C++, 5 stars): OpenRAVE viewer plugin that publishes the environment to RViz as InteractiveMarkers.
  30. pr-rosinstalls (4 stars): wstool .rosinstall files for various projects or setups
  31. gls (C++, 4 stars): Generalized Lazy Search
  32. herbpy (Python, 4 stars): Python library for interacting with HERB.
  33. comps (C++, 4 stars): Fork of the Constrained Manipulation Planning Suite (CoMPS) by Dmitry Berenson
  34. moped (C, 4 stars)
  35. ada_feeding (Python, 4 stars): Robot-assisted feeding demos and projects for the ADA robot
  36. batching_pomp (C++, 4 stars): Research code repository that implements anytime geometric motion planning on large dense roadmaps with densification strategies and searching via the POMP algorithm.
  37. web-interface-examples (Python, 3 stars): Examples of connecting ROS to web technologies
  38. pr_ros_controllers (C++, 3 stars): ros_control controller plugins developed by the Personal Robotics Lab at CMU
  39. TouchFilter2D (C++, 3 stars): Experiments with touch in 2D using openframeworks
  40. joint_state_recorder (C++, 3 stars): Record ROS JointState messages and index them by time.
  41. rewd_controllers (C++, 3 stars)
  42. docker-public-images (Makefile, 3 stars): Public Docker images used in PRL codebase
  43. stargazer (Python, 3 stars): ROS driver for the Hagisonic StarGazer
  44. or_fcl (C++, 3 stars): OpenRAVE bindings for the Flexible Collision Checking Library (FCL).
  45. wampy (Python, 3 stars): Python library for interacting with Barrett WAM arm in OpenRAVE.
  46. offscreen_render (C++, 3 stars): A utility for rendering OpenRAVE kinbodies offscreen to get properties like depth, occlusion, color, etc.
  47. bayesian_policy_optimization (2 stars)
  48. libhuman (C++, 2 stars)
  49. or_trajopt (Python, 2 stars): OpenRAVE plugin to expose TrajOpt code as an OpenRAVE planner
  50. PRLPlot (Python, 2 stars): A collection of simple opinionated plotting tools for generating high-quality plots for scientific papers.
  51. ada_assistance_policy (Python, 2 stars)
  52. benchmarks (C++, 2 stars): Benchmarks to measure OpenRAVE's performance.
  53. pubs (TeX, 2 stars): BibTeX entries of publications
  54. owd (C++, 2 stars): OpenWAM ROS driver for controlling the Barrett WAM and BarrettHand.
  55. softkinetic_driver (C++, 2 stars): A driver for the depthsense325 adapted specifically for ADA. Based on a driver from IPA.
  56. ada_ros2 (Python, 2 stars): ROS2 Hardware Interface and Description for the ADA Robot
  57. pointcloud_filter (C++, 1 star): Filters the point cloud data from the kinect sensor
  58. uvc_engine_pupil (C++, 1 star)
  59. food_detector (Python, 1 star)
  60. k2_client (C++, 1 star): ROS node for a Kinect 2 via a TCP server.
  61. btclient (C, 1 star): Barrett WAM Client Library and Examples
  62. ada_teleoperation (Python, 1 star)
  63. simtrack_msgs (CMake, 1 star): Messages for the simtrack tracking package.
  64. ada_description (Shell, 1 star): URDF and SRDF descriptions of ADA
  65. face_detection (C++, 1 star): Repository for detecting facial features and head pose
  66. homebrew-tap (Ruby, 1 star): 🍺 Homebrew tap for Personal Robotics Laboratory software
  67. ada_mouth (CMake, 1 star)
  68. pytorch_pix2food (Jupyter Notebook, 1 star): Generative adversarial network that predicts the position and shape of continuous food after pushing action.
  69. or_plugin (C++, 1 star): Utility library for creating OpenRAVE plugins.
  70. ada (Python, 1 star): Software for ADA, the Assistive Dextrous Arm developed by the Personal Robotics Lab at CMU.
  71. ros_control_client (Python, 1 star): Python and C++ libraries for commanding ros_control
  72. conban_spanet (Python, 1 star): Linear contextual bandit based on SPANet
  73. feeding_web_interface (JavaScript, 1 star): Web interface for the robot-assisted feeding system
  74. ork_renderer (C++, 1 star)
  75. ada_meal_scenario (Python, 1 star): A set of scripts for a meal serving scenario using Ada.
  76. forque_sensor_hardware (C++, 1 star): Reads and manages the Force/Torque sensor of the Forque
  77. openrave_catkin (CMake, 1 star): Utilities for building OpenRAVE plugins in a Catkin workspace.
  78. mouse_as_joystick (Python, 1 star): Wrapper to read from a mouse (e.g. bluetooth mouse) and publish ros messages
  79. or_octomap (C++, 1 star): or_octomap is a collision checker and sensor system plugin for OpenRAVE, intended to allow OpenRAVE meshes to be collision checked against octrees.