  • Stars: 155
  • Rank: 240,864 (Top 5%)
  • Language: C++
  • License: Apache License 2.0
  • Created: over 7 years ago
  • Updated: 3 months ago

Repository Details

Algorithm-agnostic computer vision message types for ROS.

ROS Vision Messages

Introduction

This package defines a set of messages to unify computer vision and object detection efforts in ROS.

Overview

The messages in this package define a common, outward-facing interface for vision-based pipelines. They are meant to enable two primary types of pipelines:

  1. "Pure" Classifiers, which identify class probabilities given a single sensor input
  2. Detectors, which identify class probabilities as well as the poses of those classes given a sensor input

The class probabilities are stored in an array of ObjectHypothesis messages, which is essentially a map from class IDs to float scores; in the detection case, each hypothesis also carries a pose.
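
For instance, a pure classifier might fill its results as in the following minimal Python sketch (field names follow recent ROS 2 vision_msgs releases, where ObjectHypothesis carries a class_id and a score; older releases use an integer id field instead, and the class names and scores here are illustrative):

    from vision_msgs.msg import Classification, ObjectHypothesis

    msg = Classification()
    # One ObjectHypothesis per candidate class: effectively an ID -> score map.
    for class_id, score in [('cat', 0.7), ('dog', 0.2), ('other', 0.1)]:
        hyp = ObjectHypothesis()
        hyp.class_id = class_id  # an integer `id` field in older releases
        hyp.score = score
        msg.results.append(hyp)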

Message types exist separately for 2D and 3D. The metadata that is stored for each object is application-specific, and so this package places very few constraints on the metadata. Each possible detection result must have a unique numerical ID so that it can be unambiguously and efficiently identified in the results messages. Object metadata such as name, mesh, etc. can then be looked up from a database.

The only other requirement is that the metadata database information can be stored in a ROS parameter. We expect a classifier to load the database (or detailed database connection information) onto the parameter server, in a manner similar to how URDFs are loaded and stored there (see [6]), most likely in an XML format. This expectation may be further refined in the future using a ROS Enhancement Proposal, or REP [7].
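
As a sketch of that expectation in ROS 2 (the parameter name and contents are hypothetical, not something this package defines), a classifier could expose the database through a node parameter:

    import rclpy
    from rclpy.node import Node

    class ClassifierNode(Node):
        def __init__(self):
            super().__init__('my_classifier')
            # Hypothetical parameter holding the metadata database (or the
            # connection information needed to reach it), analogous to how a
            # URDF is stored in the robot_description parameter.
            self.declare_parameter('class_metadata_db', '')
            self.database = self.get_parameter('class_metadata_db').value

    if __name__ == '__main__':
        rclpy.init()
        rclpy.spin(ClassifierNode())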

We also would like classifiers to have a way to signal when the database has been updated, so that listeners can respond accordingly. The database might be updated in the case of online learning. To solve this problem, each classifier can publish messages to a topic signaling that the database has been updated, as well as incrementing a database version that's continually published with the classifier information.
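
A minimal sketch of that pattern (the node and topic names are assumptions): publish VisionInfo on a latched topic and bump database_version whenever the database changes, so that listeners can notice the update:

    from rclpy.node import Node
    from rclpy.qos import QoSDurabilityPolicy, QoSProfile
    from vision_msgs.msg import VisionInfo

    class VisionInfoPublisher(Node):
        def __init__(self):
            super().__init__('my_classifier')
            latched = QoSProfile(depth=1,
                                 durability=QoSDurabilityPolicy.TRANSIENT_LOCAL)
            self.pub = self.create_publisher(VisionInfo, 'vision_info', latched)
            self.info = VisionInfo()
            self.info.method = 'my_classifier'                  # classifier name
            self.info.database_location = 'class_metadata_db'   # e.g. a parameter name
            self.info.database_version = 0
            self.pub.publish(self.info)

        def on_database_updated(self):
            # Call this after e.g. online learning changes the metadata database;
            # subscribers see the incremented version number.
            self.info.database_version += 1
            self.pub.publish(self.info)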

The source data that generated a classification or detection is not part of the messages. If you need access to it, use an exact or approximate time synchronizer in your code, as the message's header should match the header of the source data.
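
For example, detections can be paired with the images that produced them using message_filters (a minimal sketch; the topic names are assumptions):

    import message_filters
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from vision_msgs.msg import Detection2DArray

    class DetectionImagePairing(Node):
        def __init__(self):
            super().__init__('detection_image_pairing')
            self.image_sub = message_filters.Subscriber(self, Image, 'image')
            self.det_sub = message_filters.Subscriber(self, Detection2DArray,
                                                      'detections')
            # Match messages whose header stamps are within 50 ms of each other.
            self.sync = message_filters.ApproximateTimeSynchronizer(
                [self.image_sub, self.det_sub], queue_size=10, slop=0.05)
            self.sync.registerCallback(self.on_pair)

        def on_pair(self, image, detections):
            self.get_logger().info(
                f'{len(detections.detections)} detections for image at '
                f'{image.header.stamp.sec}.{image.header.stamp.nanosec:09d}')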

Semantic segmentation pipelines should use sensor_msgs/Image messages for publishing segmentation and confidence masks. This allows systems to use standard ROS tools for image processing and to choose the most compact image encoding appropriate for the task. To transmit the metadata associated with the vision pipeline, use the vision_msgs/LabelInfo message. It works the same way as sensor_msgs/CameraInfo or vision_msgs/VisionInfo:

  1. Publish LabelInfo to a topic. The topic should be at the same namespace level as the associated image; that is, if your image is published at /my_segmentation_node/image, the LabelInfo should be published at /my_segmentation_node/label_info. Use a latched publisher for LabelInfo, so that nodes that join the ROS system later still receive the last published message. In ROS 2, this can be achieved using a transient local QoS profile (see the sketch after this list).

  2. The subscribing node can get and store one LabelInfo message and cancel its subscription after that. This assumes the provider of the message publishes it periodically.
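
A minimal sketch of both sides in rclpy, assuming the topic layout above (the class map entries are illustrative):

    from rclpy.node import Node
    from rclpy.qos import QoSDurabilityPolicy, QoSProfile
    from vision_msgs.msg import LabelInfo, VisionClass

    # "Latched" behaviour in ROS 2: keep the last message for late subscribers.
    LATCHED = QoSProfile(depth=1, durability=QoSDurabilityPolicy.TRANSIENT_LOCAL)

    class SegmentationNode(Node):
        def __init__(self):
            super().__init__('my_segmentation_node')
            self.label_pub = self.create_publisher(LabelInfo, 'label_info', LATCHED)
            info = LabelInfo()
            # Map the pixel values used in the segmentation mask to class names.
            for class_id, name in enumerate(['background', 'road', 'person']):
                info.class_map.append(VisionClass(class_id=class_id, class_name=name))
            self.label_pub.publish(info)

    class SegmentationConsumer(Node):
        def __init__(self):
            super().__init__('my_consumer_node')
            self.label_sub = self.create_subscription(
                LabelInfo, '/my_segmentation_node/label_info',
                self.on_label_info, LATCHED)

        def on_label_info(self, msg):
            self.class_map = {c.class_id: c.class_name for c in msg.class_map}
            # One LabelInfo is enough; cancel the subscription afterwards.
            self.destroy_subscription(self.label_sub)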

Messages

  • Classification: pure classification without pose
  • Detection2D and Detection3D: classification + pose
  • BoundingBox2D, BoundingBox3D: orientable rectangular bounding boxes, specified by the pose of their center and their size.
  • XArray messages, where X is one of the message types listed above. A pipeline should emit XArray messages as its forward-facing ROS interface.
  • VisionInfo: Information about a classifier, such as its name and where to find its metadata database.
  • ObjectHypothesis: A class_id/score pair.
  • ObjectHypothesisWithPose: An ObjectHypothesis/pose pair. This accounts for the fact that a single input, say, a point cloud, could have different poses depending on its class. For example, a flat rectangular prism could either be a smartphone lying on its back or a book lying on its side (see the sketch after this list).
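
To show how these pieces fit together, here is a minimal sketch of a detector filling a Detection2DArray (field names follow recent ROS 2 vision_msgs releases and may differ in older ones; the classes, scores, and sizes are illustrative):

    from vision_msgs.msg import (Detection2D, Detection2DArray,
                                 ObjectHypothesisWithPose)

    array = Detection2DArray()

    det = Detection2D()
    det.bbox.size_x = 64.0  # bounding box extent, in pixels for 2D detections
    det.bbox.size_y = 48.0
    # det.bbox.center holds the box pose; its exact type differs between
    # vision_msgs releases, so it is left at its default here.

    # The same box could be several things, so attach one hypothesis per class.
    for class_id, score in [('smartphone', 0.6), ('book', 0.4)]:
        hyp = ObjectHypothesisWithPose()
        hyp.hypothesis.class_id = class_id  # nested ObjectHypothesis in recent releases
        hyp.hypothesis.score = score
        # hyp.pose can carry a class-dependent pose estimate (with covariance).
        det.results.append(hyp)

    array.detections.append(det)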

By using a very general message definition, we hope to cover as many of the various computer vision use cases as possible. Some examples of use cases that can be fully represented are:

  • Bounding box multi-object detectors with tight bounding box predictions, such as YOLO [1]
  • Class-predicting full-image detectors, such as TensorFlow examples trained on the MNIST dataset [2]
  • Full 6D-pose recognition pipelines, such as LINEMOD [3] and those included in the Object Recognition Kitchen [4]
  • Custom detectors that use various point-cloud based features to predict object attributes (one example is [5])

Please see the vision_msgs_examples repository for some sample vision pipelines that emit results using the vision_msgs format.

RVIZ Plugins

The second package in this repository provides plugins for visualizing the different detection messages in RViz2. For more information about their capabilities, please see that package's README file.

(Figure: Bounding Box Array visualization in RViz2)

References

More Repositories

  • image_pipeline: An image processing pipeline for ROS. (C++, 789 stars)
  • slam_gmapping: http://www.ros.org/wiki/slam_gmapping (C++, 653 stars)
  • vision_opencv (C++, 539 stars)
  • perception_pcl: PCL (Point Cloud Library) ROS interface stack. (C++, 417 stars)
  • pointcloud_to_laserscan: Converts a 3D point cloud into a 2D laser scan. (C++, 399 stars)
  • depthimage_to_laserscan: Converts a depth image to a laser scan for use with navigation and localization. (C++, 246 stars)
  • openslam_gmapping (C++, 217 stars)
  • laser_filters: Assorted filters designed to operate on 2D planar laser scanners, which use the sensor_msgs/LaserScan type. (C++, 166 stars)
  • perception_open3d: Open3D analog to perception_pcl, containing conversion functions between Open3D and ROS types. (C++, 158 stars)
  • laser_geometry: Provides the LaserProjection class for turning laser scan data into point clouds. (C++, 153 stars)
  • slam_karto: ROS wrapper and node for OpenKarto. (C++, 147 stars)
  • open_karto: Catkinized ROS package of the OpenKarto library (LGPL3). (C++, 131 stars)
  • image_common: Common code for working with images in ROS. (C++, 124 stars)
  • imu_pipeline: Transforms sensor_msgs/Imu messages into new coordinate frames using tf. (C++, 97 stars)
  • point_cloud_transport: Point cloud compression for ROS. (C++, 75 stars)
  • opencv_apps: http://wiki.ros.org/opencv_apps (C++, 64 stars)
  • sparse_bundle_adjustment: Sparse Bundle Adjustment library (used by slam_karto). (C++, 59 stars)
  • image_transport_plugins: A set of plugins for publishing and subscribing to sensor_msgs/Image topics in representations other than raw pixel data. (C++, 57 stars)
  • image_transport_tutorials: ROS 2 tutorials for image_transport. (C++, 50 stars)
  • laser_assembler: Provides nodes to assemble point clouds from either LaserScan or PointCloud messages. (C++, 41 stars)
  • calibration: Provides a toolchain to calibrate sensors and URDF robot models. (Python, 31 stars)
  • radar_msgs: A set of standard messages for radars in ROS. (CMake, 25 stars)
  • graft: UKF replacement for robot_pose_ekf (still in development). (C++, 11 stars)
  • pcl_conversions: [deprecated] pcl_conversions has moved to https://github.com/ros-perception/perception_pcl (C++, 10 stars)
  • camera_pose (Python, 9 stars)
  • laser_proc: Converts between representations of sensor_msgs/LaserScan and sensor_msgs/MultiEchoLaserScan. (C++, 8 stars)
  • pcl_msgs: ROS package containing PCL-related messages. (CMake, 8 stars)
  • laser_pipeline: Meta-package for laser_assembler, laser_filters, and laser_geometry. (CMake, 5 stars)
  • megatree (C++, 4 stars)
  • slam_gmapping_test_data: Repository containing data (such as ROS bags) for testing gmapping. (3 stars)