  • Stars: 487
  • Rank: 90,352 (Top 2%)
  • Language: Python
  • Created: over 6 years ago
  • Updated: almost 3 years ago


Repository Details

A multi-sensor capture system for free viewpoint video.

A Portable, Flexible and Facile Volumetric Capture System

Moving beyond green screens as well as stationary, expensive and hard-to-use setups

Links: Project Page · Conference Papers · Journal Paper · Journal Abstract


Volumetric Capture Banner


Documentation

Updated documentation with assembly instructions, installation guides, examples, and more is now available at the project's page: https://vcl3d.github.io/VolumetricCapture/.

As volumetric capture requires deploying a complex system that spans multiple hardware components and distributed software, please consult the online documentation first, and then the closed issues, as most problems have already been addressed there.

News

The latest release, supporting both Azure Kinect DK and Intel RealSense D415 (RealSense SDK 2.0) sensors, is now available for download, with various fixes and user feedback integrated. It comes with an improved multi-sensor calibration that offers higher accuracy and greater flexibility in the number and placement of sensors. More information can be found in [8].


Overview

This repository contains VCL's evolving toolset for volumetric (multi-RGB-D sensor) capturing and recording, initially presented in [1]. It is research-oriented, yet flexible and optimized, software with integrated multi-sensor alignment research results ([6], [8]) that can be, and has been, used in the context of:

  • Live Tele-presence [2] in Augmented VR or Mixed/Augmented Reality settings
  • Performance Capture [3]
  • Free Viewpoint Video (FVV)
  • Immersive Applications (e.g. events and/or gaming) [4]
  • Motion Capture [5]
  • Post-production [9]
  • Data Collection [7], [10]

Design

The toolset is designed as a distributed system in which a number of processing units each manage and collect data from a single sensor through a headless application. A set of sensors is orchestrated by a centralized UI application that is also the delivery point of the connected sensor streams. Communication is handled by a message broker, typically co-hosted with the controlling application, although this is not required.
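
To make this layout concrete, here is a minimal, illustrative Python sketch of a broker-mediated setup, assuming a RabbitMQ-style broker accessed through the pika client; the queue name, message fields, and the sensor_node/controller functions are hypothetical and do not reflect the toolset's actual protocol or message schema.

# Illustrative sketch only (assumed RabbitMQ-style broker via pika); the
# toolset's real messaging protocol and schema live in its own binaries/docs.
import json
import pika

BROKER_HOST = "localhost"  # broker is typically co-hosted with the controlling UI

def sensor_node(sensor_id: str) -> None:
    """Headless per-sensor process: publishes frame (meta)data to the broker."""
    conn = pika.BlockingConnection(pika.ConnectionParameters(BROKER_HOST))
    channel = conn.channel()
    channel.queue_declare(queue="frames")  # hypothetical queue name
    msg = {"sensor": sensor_id, "frame_index": 0, "timestamp_ns": 0}
    channel.basic_publish(exchange="", routing_key="frames", body=json.dumps(msg))
    conn.close()

def controller() -> None:
    """Centralized UI side: the delivery point that collects all sensor streams."""
    conn = pika.BlockingConnection(pika.ConnectionParameters(BROKER_HOST))
    channel = conn.channel()
    channel.queue_declare(queue="frames")

    def on_frame(ch, method, properties, body):
        data = json.loads(body)
        print(f"frame {data['frame_index']} from sensor {data['sensor']}")

    channel.basic_consume(queue="frames", on_message_callback=on_frame, auto_ack=True)
    channel.start_consuming()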

Sensors

We now support both Intel RealSense D415 and Azure Kinect DK sensors, including mixed setups (see the sketch below).

[Sensor images: Intel RealSense D415 and Azure Kinect DK]
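
To give a rough idea of what grabbing frames from a mixed rig involves at the SDK level, the snippet below reads one depth frame from each sensor family using the publicly available Python wrappers pyrealsense2 and pyk4a. It is illustrative only and is not part of the toolset, which ships as Windows binaries.

# Hypothetical illustration: one depth frame from each supported sensor family,
# via the public Python wrappers (pyrealsense2, pyk4a). Not toolset code.
import pyrealsense2 as rs
from pyk4a import PyK4A, Config

def grab_realsense_depth():
    """Grab a single depth frame from an Intel RealSense D415."""
    pipeline = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
    pipeline.start(cfg)
    try:
        frames = pipeline.wait_for_frames()
        return frames.get_depth_frame()
    finally:
        pipeline.stop()

def grab_kinect_depth():
    """Grab a single depth frame from an Azure Kinect DK."""
    k4a = PyK4A(Config())
    k4a.start()
    try:
        capture = k4a.get_capture()
        return capture.depth  # numpy array of depth values (millimeters)
    finally:
        k4a.stop()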

Highlights

  • Multi-sensor streaming and recording
  • Quick and easy volumetric sensor alignment
  • Hardware and software (IEEE 1588 PTP) synchronization (see the frame-grouping sketch after this list)
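
As a rough sketch of what software synchronization amounts to at the frame level, the helper below greedily groups per-sensor frames whose timestamps fall within a small tolerance, assuming all sensors stamp frames against a shared, PTP-disciplined clock; the function, its inputs, and the tolerance value are illustrative and not the toolset's actual implementation.

# Illustrative only: assemble synchronized multi-sensor frame sets by grouping
# frames whose (shared-clock) timestamps agree within a tolerance.
from typing import Dict, List, Tuple

def group_frames(
    streams: Dict[str, List[Tuple[int, object]]],  # sensor_id -> [(timestamp_ns, frame), ...], time-sorted
    tolerance_ns: int = 5_000_000,                  # 5 ms matching window (assumed value)
) -> List[Dict[str, object]]:
    """Greedily match the earliest pending frame of every stream into a set."""
    cursors = {s: 0 for s in streams}
    groups: List[Dict[str, object]] = []
    while all(cursors[s] < len(streams[s]) for s in streams):
        heads = {s: streams[s][cursors[s]] for s in streams}
        anchor = min(ts for ts, _ in heads.values())
        if all(ts - anchor <= tolerance_ns for ts, _ in heads.values()):
            groups.append({s: frame for s, (ts, frame) in heads.items()})
            for s in cursors:
                cursors[s] += 1
        else:
            # The oldest head cannot be matched; drop it and try again.
            lagging = min(heads, key=lambda s: heads[s][0])
            cursors[lagging] += 1
    return groups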

Intro

Download

Check our latest releases.

Citation

If you use the system or find this work useful, please cite:

@inproceedings{sterzentsenko2018low,
  title={A low-cost, flexible and portable volumetric capturing system},
  author={Sterzentsenko, Vladimiros and Karakottas, Antonis and Papachristou, Alexandros and Zioulis, Nikolaos and Doumanoglou, Alexandros and Zarpalas, Dimitrios and Daras, Petros},
  booktitle={2018 14th International Conference on Signal-Image Technology \& Internet-Based Systems (SITIS)},
  pages={200--207},
  year={2018},
  organization={IEEE}
}

Caveats

We currently ship binaries only for the Windows platform, supporting Windows 10.

References

[1] Sterzentsenko, V., Karakottas, A., Papachristou, A., Zioulis, N., Doumanoglou, A., Zarpalas, D. and Daras, P., 2018, November. A low-cost, flexible and portable volumetric capturing system. In 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) (pp. 200-207). IEEE.

[2] Alexiadis, D.S., Chatzitofis, A., Zioulis, N., Zoidi, O., Louizis, G., Zarpalas, D. and Daras, P., 2016. An integrated platform for live 3D human reconstruction and motion capturing. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 27(4), pp.798-813.

[3] Alexiadis, D.S., Zioulis, N., Zarpalas, D. and Daras, P., 2018. Fast deformable model-based human performance capture and FVV using consumer-grade RGB-D sensors. Pattern Recognition (PR), 79, pp.260-278.

[4] Zioulis, N., Alexiadis, D., Doumanoglou, A., Louizis, G., Apostolakis, K., Zarpalas, D. and Daras, P., 2016, September. 3D tele-immersion platform for interactive immersive experiences between remote users. In 2016 IEEE International Conference on Image Processing (ICIP) (pp. 365-369). IEEE.

[5] Chatzitofis, A., Zarpalas, D., Kollias, S. and Daras, P., 2019. DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors. Sensors, 19(2), p.282.

[6] Papachristou, A., Zioulis, N., Zarpalas, D. and Daras, P., 2018. Markerless structure-based multi-sensor calibration for free viewpoint video capture. In International Conference on Computer Graphics, Visualization and Computer Vision (WSCG).

[7] Sterzentsenko, V., Saroglou, L., Chatzitofis, A., Thermos, S., Zioulis, N., Doumanoglou, A., Zarpalas, D. and Daras, P., 2019. Self-Supervised Deep Depth Denoising. In International Conference on Computer Vision (ICCV).

[8] Sterzentsenko, V., Doumanoglou, A., Thermos, S., Zioulis, N., Zarpalas, D. and Daras, P., 2020. Deep Soft Procrustes for Markerless Volumetric Sensor Alignment. In IEEE Conference on Virtual Reality and 3D User Interfaces (VR).

[9] Karakottas, A., Zioulis, N., Doumanoglou, A., Sterzentsenko, V., Gkitsas, V., Zarpalas, D. and Daras, P., 2020, July. XR360: A Toolkit for Mixed 360 and 3D Productions. In 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (pp. 1-6). IEEE.

[10] Chatzitofis, A., Saroglou, L., Boutis, P., Drakoulis, P., Zioulis, N., Subramanyam, S., Kevelham, B., Charbonnier, C., Cesar, P., Zarpalas, D., Kollias, S. and Daras, P., 2020. HUMAN4D: A Human-Centric Multimodal Dataset for Motions & Immersive Media. IEEE Access.

More Repositories

1. DeepDepthDenoising (Python, 133 stars): This repo includes the source code of the fully convolutional depth denoising model presented in https://arxiv.org/pdf/1909.01193.pdf (ICCV19).
2. SphericalViewSynthesis (Python, 113 stars): Code accompanying the paper "Spherical View Synthesis for Self-Supervised 360 Depth Estimation", 3DV 2019.
3. 3D60 (Python, 103 stars): Tools accompanying the 3D60 spherical panoramas dataset.
4. Pano3D (Python, 81 stars): Code and models for "Pano3D: A Holistic Benchmark and a Solid Baseline for 360 Depth Estimation", OmniCV Workshop @ CVPR21.
5. DeepPanoramaLighting (Python, 67 stars): Deep Lighting Environment Map Estimation from Spherical Panoramas (CVPRW20).
6. StructureNet (Python, 43 stars): Markerless volumetric alignment for depth sensors. Contains the code of the work "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment" (IEEE VR 2020).
7. PanoDR (Python, 37 stars): Code and models for "PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes" presented at the OmniCV workshop of CVPR21.
8. SingleShotCuboids (Python, 26 stars): Code accompanying the paper "Single-Shot Cuboids: Geodesics-based End-to-end Manhattan Aligned Layout Estimation from Spherical Panoramas".
9. BlenderScripts (Python, 24 stars): Scripts for data generation using Blender and 3D datasets like Matterport3D.
10. DronePose (Python, 22 stars): Code for DronePose: Photorealistic UAV-Assistant Dataset Synthesis for 3D Pose Estimation via a Smooth Silhouette Loss (ECCVW 2020).
11. HyperSphereSurfaceRegression (Python, 17 stars): Code accompanying the paper "360 Surface Regression with a Hyper-Sphere Loss", 3DV 2019.
12. UAVA (Python, 9 stars): A multimodal UAV assistant dataset.
13. 360Vision (Python, 8 stars): 360 tools.
14. AVoidX (C#, 5 stars): AVoidX: An Augmented VR Game.
15. ExplicitLayoutDepth (3 stars): Repo accompanying the paper "Monocular spherical depth estimation with explicitly connected weak layout cues".
16. HybridSkip (2 stars): Code accompanying the paper "Hybrid Skip: A Biologically Inspired Skip Connection for the UNet Architecture".
17. fast_precise_hippocampus_segmentation_cnn (MATLAB, 2 stars).
18. CMUDRN (2 stars): CMU-DRN.
19. SynthRSF (1 star): Novel Photorealistic Synthetic Dataset for Adverse Weather Condition Denoising.
20. vcl3d.github.io (HTML, 1 star): VCL's (Visual Computing Lab, vcl.iti.gr) 3D vision team projects page.
21. 360Fusion (1 star).