- flownet2: FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks
- hand3d: Network estimating 3D Handpose from single color images
- demon: DeMoN: Depth and Motion Network
- freihand: A dataset for estimation of hand pose and shape from single color images.
- deeptam: DeepTAM: Deep Tracking and Mapping https://lmb.informatik.uni-freiburg.de/people/zhouh/deeptam/
- mv3d: Multi-view 3D Models from Single Images with a Convolutional Network
- rgbd-pose3d: 3D Human Pose Estimation in RGBD Images for Robotic Task Learning
- flownet2-docker: Dockerfile and runscripts for FlowNet 2.0 (estimation of optical flow)
- netdef_models: Repository for different network models related to flow/disparity (ECCV 18)
- ogn: Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs
- orion: ORION: Orientation-boosted Voxel Nets for 3D Object Recognition
- what3d: What Do Single-view 3D Reconstruction Networks Learn?
- dispnet-flownet-docker: Dockerfile and runscripts for DispNet and FlowNet1 (estimation of disparity and optical flow)
- Unet-Segmentation: The U-Net Segmentation plugin for Fiji (ImageJ)
- robustmvd: Repository for the Robust Multi-View Depth Benchmark
- contra-hand: Code in conjunction with the publication 'Contrastive Representation Learning for Hand Shape Estimation'
- Multimodal-Future-Prediction: The official repository for the CVPR 2019 paper "Overcoming Limitations of Mixture Density Networks: A Sampling and Fitting Framework for Multimodal Future Prediction"
- lmbspecialops: A collection of tensorflow ops
- FLN-EPN-RPN: This repository contains the source code of the CVPR 2020 paper: "Multimodal Future Localization and Emergence Prediction for Objects in Egocentric View with a Reachability Prior"
- flow_rl
- netdef-docker: DispNet3, FlowNet3, FlowNetH, SceneFlowNet -- in Docker
- caffe-unet-docker: The U-Net Segmentation server (caffe_unet) for Docker
- Contrastive-Future-Trajectory-Prediction: The official repository of the ICCV paper "On Exposing the Challenging Long Tail in Future Prediction of Traffic Actors"
- locov: Localized Vision-Language Matching for Open-vocabulary Object Detection
- unsup-car-dataset: Unsupervised Generation of a Viewpoint Annotated Car Dataset from Videos
- FreiPose-docker: FreiPose: A Deep Learning Framework for Precise Animal Motion Capture in 3D Spaces
- optical-flow-2d-data-generation: Caffe(v1)-compatible codebase to generate optical flow training data on-the-fly; used for the IJCV 2018 paper "What Makes Good Synthetic Training Data for Learning Disparity and Optical Flow Estimation?" (http://dx.doi.org/10.1007/s11263-018-1082-6)
- cv-exercises
- spr-exercises
- td-or-not-td: Code for the paper "TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning", Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun and Thomas Brox, ICLR 2018
- sf2se3: Repository for SF2SE3: Clustering Scene Flow into SE(3)-Motions via Proposal and Selection
- ovqa
- understanding_flow_robustness: Official repository for "Towards Understanding Adversarial Robustness of Optical Flow Networks" (CVPR 2022)
- neural-point-cloud-diffusion: Official repository for "Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Generation"
- ldce: Official repository for "Latent Diffusion Counterfactual Explanations"
- PreFAct: Code and Models for the paper "Learning Representations for Predicting Future Activities"
- ROS-packages: A collection of ROS packages for LMB software; DispNet(1+3), FlowNet2, etc.
- FreiPose
- diffusion-for-ood: Official repository for "Diffusion for Out-of-Distribution Detection on Road Scenes and Beyond". Coming soon.
- tfutils: tfutils is a set of tools for training networks with tensorflow
- FreiCalib
- netdef_slim: A python wrapper for tf to ease creation of network definitions.
- iRoCS-Toolbox: n-D Image Analysis libraries and tools
- rohl
- RecordTool
- tree-planting: Official repository for "Climate-sensitive Urban Planning Through Optimization of Tree Placements"
- ade-ood: Official repo for the ADE-OoD benchmark.