  • Stars: 1,750
  • Rank: 25,499 (top 0.6%)
  • Language: Jupyter Notebook
  • License: MIT
  • Created: over 1 year ago
  • Updated: 6 months ago


Repository Details

Metric depth estimation from a single image

ZoeDepth: Combining relative and metric depth (Official implementation)


ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth

Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, Matthias Müller

[Paper]



Usage

It is recommended to fetch the latest MiDaS repo via torch hub before proceeding:

import torch

torch.hub.help("intel-isl/MiDaS", "DPT_BEiT_L_384", force_reload=True)  # Triggers fresh download of MiDaS repo

ZoeDepth models

Using torch hub

import torch

repo = "isl-org/ZoeDepth"
# Zoe_N
model_zoe_n = torch.hub.load(repo, "ZoeD_N", pretrained=True)

# Zoe_K
model_zoe_k = torch.hub.load(repo, "ZoeD_K", pretrained=True)

# Zoe_NK
model_zoe_nk = torch.hub.load(repo, "ZoeD_NK", pretrained=True)

Using local copy

Clone this repo:

git clone https://github.com/isl-org/ZoeDepth.git && cd ZoeDepth

Using local torch hub

You can use a local source for torch hub to load the ZoeDepth models, for example:

import torch

# Zoe_N
model_zoe_n = torch.hub.load(".", "ZoeD_N", source="local", pretrained=True)

Or load the models manually:

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

# ZoeD_N
conf = get_config("zoedepth", "infer")
model_zoe_n = build_model(conf)

# ZoeD_K
conf = get_config("zoedepth", "infer", config_version="kitti")
model_zoe_k = build_model(conf)

# ZoeD_NK
conf = get_config("zoedepth_nk", "infer")
model_zoe_nk = build_model(conf)

Using ZoeD models to predict depth

##### sample prediction
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
zoe = model_zoe_n.to(DEVICE)


# Local file
from PIL import Image
image = Image.open("/path/to/image.jpg").convert("RGB")  # load
depth_numpy = zoe.infer_pil(image)  # as numpy

depth_pil = zoe.infer_pil(image, output_type="pil")  # as 16-bit PIL Image

depth_tensor = zoe.infer_pil(image, output_type="tensor")  # as torch tensor



# Tensor 
from zoedepth.utils.misc import pil_to_batched_tensor
X = pil_to_batched_tensor(image).to(DEVICE)
depth_tensor = zoe.infer(X)



# From URL
from zoedepth.utils.misc import get_image_from_url

# Example URL
URL = "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS4W8H_Nxk_rs3Vje_zj6mglPOH7bnPhQitBH8WkqjlqQVotdtDEG37BsnGofME3_u6lDk&usqp=CAU"


image = get_image_from_url(URL)  # fetch
depth = zoe.infer_pil(image)

# Save raw
from zoedepth.utils.misc import save_raw_16bit
fpath = "/path/to/output.png"
save_raw_16bit(depth, fpath)

# Colorize output
from zoedepth.utils.misc import colorize

colored = colorize(depth)

# save colored output
fpath_colored = "/path/to/output_colored.png"
Image.fromarray(colored).save(fpath_colored)
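
If you later need the metric values back from the raw file, here is a minimal sketch for reading it. It assumes save_raw_16bit stores depth scaled by 256 as uint16 (a common convention for 16-bit depth PNGs); confirm the exact factor in zoedepth/utils/misc.py before relying on the absolute values.

# Read the raw 16-bit depth back (assumed scale factor of 256 -- verify
# against zoedepth/utils/misc.py)
import numpy as np
raw = np.asarray(Image.open(fpath), dtype=np.uint16)
depth_meters = raw.astype(np.float32) / 256.0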

Environment setup

The project depends on:

  • pytorch (Main framework)
  • timm (Backbone helper for MiDaS)
  • pillow, matplotlib, scipy, h5py, opencv (utilities)

Install the environment using environment.yml:

Using mamba (fastest):

mamba env create -n zoe --file environment.yml
mamba activate zoe

Using conda:

conda env create -n zoe --file environment.yml
conda activate zoe

Check if models can be loaded:

python sanity_hub.py

Try a demo prediction pipeline:

python sanity.py

This will save a file pred.png in the root folder, showing RGB and corresponding predicted depth side-by-side.
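
For reference, here is a minimal sketch that builds a similar side-by-side image in your own code (this is not the actual sanity.py implementation; the model variant and input path are placeholders):

import numpy as np
import torch
from PIL import Image
from zoedepth.utils.misc import colorize

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
zoe = torch.hub.load(".", "ZoeD_N", source="local", pretrained=True).to(DEVICE)
image = Image.open("/path/to/image.jpg").convert("RGB")  # placeholder path
colored = Image.fromarray(colorize(zoe.infer_pil(image))).convert("RGB")
# Stack the input RGB and the colorized depth horizontally, as in pred.png
combined = np.hstack([np.asarray(image), np.asarray(colored.resize(image.size))])
Image.fromarray(combined).save("pred.png")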

Model files

Models are defined under the models/ folder, with models/<model_name>_<version>.py containing the model definitions and models/config_<model_name>.json containing the configuration.

Single metric head models (Zoe_N and Zoe_K from the paper) share a common definition and are defined under models/zoedepth, whereas the multi-headed model (Zoe_NK) is defined under models/zoedepth_nk.
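
For orientation, an illustrative layout following the <model_name>_<version>.py / config_<model_name>.json pattern described above (the v1 suffix is an example; check the repo for the current filenames):

models/
├── zoedepth/                  # single metric head (Zoe_N, Zoe_K)
│   ├── zoedepth_v1.py
│   └── config_zoedepth.json
└── zoedepth_nk/               # multi-headed (Zoe_NK)
    ├── zoedepth_nk_v1.py
    └── config_zoedepth_nk.json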

Evaluation

Download the required dataset and change the DATASETS_CONFIG dictionary in utils/config.py accordingly.
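
Before editing, you can list which dataset keys the evaluation script recognizes (a minimal sketch; the dictionary itself is defined in utils/config.py):

from zoedepth.utils.config import DATASETS_CONFIG

# Print the dataset keys accepted via -d (e.g. "nyu", "kitti"), then edit
# the matching entry in utils/config.py to point at your local data root.
print(sorted(DATASETS_CONFIG.keys()))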

Evaluating official models

On NYU-Depth-v2 for example:

For ZoeD_N:

python evaluate.py -m zoedepth -d nyu

For ZoeD_NK:

python evaluate.py -m zoedepth_nk -d nyu

Evaluating local checkpoint

python evaluate.py -m zoedepth --pretrained_resource="local::/path/to/local/ckpt.pt" -d nyu

Pretrained resources are prefixed with url:: to indicate that weights should be fetched from a URL, or local:: to indicate that the path is a local file. Refer to models/model_io.py for details.
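
A minimal sketch of how such a prefix scheme can be handled (illustrative only; the project's actual loading logic lives in models/model_io.py):

import torch

def load_checkpoint(resource: str):
    # "url::<URL>" fetches weights over HTTP; "local::<path>" reads a file.
    if resource.startswith("url::"):
        return torch.hub.load_state_dict_from_url(resource[len("url::"):], map_location="cpu")
    if resource.startswith("local::"):
        return torch.load(resource[len("local::"):], map_location="cpu")
    raise ValueError(f"Unknown pretrained_resource prefix: {resource}")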

The dataset name should match the corresponding key in utils.config.DATASETS_CONFIG.

Training

Download the training datasets as per the instructions given here. Then, to train a single-head model on NYU-Depth-v2:

python train_mono.py -m zoedepth --pretrained_resource=""

For training the Zoe_NK model:

python train_mix.py -m zoedepth_nk --pretrained_resource=""

Gradio demo

We provide a UI demo built using gradio. To get started, install UI requirements:

pip install -r ui/ui_requirements.txt

Then launch the gradio UI:

python -m ui.app

The UI is also hosted on HuggingFace🤗 here.

Citation

@misc{https://doi.org/10.48550/arxiv.2302.12288,
  doi = {10.48550/ARXIV.2302.12288},
  url = {https://arxiv.org/abs/2302.12288},
  author = {Bhat, Shariq Farooq and Birkl, Reiner and Wofk, Diana and Wonka, Peter and Müller, Matthias},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

More Repositories

1. Open3D - Open3D: A Modern Library for 3D Data Processing (C++, 10,396 stars)
2. MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022" (Python, 4,041 stars)
3. OpenBot - OpenBot leverages smartphones as brains for low-cost robots. We have designed a small electric vehicle that costs about $50 and serves as a robot body. Our software stack for Android smartphones supports advanced robotics workloads such as person following and real-time autonomous navigation. (Swift, 2,679 stars)
4. DPT - Dense Prediction Transformers (Python, 1,794 stars)
5. Open3D-ML - An extension of Open3D to address 3D Machine Learning tasks (Python, 1,644 stars)
6. PhotorealismEnhancement - Code & Data for Enhancing Photorealism Enhancement (Python, 1,237 stars)
7. MultiObjectiveOptimization - Source code for the NeurIPS 2018 paper "Multi-Task Learning as Multi-Objective Optimization" (Python, 753 stars)
8. lang-seg - Language-Driven Semantic Segmentation (Jupyter Notebook, 654 stars)
9. FastGlobalRegistration - Fast Global Registration (C++, 489 stars)
10. Open3D-PointNet2-Semantic3D - Semantic3D segmentation with Open3D and PointNet++ (Python, 461 stars)
11. FreeViewSynthesis - Code repository for "Free View Synthesis", ECCV 2020 (Python, 262 stars)
12. StableViewSynthesis (Python, 212 stars)
13. DeepLagrangianFluids - Code repository for "Lagrangian Fluid Simulation with Continuous Convolutions", ICLR 2020 (Python, 187 stars)
14. spear - SPEAR: A Simulator for Photorealistic Embodied AI Research (C++, 173 stars)
15. DirectFuturePrediction - Code for the paper "Learning to Act by Predicting the Future", Alexey Dosovitskiy and Vladlen Koltun, ICLR 2017 (Python, 152 stars)
16. VI-Depth - Code for Monocular Visual-Inertial Depth Estimation (ICRA 2023) (Python, 139 stars)
17. NPHard - Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search (Python, 139 stars)
18. redwood-3dscan (Python, 100 stars)
19. Intseg - Interactive Image Segmentation with Latent Diversity (Python, 78 stars)
20. TanksAndTemples - Toolbox for the TanksAndTemples benchmark website (Python, 58 stars)
21. dcflow - Code for the paper "Accurate Optical Flow via Direct Cost Volume Processing", Jia Xu, René Ranftl, and Vladlen Koltun, CVPR 2017 (C++, 52 stars)
22. adaptive-surface-reconstruction - Adaptive Surface Reconstruction for 3D Data Processing (Python, 48 stars)
23. DFE (Python, 43 stars)
24. open3d-cmake-find-package - Find pre-installed Open3D package in CMake (C++, 42 stars)
25. vision-for-action - Code to accompany "Does computer vision matter for action?" (Python, 41 stars)
26. LMRS - Source code for the ICLR 2020 paper "Learning to Guide Random Search" (Python, 39 stars)
27. open3d_downloads - Hosting Open3D test data for development use (23 stars)
28. Open3D-3rdparty (C, 20 stars)
29. open3d-cmake-external-project - Use Open3D as a CMake external project (CMake, 15 stars)
30. 0shot-object-insertion - Simulation and robot code for contact-rich household object insertion (ICRA 2023) (Python, 11 stars)
31. objects-with-lighting (8 stars)
32. Open3D-Viewer (C++, 7 stars)
33. generalized-smoothing - Companion code for the ICML 2022 paper "Generalizing Gaussian Smoothing for Random Search" (Python, 5 stars)
34. Open3D-Python-CI - Testing Open3D Python package from PyPI and Conda (4 stars)
35. MetaLearningTradeoffs - Source code for the NeurIPS 2020 paper "Modeling and Optimization Trade-off in Meta-learning" (Python, 4 stars)
36. hello-world-docker-action (Dockerfile, 1 star)
37. mshadow - Forked from https://github.com/dmlc/mshadow (C++, 1 star)