
Lagrangian Fluid Simulation with Continuous Convolutions


This repository contains code for our ICLR 2020 paper. We show how to train particle-based fluid simulation networks as CNNs using continuous convolutions. The code allows you to generate data, train your own model, or run a pretrained model.

canyon video

Please cite our paper (pdf) if you find this code useful:

@inproceedings{Ummenhofer2020Lagrangian,
        title     = {Lagrangian Fluid Simulation with Continuous Convolutions},
        author    = {Benjamin Ummenhofer and Lukas Prantl and Nils Thuerey and Vladlen Koltun},
        booktitle = {International Conference on Learning Representations},
        year      = {2020},
}

To stay informed about updates, we recommend watching this repository.

Dependencies

The versions match the configuration that we have tested on a system with Ubuntu 18.04. SPlisHSPlasH 2.4.0 is required for generating training data (make sure that it is compiled in Release mode). We recommend using the latest versions of all other packages.

Installing Open3D 0.11 and later with pip

The ML module is included in Open3D 0.11 and later and can simply be installed with

pip install open3d

Make sure that the version of your ML framework matches the version for which the ML ops in Open3D have been built. For Open3D 0.11 this is CUDA 10.1, TensorFlow 2.3, and PyTorch 1.6. If you cannot match this configuration, we recommend building Open3D from source.
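As an illustrative sketch (not part of the repo), a small helper can compare the "major.minor" prefix of the installed framework's version string against the version the Open3D ML ops were built for:

```python
def versions_match(installed, required, parts=2):
    """Compare the first `parts` dot-separated components of two version
    strings, e.g. PyTorch '1.6.0' against the '1.6' that Open3D 0.11 expects."""
    return installed.split(".")[:parts] == required.split(".")[:parts]

# e.g. compare torch.__version__ / tf.__version__ against the built-for versions
print(versions_match("1.6.0", "1.6"))  # True
print(versions_match("1.7.1", "1.6"))  # False
```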

Building Open3D with the ML module from source

At the moment, Open3D needs to be built from source for the code in this repo to work. To build Open3D with the ML ops for TensorFlow and PyTorch, do the following

git clone --recursive https://github.com/intel-isl/Open3D.git
# check the file Open3D/util/scripts/install-deps-ubuntu.sh
# for dependencies and install them. For more instructions see the Open3D documentation

mkdir Open3D/build
cd Open3D/build

# This builds the ml ops for both TensorFlow and PyTorch.
# If you don't need both frameworks you can disable the one you don't need with OFF.
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TENSORFLOW_OPS=ON -DBUILD_PYTORCH_OPS=ON -DBUILD_CUDA_MODULE=ON -DGLIBCXX_USE_CXX11_ABI=OFF
make install-pip-package
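After make install-pip-package finishes, a quick sanity check can confirm that the ML ops landed in the installed package. This is a sketch; the module names open3d.ml.tf and open3d.ml.torch are as in Open3D 0.11+:

```python
import importlib.util

def has_module(name):
    """Return True if `name` can be located, without fully importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False

# With -DBUILD_TENSORFLOW_OPS=ON and -DBUILD_PYTORCH_OPS=ON both should exist.
for mod in ("open3d.ml.tf", "open3d.ml.torch"):
    print(mod, "available:", has_module(mod))
```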

Running the pretrained model

The pretrained network weights are in scripts/pretrained_model_weights.h5 for TensorFlow and in scripts/pretrained_model_weights.pt for PyTorch. The following code runs the network on the example scene:

cd scripts
# with TensorFlow
./run_network.py --weights pretrained_model_weights.h5 \
                 --scene example_scene.json \
                 --output example_out \
                 --write-ply \
                 train_network_tf.py
# or with PyTorch
./run_network.py --weights pretrained_model_weights.pt \
                 --scene example_scene.json \
                 --output example_out \
                 --write-ply \
                 train_network_torch.py

The script writes point clouds with the particle positions as .ply files, which can be visualized with Open3D. Note that SPlisHSPlasH is required for sampling the initial fluid volumes from .obj files.
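The written files are plain point clouds. As a rough sketch of the ASCII variant of the PLY format (the file name and particle positions below are made up for illustration):

```python
def write_ply(path, points):
    """Write an ASCII PLY point cloud with float x, y, z per vertex."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ply("particles.ply", [(0.0, 0.1, 0.2), (1.0, 1.1, 1.2)])
```

Files like this can be loaded with open3d.io.read_point_cloud and shown with open3d.visualization.draw_geometries.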

Training the network

Data generation

The data generation scripts are in the datasets subfolder. To generate the training and validation data:

  1. Set the path to the DynamicBoundarySimulator of SPlisHSPlasH in the datasets/splishsplash_config.py script.
  2. Run the script from within the datasets folder
    cd datasets
    ./create_data.sh
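Step 1 amounts to pointing the config at your simulator binary. A hypothetical sketch of the relevant line (the actual variable name in datasets/splishsplash_config.py may differ, and the path below is a placeholder):

```python
# datasets/splishsplash_config.py (illustrative; check the file for the
# actual variable name used by the data generation scripts)
SIMULATOR_BIN = "/path/to/SPlisHSPlasH/bin/DynamicBoundarySimulator"
```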

Data download

If you want to skip the data generation step you can download training and validation data from the links below.

  • default data (34 GB): link
  • DPI dam break (24 GB): link
  • 6k box data (23 GB): link

For the default data, the training set has been generated with the scripts in this repository, and the validation data corresponds to the data used in the paper.

The DPI dam break data has been generated with the code from the DPI-Nets repo. Note that the data has been scaled to match the particle radius used for our method. See the scripts/dambreak.yaml config file for more information on the scale factor.

The 6k box data is a simplified version of the default data with a constant number of particles and always uses a simple box as environment.

Training scripts

To train the model with the generated data, simply run one of the train_network_{tf,torch}.py scripts from within the scripts folder.

cd scripts
# TensorFlow version
./train_network_tf.py default.yaml
# PyTorch version
./train_network_torch.py default.yaml

The scripts will create a folder train_network_tf_default or train_network_torch_default, respectively, with snapshots and log files. The log files can be viewed with TensorBoard.

Evaluating the network

To evaluate the network, run the scripts/evaluate_network.py script like this:

./evaluate_network.py --trainscript train_network_tf.py --cfg default.yaml
# or
./evaluate_network.py --trainscript train_network_torch.py --cfg default.yaml

This will create the file train_network_{tf,torch}_default_eval_50000.json, which contains the individual errors between frame pairs.

The script will also print the overall errors. The output should look like this if you use the generated data: {'err_n1': 0.000859004137852537, 'err_n2': 0.0024183266885233934, 'whole_seq_err': 0.030323669719872864}
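The printed summary is a Python dict literal, so it can be parsed back for quick comparisons. A small sketch using the numbers above:

```python
import ast

# Summary as printed by evaluate_network.py (values from the run above).
summary = ast.literal_eval(
    "{'err_n1': 0.000859004137852537, "
    "'err_n2': 0.0024183266885233934, "
    "'whole_seq_err': 0.030323669719872864}"
)
print(sorted(summary))  # ['err_n1', 'err_n2', 'whole_seq_err']
# The one-step error is smaller than the two-step error, which in turn is
# smaller than the accumulated whole-sequence error.
assert summary["err_n1"] < summary["err_n2"] < summary["whole_seq_err"]
```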

Note that the numbers differ from the numbers in the paper due to changes in the data generation:

  • We use Open3D to sample surface points to avoid shipping a modified SPlisHSPlasH.
  • The sequence of pseudorandom numbers used in the data generation is different, which results in different scenes for training and testing.

If you have downloaded the validation data, then the output should be similar to the numbers in the paper: {'err_n1': 0.000665973493194656, 'err_n2': 0.0018649007299291042, 'whole_seq_err': 0.03081335372162257}

Rendering

See the scenes directory for instructions on how to create and render the example scenes like the canyon.

Licenses

Code and scripts are under the MIT license.

Data files in datasets/models and scripts/pretrained_model_weights.{h5,pt} are under the CDLA-Permissive-1.0 license.
