
PythonFOAM:

In-situ data analyses with OpenFOAM and Python

Using Python modules for in-situ data analytics with OpenFOAM. NOTE that this is NOT PyFOAM, which is an automation tool for running OpenFOAM cases. What you see in this repository is OpenFOAM calling Python functions and classes for in-situ data analytics. You may offload some portion of your compute task to Python for a variety of reasons, chiefly data-driven tasks that use the Python ML ecosystem, and quick prototyping of algorithms.
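
As a flavor of how this works, here is a minimal sketch of the kind of Python module the solver can call into; the function name and the NumPy-based signature are illustrative assumptions, not the repository's actual interface:

import numpy as np

def analyze_snapshot(field):
    # 'field' is assumed to arrive as a NumPy array built from an OpenFOAM field
    print("Snapshot received, shape:", field.shape)
    return float(np.mean(field))  # hypothetical scalar diagnostic for the solver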

OpenFOAM versions that should compile without changes:

  • openfoam.com versions: v2012, v2106
  • openfoam.org versions: 8

You can find an extensive hands-on tutorial, courtesy of the ALCF PythonFOAM workshop, here: https://www.youtube.com/watch?v=-Sa2OEssru8

Prerequisites

  • OpenFOAM
  • numpy (python) with devel headers
  • tensorflow (python), version 2.1
  • matplotlib (python)
  • python-dev-tools (python)

Contents

  1. Solver_Examples/

    1. PODFoam/: A pimpleFoam solver with in-situ collection of snapshot data for a streaming singular value decomposition (SVD). Python bindings are used to call a Python streaming-SVD class object from OpenFOAM (a minimal sketch of the streaming update idea follows this list).

    2. APMOSFoam/: A pimpleFoam solver with in-situ collection of snapshot data for a parallelized singular value decomposition. While the previous example performs the SVD on data from only one rank, this solver performs a global, but distributed, SVD. However, the SVD updates are not streaming.

    3. AEFoam/: A pimpleFoam solver with in-situ collection of snapshot data for training a deep learning autoencoder.

  2. Turbulence_Model_Examples/ (work in progress): see the detailed README.md in this folder.
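
To make the streaming-SVD idea in PODFoam/ concrete, below is a minimal NumPy sketch of a rank-truncated incremental SVD in the spirit of such a class; it illustrates the technique and is not the repository's actual implementation:

import numpy as np

class StreamingSVD:
    # Maintain a rank-k truncated SVD of a growing snapshot matrix,
    # updated one snapshot (column) at a time, without storing all snapshots.
    def __init__(self, k):
        self.k = k
        self.U = None  # left singular vectors, shape (n, r)
        self.S = None  # singular values, shape (r,)

    def update(self, x):
        x = np.asarray(x, dtype=float).reshape(-1, 1)
        if self.U is None:
            s = np.linalg.norm(x)
            self.S = np.array([s])
            self.U = x / s
            return
        p = self.U.T @ x    # coefficients of x in the current basis
        r = x - self.U @ p  # component of x orthogonal to the basis
        rho = np.linalg.norm(r)
        q = r / rho if rho > 1e-12 else np.zeros_like(x)
        # The SVD of a small bordered matrix refreshes the factorization
        K = np.block([[np.diag(self.S), p],
                      [np.zeros((1, self.S.size)), np.array([[rho]])]])
        Uk, Sk, _ = np.linalg.svd(K)
        U_new = np.hstack([self.U, q]) @ Uk
        self.U, self.S = U_new[:, :self.k], Sk[:self.k]

Calling update() once per time step with a flattened field array keeps only the leading k left singular vectors (the POD modes) in memory.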

To compile and run

Inspect prep_env.sh to set the paths to the various Python and numpy headers and libraries, and to source your OpenFOAM 8 installation. Replace these with the include/lib paths of your personal Python environment. The Python modules within the Run_Case/ directories of the different solvers require numpy, matplotlib, and tensorflow, so ensure that your environment has these installed. The easiest way to obtain them is pip install tensorflow==2.1, which automatically pulls in a compatible numpy, followed by pip install matplotlib for plotting capability. You will also need mpi4py, which you can install with pip install mpi4py. These commands are collected below.
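
Assuming a pip-based environment, the installation commands mentioned above are:

pip install tensorflow==2.1
pip install matplotlib
pip install mpi4py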

  1. Solvers: After running source prep_env.sh, go into the respective solver folder (for example, PODFoam/) and use wclean && wmake to build your solver. Run your solver example from Run_Case/. Note the presence of python_module.py within Run_Case/. A possible end-to-end command sequence is sketched after this list.

  2. Turbulence model examples: See README.md in Turbulence_Model_Examples/.
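
As a concrete illustration of step 1, the full sequence might look like the following; the solver binary name PODFoam is an assumption here, and the paths follow the Contents section above:

source prep_env.sh
cd Solver_Examples/PODFoam/
wclean && wmake
cd Run_Case/
PODFoam   # assumed binary name produced by wmake; set up and run your case as usual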

Update - 12/16/2022

You can now debug the C++ components of PythonFOAM with Visual Studio Code. For this you need to have OpenFOAM-8 built in debug mode. Here is a quick tutorial on how to do so:

  1. Download OpenFOAM-8 source
git clone https://github.com/OpenFOAM/OpenFOAM-8.git
git clone https://github.com/OpenFOAM/ThirdParty-8.git

Go to line 84 in OpenFOAM-8/etc/bashrc and set the compile option to debug:

export WM_COMPILE_OPTION=Debug

then use source OpenFOAM-8/etc/bashrc to load the environment variables. After this step, go to ThirdParty-8/ and use ./Allwmake. Afterwards, go to OpenFOAM-8/ and use ./Allwmake -j. (Note that we are skipping the ParaView compilation.) We recommend keeping one debug build and one optimized build of OpenFOAM on your system at all times. The full sequence is collected below.
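
Collected in one place, and assuming the layout above, the debug build steps are:

git clone https://github.com/OpenFOAM/OpenFOAM-8.git
git clone https://github.com/OpenFOAM/ThirdParty-8.git
# edit OpenFOAM-8/etc/bashrc (line 84): export WM_COMPILE_OPTION=Debug
source OpenFOAM-8/etc/bashrc
(cd ThirdParty-8 && ./Allwmake)
(cd OpenFOAM-8 && ./Allwmake -j)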

  2. Install Visual Studio Code and make sure it has the C/C++ (IntelliSense) and C/C++ Extension Pack extensions.

  3. Navigate to your solver build directory; here we use PODFoam_Debug/ as an example. This folder has the files and wmake instructions to build PODFoam_Debug. You will note that the folder also contains the directories required to run a CFD case (i.e., the contents of run_case/ are in the same build directory). This is required for debug-mode execution of our solver.

  4. Create a new hidden folder in the PODFoam_Debug/ directory called .vscode/. In it, create four files:

launch.json
c_cpp_properties.json
tasks.json
settings.json

Use the files in PODFoam_Debug/.vscode in this repository to populate these files (further information here: https://github.com/Rvadrabade/Debugging-OpenFOAM-with-Visual-Studio-Code/).
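
The repository's .vscode/ files are authoritative; purely to illustrate the shape of a gdb launch configuration, a minimal launch.json might look like the following, where the program path is an assumption that should match wherever wmake places your debug binary:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug PODFoam_Debug",
      "type": "cppdbg",
      "request": "launch",
      "program": "${env:FOAM_USER_APPBIN}/PODFoam_Debug",
      "cwd": "${workspaceFolder}",
      "stopAtEntry": false,
      "MIMode": "gdb"
    }
  ]
}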

  5. In a new terminal, source prep_env.sh -debug to ensure that you are running with the debug version of OpenFOAM. Note that here you have to make sure you are pointing to your correct bashrc; the paths in this example are for my personal machine. Follow the previous steps to compile a debug version of PODFoam_Debug from PODFoam_Debug/. There should be no issues here.

  6. Navigate to PODFoam_Debug/ and run Visual Studio Code with code .. Set a breakpoint in PODFoam_Debug.C and hit F5 in the debug panel to initialize debugging. Standard gdb rules apply from here on.

Note that we are still investigating mixed-mode debugging for C++ and Python.

Docker

A Docker container with the contents of this repo is available here. You can use

docker pull romitmaulik1/pythonfoam_docker:latest

on a machine with Docker installed to download an image that has PythonFOAM set up on it. Subsequently,

docker run -t -d -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --privileged --name pythonfoam_container romitmaulik1/pythonfoam_docker
xhost +local:docker # For running GUI applications from docker
docker start pythonfoam_container
docker exec -i -t pythonfoam_container /bin/bash

will create a container (named pythonfoam_container) from the image and start a shell for you to run experiments. Navigate to /home/PythonFOAM within the shell to find the source code and test cases. For a quick crash course on using Docker, see this tutorial by Jean Rabault.
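
Inside the container shell, a solver case would then be reached with, for example (the subdirectory follows the Contents listing above and is an assumption about the image layout):

cd /home/PythonFOAM/Solver_Examples/PODFoam/Run_Case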

Point of contact for further assistance: Romit Maulik ([email protected]). This work was performed using the resources of the Argonne Leadership Computing Facility, a U.S. Department of Energy (Office of Science) user facility at Argonne National Laboratory, Lemont, IL, USA. Several aspects of this research were also performed at the Department of Computer Science at IIT-Chicago (SPEAR Team, with support from NSF Award #2119294, PI Zhiling Lan).

LICENSE

Argonne open source for the Python integration
