Parallel, highly efficient code (CPU and GPU) for DEM and CFD-DEM simulations.

PhasicFlow is a parallel C++ code for performing DEM simulations. It can run on shared-memory, multi-core computational units such as multi-core CPUs or GPUs (for now, it works on CUDA-enabled GPUs). The parallelization method mainly relies on loop-level parallelization on a shared-memory computational unit. You can build and run PhasicFlow in serial mode on regular PCs, in parallel mode on multi-core CPUs, or build it for a GPU device to offload computations to the GPU. In its current state, you can simulate millions of particles (up to 80M particles tested) on a single desktop computer. You can find the performance tests of PhasicFlow on the wiki page.
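
The loop-level parallelism is expressed through Kokkos (listed under supporting packages below), so a single loop body can run serially, on OpenMP threads, or on a CUDA device depending on the backend chosen at compile time. The sketch below is a minimal, illustrative Kokkos kernel, not PhasicFlow's actual API; the view names and the explicit Euler update are assumptions made only for illustration.

// Illustrative only: a Kokkos-style loop-level parallel kernel.
// PhasicFlow's own containers and kernels differ; this sketch just shows
// how one particle loop can be written once and executed in serial,
// on OpenMP threads, or on a CUDA device.
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[])
{
    Kokkos::initialize(argc, argv);
    {
        const int numParticles = 1000000;

        // Positions and velocities live in the default memory space
        // (host memory for Serial/OpenMP, device memory for CUDA).
        Kokkos::View<double*> x("x", numParticles);
        Kokkos::View<double*> v("v", numParticles);

        const double dt = 1.0e-5;

        // One parallel loop over all particles: an explicit Euler update.
        Kokkos::parallel_for(
            "integratePositions",
            numParticles,
            KOKKOS_LAMBDA(const int i) { x(i) += dt * v(i); });

        Kokkos::fence();
    }
    Kokkos::finalize();
    return 0;
}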

MPI parallelization with dynamic load balancing is under development. With this level of parallelization, PhasicFlow will be able to leverage the computational power of multi-GPU workstations and clusters of distributed-memory CPUs. In summary, PhasicFlow has six execution modes:

  1. Serial on a single CPU core,
  2. Parallel on a multi-core computer/node (using OpenMP),
  3. Parallel on an NVIDIA GPU (using CUDA),
  4. Parallel on distributed-memory workstations (using MPI),
  5. Parallel on distributed-memory workstations with multi-core nodes (using MPI + OpenMP),
  6. Parallel on workstations with multiple GPUs (using MPI + CUDA).
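
The first three modes are the shared-memory modes and are fixed at build time, presumably through the Kokkos backend the code is compiled against (Serial, OpenMP, or CUDA); the MPI-based modes are still under development. The small, hypothetical program below is not part of PhasicFlow; it simply reports which Kokkos execution spaces a given build will use, which is a quick way to confirm whether kernels will run on a single core, on all CPU cores, or on a GPU.

// Hypothetical helper, not part of PhasicFlow: print the Kokkos execution
// spaces selected at compile time. With the Serial backend both lines show
// "Serial"; with OpenMP they show "OpenMP"; with CUDA the default execution
// space is "Cuda" while the host space stays a CPU backend.
#include <Kokkos_Core.hpp>
#include <iostream>

int main(int argc, char* argv[])
{
    Kokkos::initialize(argc, argv);
    std::cout << "Default execution space:      "
              << Kokkos::DefaultExecutionSpace::name() << '\n'
              << "Default host execution space: "
              << Kokkos::DefaultHostExecutionSpace::name() << '\n';
    Kokkos::finalize();
    return 0;
}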

How to build?

You can build PhasicFlow for CPU or GPU execution. The latest release of PhasicFlow is v-0.1. Here is a complete, step-by-step procedure for building phasicFlow-v-0.1.

Online code documentation

You can find full documentation of the code, its features, and other related materials in the online documentation of the code.

How to use PhasicFlow?

You can navigate to the tutorials folder in the phasicFlow repository to see some simulation case setups. If you need a more detailed description, visit the tutorials on our wiki page.

PhasicFlowPlus

PhasicFlowPlus is an extension of PhasicFlow for simulating particle-fluid systems using resolved and unresolved CFD-DEM. See the repository of this package.

Supporting packages

  • Kokkos from National Technology & Engineering Solutions of Sandia, LLC (NTESS)
  • CLI11 1.8 from the University of Cincinnati.

How to cite PhasicFlow

If you are using PhasicFlow in your research or industrial work, cite the following article:

@article{NOROUZI2023108821,
  title = {PhasicFlow: A parallel, multi-architecture open-source code for DEM simulations},
  journal = {Computer Physics Communications},
  volume = {291},
  pages = {108821},
  year = {2023},
  issn = {0010-4655},
  doi = {10.1016/j.cpc.2023.108821},
  url = {https://www.sciencedirect.com/science/article/pii/S0010465523001662},
  author = {H.R. Norouzi},
  keywords = {Discrete element method, Parallel computing, CUDA, GPU, OpenMP, Granular flow}
}