Parallel, highly efficient code (CPU and GPU) for DEM and CFD-DEM simulations.

PhasicFlow is a parallel C++ code for performing DEM simulations. It can run on shared-memory, multi-core computational units such as multi-core CPUs or GPUs (for now, it works on CUDA-enabled GPUs). The parallelization method mainly relies on loop-level parallelization on a shared-memory computational unit. You can build and run PhasicFlow in serial mode on regular PCs, in parallel mode on multi-core CPUs, or build it for a GPU device to offload computations to the GPU. In its current status, you can simulate millions of particles (up to 80M particles tested) on a single desktop computer. You can see the performance tests of PhasicFlow on the wiki page.

MPI parallelization with dynamic load balancing is under development. With this level of parallelization, PhasicFlow can leverage the computational power of multi-GPU workstations or clusters with distributed-memory CPUs. In summary, PhasicFlow can have six execution modes (a brief run-time sketch follows the list):

  1. Serial on a single CPU core,
  2. Parallel on a multi-core computer/node (using OpenMP),
  3. Parallel on an NVIDIA GPU (using CUDA),
  4. Parallel on a distributed-memory workstation (using MPI),
  5. Parallel on distributed-memory workstations with multi-core nodes (using MPI+OpenMP),
  6. Parallel on workstations with multiple GPUs (using MPI+CUDA).
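
For modes 1-3, which are available now, the execution mode is fixed when the code is built, and the same solver executable is simply run in the case folder. The lines below are only a hedged illustration of that; the solver name (sphereGranFlow) and the use of the standard OpenMP thread-count variable are assumptions based on the typical tutorial workflow.

  # serial or CUDA build: run the solver executable from inside the case folder
  sphereGranFlow
  # OpenMP build: optionally set the number of CPU threads first
  export OMP_NUM_THREADS=8
  sphereGranFlow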

How to build?

You can build PhasicFlow for CPU and GPU execution. The latest release of PhasicFlow is v-0.1. Here is a complete step-by-step procedure for building phasicFlow-v-0.1.
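
As a rough, minimal sketch of such a build (an out-of-source CMake build using only standard git/cmake/make commands), the steps look roughly as follows; the repository URL and any backend-selection options (serial, OpenMP, CUDA) are assumptions here, so follow the linked step-by-step guide for the exact flags of your release.

  # get the source code (URL assumed; check out the release you need)
  git clone https://github.com/PhasicFlow/phasicFlow.git
  # configure an out-of-source build; backend-selection flags (serial/OpenMP/CUDA)
  # are documented in the step-by-step build guide
  mkdir build && cd build
  cmake ../phasicFlow -DCMAKE_BUILD_TYPE=Release
  # compile using all available cores
  make -j$(nproc)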

Online code documentation

You can find full documentation of the code, its features, and other related materials in the online code documentation.

How to use PhasicFlow?

You can navigate to the tutorials folder in the phasicFlow folder to see some simulation case setups. If you need a more detailed description, visit the tutorials on our wiki page.
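
As a hedged sketch of the typical workflow in a tutorial case, the case is usually prepared with pre-processing utilities, run with a solver, and converted for visualization. The names below (particlesPhasicFlow, geometryPhasicFlow, sphereGranFlow, pFlowToVTK) follow the tutorial cases, but check the README of the tutorial you run for its exact sequence.

  # from inside a copy of a tutorial case folder:
  particlesPhasicFlow   # create the initial particle fields
  geometryPhasicFlow    # create the geometry (walls) of the case
  sphereGranFlow        # run the DEM solver used in that tutorial
  pFlowToVTK            # convert results to VTK files for visualization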

PhasicFlowPlus

PhasicFlowPlus is an extension of PhasicFlow for simulating particle-fluid systems using resolved and unresolved CFD-DEM. See the repository of this package.

Supporting packages

  • Kokkos from National Technology & Engineering Solutions of Sandia, LLC (NTESS)
  • CLI11 1.8 from the University of Cincinnati.

How to cite PhasicFlow

If you are using PhasicFlow in your research or industrial work, cite the following article:

@article{NOROUZI2023108821,
title = {PhasicFlow: A parallel, multi-architecture open-source code for DEM simulations},
journal = {Computer Physics Communications},
volume = {291},
pages = {108821},
year = {2023},
issn = {0010-4655},
doi = {10.1016/j.cpc.2023.108821},
url = {https://www.sciencedirect.com/science/article/pii/S0010465523001662},
author = {H.R. Norouzi},
keywords = {Discrete element method, Parallel computing, CUDA, GPU, OpenMP, Granular flow}
}