Parallel, highly efficient code (CPU and GPU) for DEM and CFD-DEM simulations.

PhasicFlow

PhasicFlow is a parallel C++ code for performing DEM simulations. It can run on shared-memory, multi-core computational units such as multi-core CPUs or GPUs (for now, it works on CUDA-enabled GPUs). The parallelization method mainly relies on loop-level parallelism on a shared-memory computational unit. You can build and run PhasicFlow in serial mode on a regular PC, in parallel mode on a multi-core CPU, or build it for a GPU device to off-load computations to the GPU. In its current status, you can simulate millions of particles (up to 32M particles tested) on a single desktop computer. You can see the performance tests of PhasicFlow on the wiki page.
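As a rough illustration of what loop-level parallelism means here, the sketch below shows a single Kokkos kernel (Kokkos is the portability layer listed under supporting packages) that distributes a per-particle loop over CPU cores or GPU threads, depending on the backend chosen at build time. This is not PhasicFlow's actual API; the view names, particle count, and time step are hypothetical.

```cpp
// Illustrative only: loop-level parallelism with Kokkos, not PhasicFlow's actual code.
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[])
{
    Kokkos::initialize(argc, argv);
    {
        const int numPar = 1000000;                          // hypothetical number of particles
        Kokkos::View<double*> velocity("velocity", numPar);  // per-particle velocity (zero-initialized)
        Kokkos::View<double*> acceleration("acceleration", numPar);
        const double dt = 1.0e-5;                            // hypothetical integration time step

        // one loop iteration per particle, executed in parallel on the selected backend
        Kokkos::parallel_for("integrateVelocity", numPar, KOKKOS_LAMBDA(const int i){
            velocity(i) += dt * acceleration(i);
        });
    }
    Kokkos::finalize();
    return 0;
}
```

Because the same kernel source compiles for serial, OpenMP, and CUDA backends, this style of parallelization lets the physics code stay unchanged across CPU and GPU builds.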

How to build?

You can build PhasicFlow for CPU and GPU execution. Here is a complete step-by-step procedure.
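As a minimal sketch, a typical out-of-source CMake build looks like the following; the exact configuration options that select the serial, OpenMP, or CUDA backend are documented in the build guide (doc/howToBuild.md in this repository).

```bash
# Illustrative only: a generic out-of-source CMake build.
# Consult doc/howToBuild.md for the exact options for serial, multi-core CPU, or GPU builds.
cd phasicFlow
mkdir build && cd build
cmake ..        # add the backend-selection options from the build guide here
make -j 4       # adjust the number of parallel build jobs to your machine
```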

How to use PhasicFlow?

You can navigate to the tutorials folder in the phasicFlow directory to see some simulation case setups. If you need a more detailed description, visit the tutorials section of our wiki page.
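For example (directory names are only illustrative; each sub-directory under tutorials is a self-contained case setup):

```bash
# Browse the shipped case setups; see the wiki tutorials for how to run each case.
cd phasicFlow/tutorials
ls
```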

Supporting packages

  • Kokkos from National Technology & Engineering Solutions of Sandia, LLC (NTESS)
  • CLI11 1.8 from University of Cincinnati.

Future extensions

Parallelization

  • Extending the code to use the OpenMPTarget backend, so that more GPU brands can be used for off-loading the computations.
  • Extending high-level parallelization by implementing space partitioning and load balancing for multi-GPU computations and for running PhasicFlow on distributed-memory supercomputers.