Parallel, highly efficient code (CPU and GPU) for DEM and CFD-DEM simulations.

PhasicFlow

PhasicFlow is a parallel C++ code for performing DEM simulations. It runs on shared-memory, multi-core computational units such as multi-core CPUs or GPUs (for now, CUDA-enabled GPUs only). The parallelization mainly relies on loop-level parallelism on a shared-memory computational unit. You can build and run PhasicFlow in serial mode on a regular PC, in parallel mode on a multi-core CPU, or build it for a GPU device to offload computations to the GPU. In its current state, you can simulate millions of particles (up to 32M particles tested) on a single desktop computer. You can see the performance tests of PhasicFlow on the wiki page.
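
To give a feel for this programming model, below is a minimal sketch (not code taken from PhasicFlow) of a loop-level parallel kernel written with Kokkos, the portability library listed under supporting packages; the field name, kernel body and particle count are illustrative assumptions. The same loop body is compiled for the host (serial/OpenMP) or for a CUDA device, depending on the backend chosen when Kokkos is built.

```cpp
// Minimal sketch of loop-level parallelization with Kokkos (illustrative only).
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[])
{
    Kokkos::initialize(argc, argv);
    {
        const int numPar = 1000000;                     // number of particles (illustrative)
        Kokkos::View<double*> vel("velocity", numPar);  // allocated in the default memory space
        const double dt = 1.0e-5, g = -9.81;

        // one loop body, executed in parallel on whichever device Kokkos targets
        Kokkos::parallel_for(
            "integrateVelocity", numPar,
            KOKKOS_LAMBDA(const int i) {
                vel(i) += g * dt;                       // explicit Euler update of one component
            });
        Kokkos::fence();                                // wait for the kernel to finish
    }
    Kokkos::finalize();
    return 0;
}
```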

How to build?

You can build PhasicFlow for both CPU and GPU execution. Here is a complete step-by-step procedure.
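
As a rough orientation only, a typical out-of-source CMake build might look like the sketch below; the repository URL and the configuration options are assumptions here, and the prerequisites (including Kokkos) and the exact options for serial, multi-core CPU or CUDA builds are what the step-by-step guide covers.

```bash
# Illustrative sketch only -- follow the step-by-step build guide for the
# actual prerequisites (e.g. Kokkos) and the CMake options for CPU/GPU builds.
git clone https://github.com/PhasicFlow/phasicFlow.git   # repository URL assumed
cd phasicFlow
cmake -B build -S .          # configure; pass backend/GPU options as documented
cmake --build build -j 4     # compile using 4 parallel jobs
```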

How to use PhasicFlow?

You can navigate into the tutorials folder in the phasicFlow directory to see some simulation case setups. If you need a more detailed description, visit the tutorials on our wiki page.
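
The hypothetical session below outlines how a tutorial case is typically driven: set up the geometry and the initial particle fields with pre-processing utilities, then run the solver. The case path and all executable names are assumptions; the ReadMe of each tutorial gives the exact commands.

```bash
# Hypothetical walk-through of a single tutorial case; utility and solver
# names are assumed -- see the ReadMe of each tutorial for the exact steps.
cd tutorials/<caseName>      # any case setup from the tutorials folder
geometryPhasicFlow           # create the walls/geometry of the case (assumed utility)
particlesPhasicFlow          # create the initial particle fields (assumed utility)
<solverName>                 # run the DEM solver chosen for this case
```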

Supporting packages

  • Kokkos from National Technology & Engineering Solutions of Sandia, LLC (NTESS)
  • CLI11 1.8 from the University of Cincinnati.

Future extensions

Parallelization

  • Extending the code to use the OpenMPTarget backend, so that computations can be offloaded to a wider range of GPUs.
  • Extending high-level parallelization by implementing space partitioning and load balancing for multi-GPU computations and for running PhasicFlow on distributed-memory supercomputers.