Ansys Products

LS-DYNA Slow Across HPC Nodes

    • LukePollock
      Subscriber

      Hey all,

      I've been running some fluid-structure interaction simulations in LS-DYNA using the CESE module on the Gadi HPC in Australia. The system uses Open MPI 4.1.1 and MPP Hybrid LS-DYNA 13.0.0. I've been having massive slowdown issues when trying to run my simulation across multiple HPC nodes. If I run the problem on 48 cores (one node) it starts virtually instantaneously, whereas if I run it on more than 48 cores (multiple nodes) the start-up time often exceeds the wall time. The decomposition proceeds fine, but everything seems to grind to a halt after Phase 3. I've spoken to the HPC technical support team and they're not sure what the problem is: CPU usage sits around 100%, but there is very little communication between the cluster nodes. Could there be an incompatibility between LS-DYNA 13.0.0 and Open MPI 4.1.1?
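      For reference, this is roughly how the job is launched; a minimal PBS sketch for Gadi, where the project code, module name, executable path, input deck, and resource values are all illustrative rather than my exact script:

          #!/bin/bash
          #PBS -P ab12                      # NCI project code (illustrative)
          #PBS -q normal
          #PBS -l ncpus=96                  # two 48-core Gadi nodes
          #PBS -l mem=380GB
          #PBS -l walltime=04:00:00
          #PBS -l wd                        # run in the submission directory

          module load openmpi/4.1.1

          # MPP Hybrid LS-DYNA, one MPI rank per core in this sketch;
          # executable name and input deck are placeholders
          mpirun -np $PBS_NCPUS ./mppdyna_r13_hybrid i=model.k

      With ncpus=48 (one node) this starts immediately; with ncpus=96 or more it stalls after decomposition Phase 3 as described above.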

      Any help would be really appreciated!

      Thanks

    • tslavik
      Ansys Employee
      The R13.0.0 Hybrid release was built against Intel MPI 2018. Please send the information in the d3hsp banner, which shows us how your executable was built (version, platform, OS level, compiler, and so on).
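      A binary linked against Intel MPI generally cannot be launched correctly by Open MPI's mpirun, since the two implementations are not ABI-compatible; that would be consistent with ranks spinning at 100% CPU while barely communicating across nodes. The build information sits in the banner at the top of d3hsp, so something like the following pulls it out (assuming the default file name d3hsp in the run directory):

          head -n 40 d3hsp

      If the executable really was built against Intel MPI 2018, launching it under an Intel MPI runtime (on Gadi, a module along the lines of intel-mpi; the exact module name is an assumption) rather than Open MPI would be the first thing to try.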

