LS Dyna

LS-Dyna Hybrid solver tuning help

    • smitchuson
      Subscriber

      I'm working on a hybrid solver problem that I suspect I could tune better for my environment.

      I have 18 nodes, each with an Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz and 192 GB of RAM.

      I currently have the job running with the following command line:

      /cm/shared/apps/spack/opt/spack/linux-centos8-icelake/gcc-8.4.1/openmpi-4.1.1-tb3xalwpw7zzdpnhv7x7hfc23akhimpt/bin/mpirun --mca btl '^openib' -np 7 lsdyna "i=Master.key memory1=2000m ncpu=-8"

      This is fully working and I'm able to generate d3plot files, but I'd like to know whether there are extra flags I could add to make the process run better, or anything else I could try.

      Things I have tried: "-np 56", "-np 28", "-np 14", and removing the memory setting.
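
      One thing worth checking on a hybrid run is whether each MPI rank's OpenMP threads are pinned to a contiguous block of cores. The sketch below is a hypothetical variant of the command above using OpenMPI 4.x mapping flags (7 ranks x 8 threads = 56 cores per node); the flag syntax should be verified against your mpirun(1) man page before use.

      ```shell
      # Hypothetical hybrid launch: 7 MPI ranks per node, 8 OpenMP threads
      # each (7 x 8 = 56 cores). "pe=8" asks OpenMPI to give every rank 8
      # processing elements, and --bind-to core pins threads to them.
      # Assumed: OpenMPI 4.x flag spelling; check your mpirun man page.
      CMD="mpirun --mca btl ^openib --map-by ppr:7:node:pe=8 --bind-to core -np 7 lsdyna i=Master.key memory1=2000m ncpu=-8"
      echo "$CMD"
      ```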

    • tslavik
      Ansys Employee
      Please do a scaling study with pure MPP LS-DYNA first. If it scales well above 96 cores, then you might consider Hybrid MPP. You will not see a speedup between pure MPP and Hybrid MPP at lower core counts; Hybrid MPP scales better when the total core count is larger than 96. You will really see the advantage when you use hundreds of cores, where pure MPP will plateau due to communication overhead. If you write back, please show the MPP scaling results and the banner in the d3hsp file. Also, tell us how many deformable elements comprise the model and whether you are using something other than the standard transient dynamics solver (explicit integrator) ... for example, electromagnetics, implicit, CESE, incompressible CFD, etc.
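      A scaling study like the one suggested above could be scripted roughly as follows. The mpirun path is the one from the original command; the core counts, log names, and the `lsdyna` wrapper name are assumptions to adapt to your cluster. With DRY_RUN=1 (the default here) the loop only prints the commands so the sketch can be inspected safely.

      ```shell
      # Sketch of a pure MPP scaling study: run the same deck at several
      # core counts and compare wall times. Core counts are assumptions.
      MPIRUN=/cm/shared/apps/spack/opt/spack/linux-centos8-icelake/gcc-8.4.1/openmpi-4.1.1-tb3xalwpw7zzdpnhv7x7hfc23akhimpt/bin/mpirun
      DRY_RUN=${DRY_RUN:-1}
      for NP in 28 56 112 224; do
        CMD="$MPIRUN --mca btl ^openib -np $NP lsdyna i=Master.key memory=2000m"
        if [ "$DRY_RUN" = "1" ]; then
          echo "$CMD"                          # inspect before running
        else
          /usr/bin/time -o timing_np${NP}.log $CMD   # record wall time
        fi
      done
      ```

      Plotting wall time against NP then shows where pure MPP stops scaling, which is the decision point for Hybrid MPP.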
    • smitchuson
      Subscriber
      If I want to use pure MPP, how should I handle memory per core? If I have 128 GB of RAM and 56 cores available per node, do I then need to do
      -np 56 memory=2000m

      Also from the researcher:
      30 million standard shell and solid deformable elements. I am not using anything special beyond the above.
      Thanks
    • tslavik
      Ansys Employee
      It helps to know which version of LS-DYNA you're using (and whether double or single precision) - the d3hsp banner shows all the necessary info. Also, please let us know if you're using only the explicit dynamics solver or something else ... implicit, thermal, multi-physics, etc. It will be difficult to provide further guidance without this information.
      You can incrementally increase the memory until the requirement is satisfied. Or, you can simply allocate about 80% of the available resources, leaving some for the OS and other necessary background utilities; note, however, that unused memory is not released and remains unavailable while LS-DYNA is running.
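      As a back-of-envelope sketch of the 80% approach for pure MPP, using the figures from this thread (128 GB per node, 56 ranks per node): LS-DYNA's memory= value is counted in words, commonly 8 bytes per word in double precision (4 in single), so the per-rank budget works out as below. Confirm the word size against your version's documentation before relying on it.

      ```shell
      # Per-rank memory estimate for pure MPP: ~80% of node RAM divided
      # across the ranks on that node, converted to millions of words.
      # Assumption: memory= is in words, 8 bytes/word (double precision).
      NODE_RAM_GB=128
      RANKS_PER_NODE=56
      BYTES_PER_WORD=8
      USABLE_BYTES=$(( NODE_RAM_GB * 1024 * 1024 * 1024 * 80 / 100 ))
      WORDS_PER_RANK_M=$(( USABLE_BYTES / RANKS_PER_NODE / BYTES_PER_WORD / 1000000 ))
      echo "try: -np 56 lsdyna i=Master.key memory=${WORDS_PER_RANK_M}m"
      ```

      By this estimate each rank gets on the order of a few hundred million words; keeping memory=2000m for all 56 ranks would request roughly 56 x 2000m x 8 B, close to 900 GB per node, so the per-rank value has to shrink as ranks per node grow.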
      LS-DYNA does have a dynamic memory allocation feature. I think it can be used safely in a test case to find memory requirements, but do not recommend it for your production runs - for production runs you should specify the memory on the command line and disable the auto-memory feature.
      You'll know if auto-memory is incompatible with the features in your model, as the simulation will terminate abnormally shortly after initialization. The feature is enabled by setting the environment variable LSTC_MEMORY to "auto". As an example, in a Linux C shell, it is done like this:
      setenv LSTC_MEMORY auto
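      In bash or another Bourne-style shell, the equivalent is:

      ```shell
      # Bourne-shell equivalent of the csh command above
      export LSTC_MEMORY=auto
      ```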
      You'll know it's done correctly when the following message appears below the banner at the top of the d3hsp file:
      Memory option: AUTO selected
      Please make a short run (one cycle) with auto-memory, then look for the memory recommendations in the d3hsp file. Use that recommendation to set memory on the command line with the auto-memory feature DISABLED. When you disable it for production runs, you will no longer see "AUTO selected" in the d3hsp file.
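      The two-step workflow described above can be sketched as follows. File names are the LS-DYNA defaults; the exact wording of the memory lines in d3hsp varies by version, so inspect the file directly if the grep below matches nothing.

      ```shell
      # Step 1: short auto-memory run to discover the requirement.
      export LSTC_MEMORY=auto
      # ... run one cycle here, then look up the recommendation:
      [ -f d3hsp ] && grep -i "memory" d3hsp | head
      # Step 2: disable auto-memory for production and set memory explicitly.
      unset LSTC_MEMORY
      # mpirun ... lsdyna i=Master.key memory=<value from step 1>
      ```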
