LS Dyna

Large processing time

    • Pratik Ganorkar
      Subscriber

      Hi,

The ball-plate impact analysis that I have been running takes a very long time to process, and the processing time increases as I refine the plate mesh.

How can I reduce the processing time for a finer mesh?

      Please share your knowledge with me.

Thank you

      Pratik Ganorkar (pratik2116303@iitgoa.ac.in)

      MTech, Mechanical Engineering, IIT Goa

    • Armin Abedini
      Subscriber

      Hi Pratik,

If you cannot coarsen your mesh, there are other techniques available to reduce the computational time of an explicit dynamics analysis. For instance, you can employ mass scaling via the corresponding parameters under Analysis Settings in Workbench LS-DYNA.
      In addition, if the plate is much stiffer than the ball, you can assume the plate to behave as a rigid body.
I recommend the course below for a more detailed description of the techniques available to lower computational time:

      Explicit Dynamics Theory - Ansys Innovation Course - YouTube
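If the plate can be idealized as rigid, a minimal keyword sketch might look like the following. The material ID and the steel-like density and elastic constants are illustrative assumptions only; note that E and PR are still required on *MAT_RIGID because they are used to compute contact stiffness:

```
*MAT_RIGID
$ Illustrative rigid plate material; MID, density, E, and PR are placeholders
$#     mid        ro         e        pr
         2    7850.0  2.10E+11       0.3
$#     cmo      con1      con2
       0.0
$#     lco
       0.0
```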

    • Andreas Koutras
      Ansys Employee

      Hello, 

      Here are some ideas for reducing the runtime of an explicit analysis in LS-DYNA.

1)    Use mass scaling to increase the explicit critical time step size by adding artificial mass to the smallest elements of the model. The amount of added non-physical mass is controlled by specifying the desired new time step size through DT2MS on *CONTROL_TIMESTEP. The DT2MS<0 method described in the LS-DYNA Keyword Manual Vol. I is the recommended one: with it, mass is added only to those elements whose time step would otherwise be less than TSSFAC*abs(DT2MS), so the initial time step will not drop below TSSFAC*abs(DT2MS). The amount of added mass is reported in the d3hsp, message, and glstat files. Use mass scaling with caution, as it alters the inertia properties of the model.
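As a rough sketch of the DT2MS<0 approach on *CONTROL_TIMESTEP (the target step of 1.0E-7 s and the TSSFAC of 0.9 are illustrative values only and must be chosen for the model at hand):

```
*CONTROL_TIMESTEP
$ Negative DT2MS: add mass only to elements whose step would fall
$ below TSSFAC*abs(DT2MS); values here are placeholders
$#  dtinit    tssfac      isdo    tslimt     dt2ms      lctm     erode     ms1st
       0.0       0.9         0       0.0  -1.0E-07         0         0         0
```

After the run, check the added mass reported in the glstat file to confirm it remains a small fraction of the model mass.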

2)    To improve model efficiency, use the default element formulations (type 1 bricks and type 2 shells), which are reduced-integration elements. Also consider more efficient contacts, such as those with SOFT=0 or SOFT=1 (SOFT=1 is recommended over SOFT=0), and replace two-way contacts with ONE_WAY contacts wherever possible. Note that the SOFT=2 and MORTAR contacts can capture more detail than the SOFT=0/1 contacts, but they are more computationally expensive.
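For example, a one-way contact using the soft-constraint formulation could be sketched as below; the part IDs and friction coefficients are placeholders, and trailing blank fields take their defaults:

```
*CONTACT_AUTOMATIC_ONE_WAY_SURFACE_TO_SURFACE
$ SSTYP/MSTYP=3: slave/master sides defined by part ID (placeholder IDs)
$#    ssid      msid     sstyp     mstyp
         1         2         3         3
$# static and dynamic friction (placeholders)
$#      fs        fd
       0.2       0.2
$# penalty scale factors left at defaults
$#     sfs       sfm
       1.0       1.0
$# Optional Card A: SOFT=1 selects the soft-constraint penalty formulation
         1
```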

      3)    Increase the number of analysis cores. The efficiency of the SMP (shared memory parallel) solver is limited to just a few cores, maybe 8 cores give or take. On the other hand, the MPP (distributed memory parallel) solver can make use of a large number of cores, but it will be efficient only if there is a sufficient number of elements assigned to each core because of the additional cost that the message passing entails. The rule of thumb for specifying the number of cores in an MPP analysis is to have at least 10,000 elements per core. 
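A typical MPP launch from the command line might look like this; the executable name, memory setting, and input file name are illustrative and vary by installation, and 32 cores would suit a model of roughly 320,000 elements or more by the rule of thumb above:

```
mpirun -np 32 mppdyna i=ball_plate.k memory=200m
```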

4)    In MPP, use a higher instruction set than sse2, such as avx2 or avx512, if your CPU model supports it. A higher instruction set can further reduce the explicit analysis runtime, although not dramatically. MPP solvers for avx2 and avx512 are currently available in the Linux versions only.

5)    Consider using a single-precision solver. Although double precision is always recommended as the more accurate option, single precision is much faster and also commonly used. Ideally, switch to single precision only after its results have been validated against a double-precision run.

    • Andreas Koutras
      Ansys Employee

      Some further tips are here:

      https://www.dynasupport.com/faq/general/i-have-very-long-run-times.-what-can-i-do

    • Pratik Ganorkar
      Subscriber

Thank you, Andreas and Armin.
