Massive amount of memory (RAM) required for solve

Hi there,

I am skeptical about the amount of memory required to solve my model.

I have a 3D model consisting mostly of SOLID186 elements (520,936) with some SHELL181 elements (10,580), for a total of 2,160,651 nodes. I also have two bonded contacts and one frictionless contact in the model, which add 45,366 contact elements. It is a Static Structural model.

When I run my model, I receive this error:

 *** ERROR ***                           

 There is not enough memory for the Distributed Sparse Matrix Solver to proceed using the out-of-core memory mode.  The total memory required by all processes = 40993 MB.  The total physical memory that is available on the system = 15435 MB.  Please decrease the model size, or run this model on another system with more physical memory.

Does 40GB of memory seem reasonable for a model of this size?

Section '5.2 Types of Solvers' of the ANSYS manual says that out-of-core mode typically requires around 1 GB per million DOFs, which means that I should only need ~6.5GB (3 DOFs per node). 
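
For reference, here is the back-of-the-envelope arithmetic behind my ~6.5 GB figure (just a rough sketch in Python, based on that rule of thumb):

    # Rough estimate of out-of-core solver memory using the
    # ~1 GB per million DOF rule of thumb from '5.2 Types of Solvers'.
    nodes = 2_160_651
    dofs_per_node = 3            # UX, UY, UZ for a structural solve
    gb_per_million_dof = 1.0     # rule-of-thumb value quoted from the manual

    total_dofs = nodes * dofs_per_node
    estimated_gb = total_dofs / 1e6 * gb_per_million_dof

    print(f"Total DOFs:       {total_dofs:,}")         # ~6.5 million
    print(f"Estimated memory: {estimated_gb:.1f} GB")  # ~6.5 GB vs. the 40 GB reported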

Let me know if attaching my model would help.

Does the excessive memory requirement suggest that there is something wrong with my model, or is this amount of required memory reasonable?

I only have two other warnings in my model, but I think that I can ignore them:

1. Element shape checking is currently inactive.  Issue SHPP,ON or SHPP,WARN to reactivate, if desired.  

2. Material number 29 (used by element 575553 ) should normally have at least one MP or one TB type command associated with it.  Output of energy by material may not be available. 

 

Kind regards,

Kai-Yeung

 

Comments

  • peteroznewman Member
    edited January 2019

    Kai-Yeung,

    It is best to avoid exceeding 2 million nodes in a structural model. The model I am working on now was just over 1 million nodes, and the solver was going to run it out-of-core, so I stopped that solve and increased the element size to get down to 800,000 nodes. Now the Distributed Direct Sparse Solver is running in-core on 14 cores on a computer with 192 GB of RAM.

    How many cores are on the computer and how many did you request?

    How much RAM is on the computer?

    Regards,
    Peter

  • kaiyeungli Member
    edited January 2019

    Hi Peter,

    Thanks for your feedback.

    The virtual machine I used has 16 cores and 32GB of RAM.

    Because there is contact loading, I would rather not make the mesh coarser, if possible.

    Do you mind having a look at my model when you get the chance? It is a model of a physical pavement test setup. A large container/bin is filled with a 3-layer pavement, which is subjected to contact loading on the top surface. There is a foreign object buried inside the pavement. There is a bonded contact between the outer pavement surface and the bin, and between the pavement and the foreign object buried inside it. The simulated vertical wheel load is applied through a frictionless contact. 

    I attached the model to this post.

    If 40GB is indeed the memory required, then I will have to either increase element size or request more RAM on the virtual machine.

    Kind regards,

    Kai-Yeung

  • peteroznewman Member
    edited January 2019

    Kai-Yeung,

    The 40 GB is correct: that is what this model requires to solve using a lot of disk access (out-of-core), which could be slow. 
    It really needs 293 GB of RAM to solve without a lot of disk access (in-core). Cut the model size to 1/4 as described below.

     DISTRIBUTED SPARSE MATRIX DIRECT SOLVER.
      Number of equations =   6483831,    Maximum wavefront =    522

      Local memory allocated for solver              =      3.442 GB
      Local memory required for in-core solution     =     18.836 GB
      Local memory required for out-of-core solution =      2.675 GB

      Total memory allocated for solver              =     52.397 GB
      Total memory required for in-core solution     =    293.601 GB
      Total memory required for out-of-core solution =     40.414 GB

     *** WARNING ***                         CP =     188.090   TIME= 23:47:03
     The Distributed Sparse Matrix Solver is currently running in the       
     out-of-core memory mode.  This memory mode may provide significantly   
     worse performance compared to the in-core memory mode, depending on    
     the amount of available system memory and I/O speed.  Please monitor   
     the solver performance to ensure that the large amount of I/O to the   
     solver files does not create a bottleneck for performance.             
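
    To relate those totals to your hardware, here is a rough illustrative check (plain arithmetic in Python, not anything the solver itself does) comparing the reported requirements to the RAM on your virtual machine:

      # Compare the solver's reported memory totals to the available RAM
      # to see which memory mode (if any) the machine can support.
      available_ram_gb = 32.0    # RAM on the virtual machine
      in_core_gb = 293.601       # "Total memory required for in-core solution"
      out_of_core_gb = 40.414    # "Total memory required for out-of-core solution"

      if available_ram_gb >= in_core_gb:
          print("Enough RAM for the fast in-core mode.")
      elif available_ram_gb >= out_of_core_gb:
          print("Only enough RAM for the slower out-of-core mode.")
      else:
          print("Not enough RAM even for out-of-core: reduce the model or add RAM.")
      # With 32 GB available, this model cannot even run out-of-core (40.4 GB needed).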

    It seems that this model has two planes of symmetry. You should slice the model along the two planes through its center and solve 1/4 of the model using symmetry.

    Regards,
    Peter

  • kaiyeungli Member
    edited January 2019

    Hi Peter,

    Thanks for looking at my model. So the 40GB memory required is correct.

    It is correct that my model currently has two planes of symmetry because the load is applied in the middle of the pavement, directly on top of the buried object. However, I also want to see the effect of offsetting the load from the center of the pavement, in which case the model won't be symmetrical.

    Can you please explain the difference between the two sets of values above? How did you obtain this information? From 'Solution Information'?

    Kind regards,

    Kai-Yeung

  • peteroznewman Member
    edited January 2019

    Hi Kai-Yeung,

    Yes, if you look in the Solution Information folder after the Solve has started (or finished), these lines will be listed in the Solution Output. I don't know what Local memory means; I only pay attention to Total memory, which has a direct bearing on whether the model will run in-core (faster) or out-of-core (slower).

    Regards,
    Peter

  • kaiyeungli Member
    edited January 2019

    Hi Peter,

    I see.

    One last question: do you know why there are two sets of numbers for the same information? i.e., there are two numbers for 'Local memory allocated for solver': 3.442 GB and 52.397 GB.

    Kind regards,

    Kai-Yeung

  • peteroznewman Member
    edited January 2019

    I think the second number belongs to the next line: 'Total memory allocated for solver'.

  • kaiyeungli Member
    edited January 2019

    Ah right, sorry, I misread it somehow. My mistake.

     

    Thanks for your time Peter, I appreciate it.

  • peteroznewman Member
    edited January 2019

    You can close this discussion by marking a post with Is Solution.

  • sk_cheah Member
    edited January 2019

    Hi Kai Yeung,

    I would suggest slicing the body in SpaceClaim (slice-and-dice) with shared topology, so that only a small volume gets a fine mesh and the rest of the model gets a coarse mesh. Also see if you can use a Match Control to mesh the shell to the adjacent solid.

    As an aside, please turn large deflection on and change your loading from load-controlled to displacement-controlled.


    Kind regards,
    Jason

  • Goutham18 Member
    edited May 17

    Hello Mr. peteroznewman,

    I want to know how to determine whether my laptop can run a model with this many nodes. Is that possible? Please give me more info on this, as I have an issue where my solver says I am short of 18 MB of virtual memory. I am running Ansys 2019 R2 on an Acer Predator Helios 300 with 16 GB RAM, a 500 GB SSD and an i7 processor.

    Thank you in advance.

  • peteroznewman Member
    edited May 18

    1) Use the Direct Solver.

    2) Look at the Solution Output to find the table of memory required for solution.

    Insert a screen snapshot of that table as was done above in this thread.
