September 6, 2019 at 5:23 pm — Raciel, Subscriber
I am working on the structural analysis of heat exchangers under very high temperature conditions. Thermal expansion is an important issue in this kind of device, so both thermal loads and internal pressure are considered. I have a relatively big domain with around 3 million nodes. How much memory and how many CPU cores are recommended to obtain my solution? Where can I set the BCSOPTION or DSPOPTION command?
Thank you in advance.
September 7, 2019 at 10:58 am — peteroznewman, Subscriber
Are you working in Workbench and Mechanical or are you using APDL? What release of ANSYS are you using?
Here are old instructions for setting the number of cores and turning on Distributed ANSYS in Mechanical.
In the new user interface of 2019 R2, it is simpler: those settings are right on the ribbon.
If you use the settings above, there is almost never a need to use the DSPOPTION in Mechanical.
Distributed solves are almost always faster than Shared Memory solves, so you won't need BCSOPTION (which applies only to the shared-memory sparse solver).
The only time I have used DSPOPTION is to force the solver to run in-core (a Commands object containing `DSPOPTION,,INCORE` does this) when its own logic chose out-of-core, but I knew the problem would fit in-core.
I have a 16-core computer and did some benchmarking on a range of different models, solving them on 2, 4, 8, and 16 cores. There are diminishing returns going from 8 to 16 cores for Structural solutions; CFD models continue to scale well. I have done no testing on Thermal models. I also have two solver licenses, so I sometimes run two jobs in parallel at 8 cores each. The two jobs running in parallel on 8 cores each finish in nearly half the time compared with running the same two jobs sequentially on 16 cores. I also found that using 16 cores results in a longer solve time than using 15 cores, so it is best not to use every core on the computer for the solver.
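The diminishing-returns behavior described above can be sketched with Amdahl's law. This is a generic illustration, not a fit to those benchmarks; the parallel fraction below is a made-up value:

```python
# Rough sketch of why adding cores gives diminishing returns, using
# Amdahl's law. The parallel fraction p is a hypothetical illustrative
# value, not a measurement from the benchmarks described above.

def amdahl_speedup(p, n):
    """Ideal speedup on n cores when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90  # assumed parallel fraction (hypothetical)
for n in (2, 4, 8, 16):
    print(f"{n:2d} cores: {amdahl_speedup(p, n):.2f}x speedup")
```

Even with 90% of the work parallel, doubling from 8 to 16 cores gains far less than 2x, which matches the observation that two 8-core jobs in parallel beat two sequential 16-core jobs.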
On the topic of memory, I put as much RAM in the computer as Windows 7 would support, 192 GB, so most of my models run in-core. For your models, you can find out whether they are running in-core or out-of-core by solving with the Direct Sparse solver (not the Iterative PCG solver), configured under Analysis Settings. After the solver has run, click on the Solution Information folder in Mechanical and, in the Details, set the output to Solver Output so the solve.out file is shown in the main window. Press Ctrl-F, search for "memory", and click Next until you see something like the text below.
DISTRIBUTED SPARSE MATRIX DIRECT SOLVER.
Number of equations = 9261, Maximum wavefront = 168
Local memory allocated for solver = 12.157 MB
Local memory required for in-core solution = 11.659 MB
Local memory required for out-of-core solution = 5.591 MB
Total memory allocated for solver = 43.594 MB
Total memory required for in-core solution = 41.832 MB
Total memory required for out-of-core solution = 20.945 MB
*** NOTE *** CP = 1.859 TIME= 06:52:00
The Distributed Sparse Matrix Solver is currently running in the
in-core memory mode. This memory mode uses the most amount of memory
in order to avoid using the hard drive as much as possible, which most
often results in the fastest solution time. This mode is recommended
if enough physical memory is present to accommodate all of the solver
See where it says "Total memory allocated for solver"? Find that number in your output to get an idea of the minimum amount of RAM you should have, because you want your models to run in-core.
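If you need to check many runs, those summary lines can be pulled out of solve.out with a short script. This is a hypothetical helper, not an Ansys utility; it only assumes the lines look like the sample above:

```python
# Hypothetical helper (not an Ansys tool): scan solve.out text for the
# sparse-solver memory summary lines, so you can compare the in-core
# requirement against your installed RAM.
import re

def solver_memory_mb(solve_out_text):
    """Return a dict of {summary line description: value in MB}."""
    pattern = re.compile(
        r"(Total|Local) memory "
        r"(allocated for solver|required for in-core solution|"
        r"required for out-of-core solution)\s*=\s*([\d.]+)\s*MB"
    )
    return {f"{m.group(1)} memory {m.group(2)}": float(m.group(3))
            for m in pattern.finditer(solve_out_text)}

sample = """
 Total memory allocated for solver              =     43.594 MB
 Total memory required for in-core solution     =     41.832 MB
 Total memory required for out-of-core solution =     20.945 MB
"""
mem = solver_memory_mb(sample)
print(mem["Total memory required for in-core solution"])  # 41.832
```

For a real model you would read the whole solve.out file into `solve_out_text` instead of the sample string.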
If I get the message that the solver is running out-of-core, I go back and reduce the mesh density to use fewer nodes until the problem runs in-core. Sometimes I can't get there, so I might use the Iterative PCG solver, which requires less memory. Even with a small amount of installed RAM, ANSYS can still provide a solution; it will just take more time.
I maintain accuracy by having closely spaced nodes in areas of high stress (or temperature) gradient, and using larger elements where the gradient is low.
September 13, 2019 at 6:35 pm — Raciel, Subscriber
I am working in Mechanical with ANSYS 18.1. I have verified that the solver automatically runs in-core when possible, and out-of-core if the memory required for in-core mode is higher than the system memory. In those cases, I have had an error referring to low disk space, even though the space is sufficient.
Anyway, I am trying to get all the geometric domains in my study, which have around 2.5 million nodes, to solve in in-core mode. I have 128 GB of memory, and I have seen that each GB of memory can solve models with around 20,000 nodes.
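The ~20,000 nodes per GB figure mentioned above can be turned into a quick back-of-envelope estimate. This is a rough heuristic from this thread, not an Ansys sizing formula; actual requirements depend on element type, DOF count, and wavefront, so always confirm against the numbers in solve.out:

```python
# Back-of-envelope RAM estimate for the direct sparse solver running
# in-core, using the rough ~20,000 nodes per GB figure observed in this
# thread (a heuristic, not an official Ansys sizing rule).

NODES_PER_GB = 20_000

def ram_estimate_gb(n_nodes, nodes_per_gb=NODES_PER_GB):
    """Estimated GB of RAM to run a model of n_nodes in-core."""
    return n_nodes / nodes_per_gb

print(ram_estimate_gb(2_500_000))  # 125.0 -> close to the 128 GB installed
print(ram_estimate_gb(3_000_000))  # 150.0 -> the original 3M-node question
```

By this estimate, the 3-million-node model from the original question would want on the order of 150 GB to stay in-core with the direct sparse solver.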