Fluids

Using distributed nodes for Structural solver of FSI

    • BiomedModeller
      Subscriber
Hi all,
      I am trying to run a multiphysics (FSI) analysis on a cluster through the CFX GUI. Currently I can run the structural simulation in parallel on a single node with N partitions (using the -np option). However, I am having a hard time getting Mechanical APDL to run in distributed parallel mode across nodes (here, 3 nodes): I can run the fluid part in distributed parallel mode on multiple nodes, but I can't do the same for the structural part.
      Can anyone tell me how I can use multiple nodes, each with N partitions (distributed nodes), for the structural simulation? (A sketch of a distributed launch line follows this post.)
      A quick follow-up question: my fluid and structural parts contain only 2M and 300K elements, respectively. Currently I am using 120 partitions (3 nodes of 40 CPUs each) for the fluid and 40 partitions (1 node, as explained above) for the structural side. Why is my multiphysics simulation still very slow? I decreased the number of partitions for the fluid simulation, but the effect on run time is not significant.
      Thanks in advance.
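      [Editor's note: a minimal sketch of what a distributed MAPDL launch line can look like. The executable name (here ansys232), hostnames, core counts, and input/output file names are placeholders; the exact flags depend on your version and on how your coupling setup passes arguments to the MAPDL launcher.]

          # Single node, 40 cores (the mode that currently works):
          ansys232 -dis -b -np 40 -i structure.dat -o structure.out

          # Distributed across 3 nodes, 40 cores each, listed with -machines
          # (node1, node2, node3 are hypothetical hostnames):
          ansys232 -dis -b -machines node1:40:node2:40:node3:40 -i structure.dat -o structure.out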
    • Rob
      Ansys Employee
Or possibly a Mechanical question?
    • peteroznewman
      Subscriber
On a compute cluster containing 3 nodes, each with 40 cores, the speed of communication between the cores of a single node is substantially higher than the speed of communication between nodes.
      The best practice for allocating cores on a single node is to assign at most n-1 cores to the problem. This leaves one core free to manage system processes while the other n-1 cores each work on a piece of the model.
      I recommend you do performance testing. It is not guaranteed that running with more cores will reduce the elapsed time! There is overhead in communicating intermediate results between cores, and if you assign too many cores, the increase in overhead outweighs the decrease in compute time per core. I have done performance testing on structural models and seen the elapsed time go down as I added cores, then start to increase. I haven't done that for fluid solvers; they tend to scale better than structural models, meaning the elapsed time continues to decrease as cores are added.
      The size of a structural problem is usually stated in node count or number of equations, not number of elements, but 300K is not a large element count. I would not be surprised if the structural model alone solved in less elapsed time on 20 cores than it does on 40.
      If you are following the FSI best practice of building your model up piece by piece, then you have a structural-only model and a fluid-only model that can each run alone. Set up the structural model so that it only runs for a limited time. If it is a transient that runs for 60 seconds, simulate 0.6 seconds. If it is a static model that needs 100 iterations to converge, insert the NCNV,,,10 command (the third argument, ITLIM, caps the cumulative iteration count), which forces the solver to stop after only 10 iterations. Run this shortened structural model on 4, 8, 16, and 32 cores and reply with the elapsed time for each run.
      Do the same for the fluid-only problem: set it up to stop after a fixed number of iterations that takes only a few minutes to solve. I don't know how you configure the nodes and cores on the CFX solver, but at least try running on 1, 2, and 3 nodes and reply with the elapsed time for each run. (A scripted sketch of both sweeps follows this post.)
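      [Editor's note: a minimal scripted sketch of the timing sweeps described above, assuming a shortened MAPDL input file named struct_short.dat and a CFX definition file named fluid.def; the executable names (ansys232, cfx5solve), hostnames, and file names are placeholders and version-dependent.]

          ! APDL: cap the nonlinear solution at 10 cumulative iterations
          /SOLU
          NCNV,,,10        ! ITLIM = 10 terminates the run after 10 iterations
          SOLVE

          # Shell: time the shortened structural model on 4, 8, 16, and 32 cores
          for n in 4 8 16 32; do
              ansys232 -dis -b -np $n -i struct_short.dat -o struct_np$n.out
          done
          grep -i "elapsed time" struct_np*.out   # timing summary near the end of each output file

          # CFX: time the shortened fluid model on 1, 2, and 3 nodes
          cfx5solve -def fluid.def -par-dist "node1*40"
          cfx5solve -def fluid.def -par-dist "node1*40,node2*40"
          cfx5solve -def fluid.def -par-dist "node1*40,node2*40,node3*40"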