Fluids

GPU solver gains expected

    • Dominic de'Ath
      Subscriber

      Hello, I am trying to make the best use of the licences available to me, so I have tried the GPU solver, as I have seen it can be much more effective than simply adding CPU cores (see Unleashing the Full Power of GPUs for Ansys Fluent). For context, our licence allows up to 4 cores as standard, plus we have a few additional HPC licences shared across several modelling teams, so I want to use my 4 cores as effectively as possible before drawing on the HPC cores.

      However, I am not getting the gains I expect. I ran a test simulation of a flow system with species transport, using the coupled solver in double precision, on a model with ~2,000,000 elements. Times to run 10 iterations:

      GPU only - 367 seconds

      1 CPU only - 348 seconds

      1 CPU, 1 GPU - 220 seconds

      2 CPU - 150 seconds

      2 CPU, 1 GPU - 93 seconds

      3 CPU - 111 seconds

      (wall-clock times, not the predicted times shown in the monitor window)

      This trend continues, with the runs including the GPU being marginally quicker than the equivalent CPU-only runs (I tested up to 8 cores total). My question is: why does the GPU slow the run when combined with only 1 CPU core, but speed it up when combined with 2 or more? The link above shows a single GPU solving over 8x faster than 32 cores (so roughly 256x faster than a single core, if the scaling were linear?). I appreciate this is hardware dependent, but I would have expected bigger gains than those above. Also, when running with the GPU I barely see any activity on the GPU CUDA monitor, and the temperature remains very low.
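      For reference, the relative speedups implied by the wall-clock times quoted above can be computed directly (a quick sketch; the times are those reported in the post):

```python
# Wall-clock times (seconds) for 10 iterations, as reported above.
times = {
    "GPU only": 367,
    "1 CPU": 348,
    "1 CPU + 1 GPU": 220,
    "2 CPU": 150,
    "2 CPU + 1 GPU": 93,
    "3 CPU": 111,
}

base = times["1 CPU"]  # use the single-core run as the baseline
for config, t in times.items():
    print(f"{config:>14}: {base / t:.2f}x vs 1 CPU")
```

This makes the pattern in the question explicit: GPU-only is actually slightly slower than one CPU core (~0.95x), while "2 CPU + 1 GPU" gives the best ratio in the table (~3.7x).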

      Hardware info:

      CPU - Core i9-7900X (3.30 GHz, 10 cores / 20 threads; can't disable HT as the company has blocked BIOS access)

      32 GB RAM @2666 MHz

      GPU - NVIDIA Quadro P4000 (which is on the list of supported cards here Graphics Cards Tested (ansys.com))

       

      Am I expecting too much? Does the GPU solver only help in certain types of models or is there something else I need to turn on? Any help would be appreciated!

    • DrAmine
      Ansys Employee

      Are you using the native GPU solver? I ask because species transport is not yet supported in 22R2.

    • Dominic de'Ath
      Subscriber

      Apologies, I should have mentioned that we are still running 2019 R3 (although we should be updating to 22R2 soon). But yes, this is the native solver.

      Is it worth repeating my above test for a flow only model?

    • DrAmine
      Ansys Employee

      There is no native GPU Solver within 2019R3.

      So you need to update to 22R2 first and then try the native GPU solver.

       

      In 2019Rx you have GPU acceleration for coupled-flow calculations with the default solver. That will only accelerate cases where the linear-equation (LE) solve takes more than 80% of the time per iteration (check the LE time in the timer usage for a CPU-only run).
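      The 80% rule of thumb follows from Amdahl's law: if only the LE solve is offloaded to the GPU, the overall speedup is capped by the fraction of iteration time that step occupies. A minimal sketch (the fraction and speedup values below are illustrative assumptions, not Fluent measurements):

```python
def amdahl_speedup(le_fraction, le_speedup):
    """Overall speedup when only the LE solve, taking le_fraction
    of total iteration time, is accelerated by a factor le_speedup."""
    return 1.0 / ((1.0 - le_fraction) + le_fraction / le_speedup)

# If the LE solve is 50% of iteration time, even an infinitely
# fast GPU caps the overall gain at ~2x:
print(amdahl_speedup(0.5, 1e9))   # ≈ 2.0

# At 80% LE time, a 10x-faster solve yields only ~3.6x overall:
print(amdahl_speedup(0.8, 10))    # ≈ 3.57
```

This is why the acceleration only pays off when the LE solve dominates the iteration: everything outside it still runs at CPU speed.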

    • Dominic de'Ath
      Subscriber

      Many thanks, that has helped! 

      What factors influence the proportion of time taken to solve the linear equations (i.e. under what circumstances would the LE wall-clock time be >80%)? I have been trying several things today and I am struggling to see the relationships. Most changes I make do seem to alter the time to solve the linear equations, but they equally alter the time to assemble them, so the percentage stays approximately the same; when it does change, it is never >80% (I have tried different meshes, different turbulence models, different boundary conditions, etc.).

    • DrAmine
      Ansys Employee

      It is a bit hard to explain here, but that timing really determines whether a GPU will scale up compared with the CPU (number of cells, partitioning, interconnect and architecture, etc.). Again, with the native GPU solver things are completely different, as it is a genuine solver optimized to run on the GPU. I suggest you try that as soon as possible.
