General Mechanical

GPU Acceleration in Mechanical

    • Palbo
      Subscriber

      I'm curious why only certain graphics cards, like the NVIDIA Teslas, are supported for GPU acceleration in Ansys Mechanical computations. Secondly, I'm using a Quadro RTX 4000, which is a tested display card for Ansys and of course does a lovely job of keeping the display snappy. I would love to be able to use it to assist in solving simulations as well, and I was wondering if there's any way to do that. (Most of what I'm doing at the moment is in Static Structural, and I can provide more details about my system if necessary; I'm rather new to Ansys, so I don't know what would be useful to share.) I don't mind if incorporating a GPU like mine wouldn't give a significant performance boost; I would just like to find out whether it can be done at all.


      When I check the box to enable NVIDIA GPU acceleration in the advanced solver settings, the solution returns an error once the mathematical model is built. The parts that seem relevant are the following:


      No recommended GPU devices have been detected on machine               
       DESKTOP-5VIE40K.  Only Tesla-series or Quadro                          
       P5000/P6000/GP100/GV100/RTX6000/RTX8000 GPU devices are recommended at 
       this release.  For optimal performance, install a recommended GPU      
       device in this machine.  If you wish to use an alternative GPU device, 
       please review the recommendations in the section titled "Requirements  
       for the GPU Accelerator in Mechanical APDL" in the Installation Guide  
       for your platform. 


      As well as,


      Number of GPUs requested                :    1
      GPU Acceleration: NVIDIA Library Requested but not Enabled
      GPU Device with ID =  0 is: Quadro RTX 4000
      GPU Driver Version: 10.10
      CUDA Version: 10.0


      Now, the first bit made it seem like I can certainly run computations with non-recommended GPUs by following a certain procedure, so I went to the guide it mentioned and found the following:


      To utilize an NVIDIA GPU device that is not on the recommended list of cards, set the following environment variable:


      ANSGPU_OVERRIDE=1


      This is followed by a little warning saying that doing this with new, powerful graphics cards that haven't been tested is just fine, but doing it with weaker cards may actually hurt performance. Again, I don't mind this (so long as it's reversible), so I want to try it out for fun. I do want to be sure that I set the environment variable correctly, as this isn't something I do often beyond adding the odd thing or two to the system path. So, do I just create a new system environment variable called ANSGPU_OVERRIDE and give it the value 1? Or is it something to be done in the "Additional Command Line Arguments" section of the Mechanical solving tab?


      Finally, after the environment variable override, should I still expect to encounter a snag with the "NVIDIA Library Requested but not Enabled" error that came up in the solver output, or will that be taken care of by the override, or HPC, or something like that?


      If you've bothered to read through all this uninformed musing, thank you very much; I'm grateful for any advice, and have a lovely day!

    • Aniket
      Ansys Employee

      First of all, thanks for your "uninformed musing"; I wish more people did this much!


      Yes, just create a new system environment variable (user environment variable would do too, I guess) called ANSGPU_OVERRIDE and give it the value 1.
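      In case a command-line route is easier, here is a minimal sketch of setting the variable from a terminal (this is a generic illustration, not Ansys-specific: `setx` is the standard Windows way to persist a user environment variable, while a plain assignment only lasts for the current shell session):

```shell
# Windows: persist ANSGPU_OVERRIDE=1 as a user environment variable.
# It only takes effect in NEW processes, so restart Workbench/Mechanical:
#   setx ANSGPU_OVERRIDE 1

# Current shell session only (e.g. if launching the solver from this
# same terminal):
export ANSGPU_OVERRIDE=1
echo "$ANSGPU_OVERRIDE"
```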


      For "NVIDIA Library Requested but not Enabled", I think you will need the CUDA Toolkit from NVIDIA:


      https://developer.nvidia.com/cuda-downloads
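      A quick way to check what is already installed, as a sketch using NVIDIA's standard tools (`nvidia-smi` ships with the driver, `nvcc` with the CUDA Toolkit; the guards make it safe on machines without either):

```shell
# Report the NVIDIA driver, if present.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,driver_version --format=csv
else
    echo "nvidia-smi not found: NVIDIA driver not installed"
fi

# Report the CUDA Toolkit compiler, if present.
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version
else
    echo "nvcc not found: CUDA Toolkit not installed"
fi
```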


      And finally, to answer why so few GPUs support acceleration: in theory, only GPUs with significant double-precision (FP64) performance are useful for GPU acceleration.


      -Aniket



    • Palbo
      Subscriber

      Hi, Aniket!


      Thanks for the information! I put in the environment variable, and straight away I could use the GPU acceleration setting without raising any errors. The reason this reply took a while is that I also decided to do a little ad-hoc testing with a stopwatch and the Task Manager.


      With GPU acceleration off, there was no GPU utilization during solving whatsoever, as expected: no VRAM usage, no CUDA usage, etc. The simulations ran at a normal speed, though I only have 16 GB of RAM at the moment, so for larger solutions I got a lot of disk utilization (we're talking writing 300+ MB per second for about a third of the total time, and reading back at about 2 GB per second towards the end).


      With GPU acceleration enabled, building the mathematical model looked about identical, but the CPU hovered at lower overall utilization, and there was a lot of CUDA usage, showing that the GPU was indeed computing away. Success! Plus, it used a considerable amount of its VRAM, which I think is why there was much less disk utilization with the GPU enabled. Both tests took about the same amount of time, though I imagine that once I put more RAM in my machine, the disk usage will drop for the CPU-only calculations and the CPU will be faster overall, unless you have a GPU with excellent double-precision FLOPS.


      For anyone else following my steps: I ended up not needing any additional toolkits or libraries, just the environment variable. This was a good bit of fun!
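      If anyone wants numbers rather than the Task Manager graphs, the driver's own nvidia-smi tool can sample the same thing. A sketch using its standard query flags (guarded so it degrades gracefully on machines without the NVIDIA driver):

```shell
# One-shot sample of GPU compute utilization and VRAM usage; add "-l 1"
# to repeat the sample every second while a solve runs.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv
else
    echo "nvidia-smi not available on this machine"
fi
```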
