peteroznewman
Subscriber

I recommend you get a copy of their models to do benchmarking on different PC hardware configurations.

I once did a lot of benchmarking to decide whether I should invest in a GPU. I gathered 9 different input files from models of various sizes. If the researchers are using Workbench/Mechanical, there is a Solver Files directory where you will find the ds.dat file. Collect a set of those files and give each one a unique name.
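As a sketch, the collection step can be scripted; the folder layout below (dp0/SYS/MECH) is an assumption based on a typical Workbench project, so adjust the pattern to match your own projects.

```shell
#!/bin/sh
# Copy each project's ds.dat into one folder under a unique name.
# The *_files/dp0/*/MECH layout is an assumption -- adjust to your projects.
harvest_ds() {
  root=$1; dest=$2; i=1
  mkdir -p "$dest"
  for d in "$root"/*_files/dp0/*/MECH; do
    [ -e "$d/ds.dat" ] || continue
    cp "$d/ds.dat" "$dest/model${i}.dat"
    i=$((i+1))
  done
}

# Usage: harvest_ds /path/to/projects ./benchmarks
```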

Some models will run for minutes and others for hours. The elapsed time is roughly time_per_iteration*number_of_iterations for a nonlinear static structural model.

Both of these factors depend on details of the model. One way to reduce variation between models during benchmarking is to run all models for the same number of iterations, say 5, then stop the solver regardless of how many iterations the model needed to get to the end time of the simulation. There is a command to force this to happen.

Use a batch file to call the solver from the command line for each of the 9 input files, and run each one for 5 iterations. The 9 output files will have the elapsed time recorded at the end of the file.
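A minimal sketch of such a driver script (a Windows .bat file works the same way): the executable name "ansys241" and the model file names are placeholders for your own installation and inputs, but -b (batch), -dis (Distributed Ansys), -np (core count), and -i/-o (input/output files) are standard MAPDL command-line options.

```shell
#!/bin/sh
# Hypothetical benchmark driver -- substitute your own executable and files.
SOLVER="ansys241"   # your MAPDL executable
CORES=7             # -np: number of cores to use

for f in model1 model2 model3; do   # ...one entry per benchmark input file
  CMD="$SOLVER -b -dis -np $CORES -i ${f}.dat -o ${f}.out"
  echo "$CMD"       # remove the echo to actually launch each run
done
```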

This makes it very efficient to test different hardware configurations in a standardized test and accurately report the impact of RAM, cores, GPU, clock speed, SSD vs HDD and other hardware specs on the elapsed times.
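To tabulate the results, something like the following can pull the timing line out of each output file. The exact wording of the elapsed-time line varies by solver version, so the grep pattern here is an assumption to verify against one of your own .out files.

```shell
#!/bin/sh
# Print the last "elapsed time" line from each solver output file.
# The grep pattern is an assumption; check it against a real .out file.
collect_elapsed() {
  for f in "$@"; do
    printf '%s: %s\n' "$f" "$(grep -i 'elapsed time' "$f" | tail -n 1)"
  done
}

# Usage: collect_elapsed *.out > summary.txt
```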

There are solver parameters that will affect solution time independently of the model. For example, the user can request the Direct (Sparse) solver or the Iterative (PCG) solver for a model that supports both (some models will only run on the Direct solver).
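In a MAPDL input file such as ds.dat, this choice is made with the EQSLV command in the solution phase, before the SOLVE command:

```
EQSLV,SPARSE   ! Direct (Sparse) solver
! or
EQSLV,PCG      ! Iterative (PCG) solver -- not every model supports it
```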

Another solver parameter is the choice of Distributed Ansys vs Shared Memory Parallel.  Most models run faster on the Distributed Ansys solver.
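On the command line, the two modes are selected with the -dis and -smp flags (the executable name is a placeholder for your installation):

```shell
# Same model, same core count, two parallel modes -- compare elapsed times.
ansys241 -b -dis -np 4 -i model1.dat -o model1_dis.out   # Distributed Ansys
ansys241 -b -smp -np 4 -i model1.dat -o model1_smp.out   # Shared Memory Parallel
```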

Assuming you have HPC licenses, the number of cores can be adjusted. More cores do not automatically reduce the elapsed time. If a user has an 8-core machine, the elapsed time may be shorter if they request solving on 7 cores instead of 8.

When I tested the GPUs available years ago, adding a GPU did not automatically reduce the elapsed time. It sometimes increased the elapsed time, especially when adding the GPU required removing a core to keep the HPC license usage constant.
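For reference, GPU acceleration is requested on the MAPDL command line with the -acc and -na flags (executable name assumed; verify the flags for your version):

```shell
# 3 cores + 1 NVIDIA GPU, keeping HPC license usage constant vs. 4 cores
ansys241 -b -dis -np 3 -acc nvidia -na 1 -i model1.dat -o model1_gpu.out
```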

All the above means that there are 8+ ways to run each model on the same hardware configuration!

I assume the researchers know not to use a network file share for scratch space while solving. Nothing slows down solving more than having the temporary files on a network file share. I hope you have taught them to always use a local drive on their computer for solving the models and to only archive models on the network file share. Ansys Workbench allows you to configure local scratch space, so they can save the model on the network file share, but it will use the local scratch space while solving.