I attempted to output /parallel/timer/usage and (benchmark '(iterate 10)) every 1000 timesteps to see where the issue might be occurring. I also compared running in batch mode against running in the cluster's OpenOnDemand GUI. I have typically been running Fluent interactively in the cluster GUI, since that makes it easier to watch simulation progress over time. The GUI shows similar parallel usage times but higher benchmark values. The following outputs were both taken after the same number of timesteps (1999).
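In case it helps to reproduce what I tried: I set up the periodic reporting with Fluent's execute-commands feature. This is only a sketch of my journal entries; the exact TUI prompt order can vary by Fluent version, and the command names (timer-report, bench-report) are just labels I picked:

```
; journal sketch (assumed syntax -- verify against your Fluent version's TUI)
; print parallel timer usage every 1000 time steps
/solve/execute-commands/add-edit timer-report 1000 "time-step" "/parallel/timer/usage"
; run the 10-iteration benchmark at the same interval
/solve/execute-commands/add-edit bench-report 1000 "time-step" "(benchmark '(iterate 10))"
```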



I am not sure what the benchmark values (cpu-time, solver, elapsed) mean - can you please let me know? I haven't found much documentation on this online.