UDF Issue with High Performance Computer (HPC)


I am currently solving a sliding mesh problem of a wind turbine using an interpreted UDF for zone motion in Fluent V19.

When I start Fluent and run my journal file on my desktop PC (a single-processor, 8-core machine), the run starts fine and the blade lift and drag monitors are calculated correctly.

To be more time efficient, I'm trying to run the same case on the HPC, but the results are coming out wildly different and incorrect. In the journal file, I read in the UDF file with:

; Read in the user defined function for blade sliding mesh interface

/define/user-defined/interpreted-functions "4_Blade_40_Zone_Motion_900_100_V19.c" "cpp" 10000 yes

On the HPC machine I am using two nodes, each with 16 cores. This is my first attempt at running a model on the HPC, and I was wondering whether there is anything else I need to be aware of, or add to the journal file, to ensure the UDF is initialized correctly across the multiple nodes. I can't think of anything else that could cause the issue.
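For reference, the UDF is a DEFINE_ZONE_MOTION routine along these lines (a simplified sketch only, not my actual file; the angular velocity and axis values below are placeholders):

#include "udf.h"

/* Simplified sketch of a zone-motion UDF: constant angular velocity
   about the z-axis. The numeric values are placeholders. */
DEFINE_ZONE_MOTION(blade_motion, omega, axis, origin, velocity, time, dtime)
{
    *omega = 10.0;                    /* rad/s, placeholder value */
    N3V_D(axis,   =, 0.0, 0.0, 1.0); /* rotation axis direction */
    N3V_D(origin, =, 0.0, 0.0, 0.0); /* point on the rotation axis */
    return;
}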

Any information would be appreciated.


  • Kremella Admin
    edited September 2020

    What differences are you seeing? Is your simulation fully converged?

    Are you able to get similar convergence on the two models?

    Could you please elaborate on the differences?



  • Hi Karthik,

    Thanks for the reply. The model I am running on my PC uses an interpreted UDF. The model has 4 blades; after around 10 rotations, once the blade/wake interactions are accounted for, the lift and drag monitors become periodic, and convergence is reached after around 20 rotations.

    For the same model run on the HPC, the lift and drag monitors don't become periodic even after 30 rotations, are nowhere near convergence, and show little pattern, which suggests the UDF possibly isn't working.

    Would using a compiled UDF help in this instance when running in parallel on the HPC?
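    If a compiled UDF is the way to go, I assume it would be loaded from the journal with something like the following (an untested sketch only; the library name "libudf" is just the default, and the exact TUI arguments seem to vary between versions):

    ; Build and load a compiled UDF library (untested sketch; prompts vary by version)
    /define/user-defined/compiled-functions compile "libudf" yes "4_Blade_40_Zone_Motion_900_100_V19.c" "" ""
    /define/user-defined/compiled-functions load "libudf"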

  • Rob UK Forum Coordinator

    What happens if you take the case & data from the local machine and continue the run on the cluster?

  • Hi Rob,

    At the moment I create the case file fully after the model has been set up on my local machine, then transfer it onto the HPC and initialize it etc. from the journal file.

    The same case file runs OK on my local machine, but I haven't tried running it for more iterations on the HPC; I'll give that a go.

  • To try to narrow down the problem, I ran a model created on my local machine (a Windows machine) on that machine, then copied the case file onto the HPC to rerun it. This time the model doesn't have a UDF: I've assigned a sliding mesh interface between each of the blades and the rotor with a constant angular velocity, but the same issue persists. I have checked the error and .out files, but there are no messages to give any indication.

    When I look at the velocity contour plots, it does look like the interfaces are not being solved in the same manner on the HPC, which is a Linux system. Are there any additional steps that I need to take on a Linux machine in the problem setup?

  • Rob UK Forum Coordinator
    edited October 2020

    No, the files should pass straight over, and I've not noticed any such effects in the builds over the last several years. The only issue is line-ending characters when going from DOS to Linux, but the case & data files don't suffer from this.

    Have you used any spaces/odd characters in any of the labels? Please post some images.

  • Hi,

    My initial models were run on my final problem mesh, but to save time I'm now running a coarse mesh on both my local machine and the HPC to check that the results are the same, even though they are not realistic. An example of a drag monitor is shown in figure 1 below, just to show the differences for the same system setup.

    To remove line-ending issues I've converted my journal and job script files to UNIX format in Notepad++. My labels don't include any spaces or special characters, and to my knowledge I can't see anything I am doing wrong in the setup. I'm exporting the case file from my local machine with Fluent in serial mode.

    I've included screenshots of my .out file, journal file and job script in case they help. My local machine is running V19.4 but the HPC is running V19.1; I'm not sure if this makes a difference? The only thing I did notice in the .out file was a warning about missing atomic number properties, but I'm not sure if this is related.

    figure 1

    out file

    Journal File

    Job Script

  • Hello,

    Yes, it is expected that the same case could behave somewhat differently when run on two different versions. The Fluent code undergoes many changes to both the models and the solver between releases, and the differences can be significant if the code has undergone many such iterative changes. It is quite possible that this is what you are seeing in your case. If you have access, I'd recommend running the same case on two different machines using the same version of Fluent to verify this.

    Thank you.
