Fluids

Regarding the difference in the number of faces counted on a thread in serial and parallel Fluent

    • ch15d410
      Subscriber

      Dear Ansys 


      I want to find the average temperature on a boundary and write a source term as a function of that average temperature, using DEFINE_SOURCE.


      To find the average temperature of all the faces on that boundary thread, I first count the number of faces on the boundary, which is done in a face loop as given below.


      The UDF below is just to check, via the reports, whether I am getting the right number of faces.


      #include "udf.h"

      DEFINE_SOURCE(t1, c, t, dS, eqn)
      {
          real source;
          int nfaces = 0;
          int zon_ID = 7;  /* hotside */
          Thread *t_a = Lookup_Thread(Get_Domain(1), zon_ID);  /* hotside */
          face_t f_a;

          begin_f_loop(f_a, t_a)
          {
              nfaces += 1;
          }
          end_f_loop(f_a, t_a)

          source = nfaces;
          dS[eqn] = 0.0;
          return source;
      }


      This code compiles without any errors or warnings.


      According to my hand calculations, if the total number of faces is 200 and the volume of the interior where the source term is applied is 10, then I should get 2000 W as the net total heat transfer in the flux reports, according to the UDF written above.


      The problem is that this code counts the right number of faces in the serial session but gives a different number in the parallel session.


      If I include #if !RP_HOST at the beginning of the code, then while compiling I get the warning that 't1' should return a value, but no errors. It also gives me the wrong number of faces compared to my hand calculations.


      Please suggest how to make this code count the right number of faces in parallel as well.


       


      Thanks in advance

    • DrAmine
      Ansys Employee
      Refer to the parallelisation part of the customization manual. This is straightforward, and if you still have issues you can attend some training. You need to do a global reduction and count only on principal faces.
    • Rob
      Ansys Employee

      To confirm, the surface zone is a boundary thread and not the interior associated with the volume?  This doesn't remove the need for a parallel UDF. 

    • DrAmine
      Ansys Employee
      And it is better to find the average in a separate DEFINE_ADJUST UDF, since the source UDF already loops over the cells. So start with DEFINE_ADJUST, or use the report-definition API for a defined report definition.
    • ch15d410
      Subscriber

      Yes Sir, the surface boundary thread is not included in the interior volume where the source term is applied.


      But I am not sure why the serial/parallel discrepancy exists.


      Thanks

    • ch15d410
      Subscriber

      Thanks for your suggestions, Sir. I'll get back after giving it another try.


       

    • ch15d410
      Subscriber

      Dear Sir


      Now I wish to calculate the area-weighted average instead of the simple average, since the two are not the same on an unstructured mesh. I have the following code, which compiles without any errors but with the warning that 't1' should return a value if !RP_HOST is mentioned in the DEFINE_SOURCE part.


      When I start the calculation, I get this error:


      Node 0: Process 26112: Received signal SIGSEGV.
      Node 1: Process 35848: Received signal SIGSEGV.
      Node 2: Process 32812: Received signal SIGSEGV.
      Node 3: Process 10080: Received signal SIGSEGV.
      MPI Application rank 4 exited before MPI_Finalize() with status 2.


       


      Please look at my UDF and suggest something Sir


       


      #include "udf.h"

      static int zon_ID = 7;      /* hotside */
      static int te_zon_ID = 12;  /* PN_TE */

      DEFINE_ADJUST(adjust, domain)
      {
          real area, A[ND_ND], tot_area = 0;
          face_t f_a;
          Thread *t_a = Lookup_Thread(Get_Domain(1), zon_ID);     /* hotside */
          cell_t c_te;
          Thread *t_te = Lookup_Thread(Get_Domain(1), te_zon_ID); /* in PN_TE */

          begin_f_loop(f_a, t_a)
          {
              F_AREA(A, f_a, t_a);
              area = NV_MAG(A);
              tot_area += area;
          }
          end_f_loop(f_a, t_a)

          tot_area = PRF_GRSUM1(tot_area);

          begin_c_loop_int(c_te, t_te)
          {
              C_UDMI(c_te, t_te, 1) = tot_area;
          }
          end_c_loop_int(c_te, t_te)
      }

      DEFINE_SOURCE(t1, c, t, dS, eqn)
      {
          real source;
          real tot_area;
          real w = 0.002;        /* thickness in m */
          real tf_area = 0.0016; /* total area of interface in m2 */
          real vol = tf_area * w;

          tot_area = C_UDMI(c, t, 1);
          source = tot_area / vol;
          dS[eqn] = 0.0;
          return source;
      }


       


      My idea is to write a source term for all the cells in the interior as a function of the average face temperature. I am calculating the total area first, to check whether the area-weighted average can be computed properly.


      Thanks

    • DrAmine
      Ansys Employee
      Again, there are some examples in the documentation where you can learn how to get it parallelized. If your supervisor has Customer Portal access, he can get you more examples. Again, the parallelization has to be done in the adjust UDF and not in the source UDF.
    • ch15d410
      Subscriber

      #include "udf.h"

      static int zon_ID = 7;  /* hotside */

      int nfaces;

      DEFINE_ADJUST(adjust, domain)
      {
      #if !RP_HOST
          face_t f_a;
          Thread *t_a = Lookup_Thread(Get_Domain(1), zon_ID);  /* hotside */

          nfaces = 0;
          begin_f_loop(f_a, t_a)
          {
              if (PRINCIPAL_FACE_P(f_a, t_a))  /* always TRUE in serial version */
                  nfaces += 1;
          }
          end_f_loop(f_a, t_a)

      #if RP_NODE
          nfaces = PRF_GRSUM1(nfaces);
      #endif
      #endif
      }

      DEFINE_SOURCE(s1, c, t, dS, eqn)
      {
          real source;
          real w = 0.002;        /* thickness in m */
          real tf_area = 0.0016; /* total area of interface in m2 */
          real vol = tf_area * w;

          source = nfaces / vol;
          dS[eqn] = 0.0;
          return source;
      }


       


      Dear Sir, with this UDF I don't get the MPI exit error, but it still does not give me the exact count of the faces.


      I suspect that the value of nfaces calculated in DEFINE_ADJUST is not being accessed in DEFINE_SOURCE. Is this true?


      Please suggest what to do.


      Thanks

    • DrAmine
      Ansys Employee
      The source expression you are using is strange. What do you want to do? I would rather try accessing the face of the wall next to the cell and use that.

      Regarding the code, try using a static variable.

      Again, we do not debug UDFs.
    • DrAmine
      Ansys Employee
      For sanity, do the sync with the host so that you have the value there too (optional). Also, DEFINE_ADJUST is not ideal here: as I understand it, the number of faces will remain constant, so use either a macro that is executed on demand or after a case load, or simply provide the number of faces manually.