*.scl file is not updated, participants are not coupled when running FSI on HPC

Jirong Member Posts: 23

Hi All,

I'm trying to run an FSI problem on HPC via a Slurm script. The two participants (ANSYS Mechanical + Fluent) do not seem to be coupled together; the *.scl file stopped updating after the Setup Validation step. Although both participants generated some results, they were solved separately, not coupled.

I know that ANSYS does not officially support Slurm, but I have seen many people share their successful experiences, and our school only uses Slurm. So I would like to see if anyone has suggestions about this. I would appreciate it very much.




Best Answer


  • Steve Posts: 153, Forum Coordinator

    Hi Jirong,

    When submitting a System Coupling job to a job scheduler (System Coupling supports Slurm), you'll need to add the command PartitionParticipants to the .py launch script. The details are discussed more here: https://ansyshelp.ansys.com/account/secured?returnurl=/Views/Secured/corp/v211/en/sysc_ug/sysc_userinterfaces_advtasks_parallel.html

    Please see this tutorial for how to set up the .py launch script. I recommend using the method in this tutorial with .scp files instead of the .scl file method.

    If you post your .py script and Slurm script I might be able to provide more suggestions.
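    In case you can't open the link: a minimal .py launch script for this kind of run typically looks like the sketch below. The input-file name matches your coupler.sci; the partitioning algorithm name is one of the documented options, so please check it against the User's Guide for your version before relying on it.

```
# Sketch of a System Coupling launch script (run.py) -- assumes the
# coupled analysis is defined in coupler.sci in the working directory.
# Read the coupled-analysis setup
ImportSystemCouplingInputFile(FilePath='coupler.sci')
# Distribute the allocated cores among the participants before solving;
# 'SharedAllocateMachines' is one of the documented algorithm names
PartitionParticipants(AlgorithmName='SharedAllocateMachines')
# Run the coupled solution
Solve()
```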

  • Jirong Posts: 42, Member

    Hi Steve,

    Thanks for the information! I'm glad that System Coupling supports Slurm; that's great news for me. But I am using ANSYS via the high-performance computer at my university, and I don't have an account to open the links you shared. Is there any other way to view those links?

    BTW, under my FSI file directory, I created the files following the System Coupling User's Guide (Workflows for System Coupling Using the Command Line). I have a *.dat file for ANSYS Mechanical, *.cas and *.jou (journal) files for Fluent, and a *.sci file for System Coupling. I didn't notice that any Python script was needed, so I don't have a .py script, but I have pasted my Slurm script below. Any suggestions would be nice.

    #!/bin/bash
    #SBATCH --job-name   ANSYS_FSI
    #SBATCH --time     02:00:00     # Walltime
    #SBATCH --ntasks    3
    #SBATCH --mem-per-cpu  16gb        # Memory per CPU
    #SBATCH -o s2.out        # stdout
    #SBATCH -e s2.err        # stderr
    #SBATCH --hint     nomultithread   # No hyperthreading
    cd /panfs/roc/groups/14/tranquil/li000096/ansys/dhp/Slurm/New_large
    module load ansys/20.1
    export SLURM_EXCLUSIVE="" # don't share CPUs
    # With --ntasks 3: 1 coupler + 1 structural + 1 fluid process
    MECHANICAL_CPUS=1
    FLUID_CPUS=1
    echo "CPUs: Coupler:1 Struct:$MECHANICAL_CPUS Fluid:$FLUID_CPUS"
    srun -N1 -n1 /bin/hostname | sort > my_node_list
    # ANSYSDIR must be set to the v201 installation directory before this point
    SERVERFILE=scServer.scs   # server info file that System Coupling writes on startup
    $ANSYSDIR/aisol/.workbench -cmd ansys.services.systemcoupling.exe -inputFile coupler.sci &
    # Wait till $SERVERFILE is created
    while [[ ! -f "$SERVERFILE" ]] ; do
      sleep 1 # waiting for SC to start
    done
    sleep 1
    # Parse the data in $SERVERFILE
    {
      read hostport
      read count
      read ansys_sol
      read tmp1
      read fluent_sol
      read tmp2
    } < "$SERVERFILE"
    # $hostport has the form port@host; split it with parameter expansion
    port=${hostport%%@*}
    host=${hostport#*@}
    echo "Port number: $port"
    echo "Host name: $host"
    echo "Fluent name: $fluent_sol"
    echo "Mechanical name: $ansys_sol"
    # Run Fluent in the background; cancel the whole job if it fails
    fluent 3ddp -g -t$FLUID_CPUS -ssh -mpi=intel -scport=$port -schost=$host -scname="$fluent_sol" -cnf=my_node_list -i fluidFlow.jou > fluent.out || scancel $SLURM_JOBID &
    sleep 2
    # Run ANSYS Mechanical (MAPDL) in the background as well
    ansys201 -b -mpi ibmmpi -np $MECHANICAL_CPUS -scport $port -schost $host -scname "$ansys_sol" -cnf=my_node_list -i structural.dat > struct.out || scancel $SLURM_JOBID &
    # Wait for both background solvers to finish; without this the batch
    # script exits immediately and Slurm tears the job down
    wait
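    As a side note, the port@host parsing in the script can be checked in isolation against a synthetic server file. The layout below only mirrors the read sequence used in the script; the real file written by System Coupling may differ.

```shell
#!/bin/bash
# Stand-alone check of the port@host parsing used in the Slurm script.
# demo_server.txt is a synthetic stand-in for the real server file.
cat > demo_server.txt <<'EOF'
12345@node01
2
Solution
MAPDL-1
Solution 1
FLUENT-2
EOF
# Same read sequence as in the job script, redirected from the file
{ read hostport; read count; read ansys_sol; read tmp1; read fluent_sol; read tmp2; } < demo_server.txt
port=${hostport%%@*}   # text before the '@'
host=${hostport#*@}    # text after the '@'
echo "port=$port host=$host ansys=$ansys_sol fluent=$fluent_sol"
```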


