January 11, 2022 at 1:46 pm
Ansys Employee
Hi Judith,
OK, it's interesting that it worked when 1 core was assigned to each solver.
runFSI should be a Python script, ideally with the extension .py. I haven't tested whether a .txt extension will be interpreted as a Python script, but since it works with 1 core per solver, that is likely not the problem.
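If you want to rule that out anyway, a quick check (file names here are just the ones from this thread) is to rename the script and point the launcher at it:

# give the run script a .py extension
mv runFSI.txt runFSI.py

# then reference runFSI.py on the systemcoupling launch line instead of runFSI.txt
"$SYSC_ROOT/bin/systemcoupling" -R runFSI.py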
Since it seems to be happy with 1 core each but not with more, I think we should look at the job submission method and core assignment. Have you got a SLURM submission script that sets the environment variables? Here is an example you can use for SyC jobs on SLURM:
#!/bin/bash -l
#
# Set slurm options as needed
#
#SBATCH --job-name SYSC
#SBATCH --nodes=2
#SBATCH --partition=ottc02
#SBATCH --ntasks-per-node=32
#SBATCH --output=%x-%j.out
#SBATCH --error=%x-%j.err
#SBATCH --export=ALL
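# point AWP_ROOT212 at your Ansys 2021 R2 (v212) installation -- this path is site-specific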
export AWP_ROOT212=/sw/arch/Centos8/EB_production/2021/software/ANSYS/2021R2/v212
#
export SYSC_ROOT=${AWP_ROOT212}/SystemCoupling
#
# print job start time and Slurm job resources
#
date
echo "SLURM_JOB_ID : "$SLURM_JOB_ID
echo "SLURM_JOB_NODELIST : "$SLURM_JOB_NODELIST
echo "SLURM_JOB_NUM_NODES : "$SLURM_JOB_NUM_NODES
echo "SLURM_NODELIST : "$SLURM_NODELIST
echo "SLURM_NTASKS : "$SLURM_NTASKS
echo "SLURM_TASKS_PER_NODE : "$SLURM_TASKS_PER_NODE
echo "working directory : "$SLURM_SUBMIT_DIR
#
echo "Running System Coupling"
echo "System coupling main execution host is $HOSTNAME"
echo "Current working directory is $PWD"
#echo "ANSYS install root is $AWP_ROOT212"
echo "System coupling root is $SYSC_ROOT"
echo "Run script is $1"
echo
"$SYSC_ROOT/bin/systemcoupling" -R runFSI.txt
You'd have to change the job name, nodes, partition, and ntasks-per-node to match your cluster.
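Once adapted and saved (say as sysc_job.sh -- the file name is just a placeholder), submitting and keeping an eye on the job would look something like:

sbatch sysc_job.sh          # submit the job to SLURM
squeue -u $USER             # check that the job is queued/running
tail -f SYSC-<jobid>.out    # follow the output file named by --output=%x-%j.out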
Paul