Electronics


Topics related to HFSS, Maxwell, SIwave, Icepak, Electronics Enterprise and more

ansysedt in batch using MPI

    • jsarlo
      Subscriber

      I am working with AnsysEM 2022 R2. I am a sysadmin working with a researcher. We are trying to run a batch job with Slurm and want to use multiple compute nodes in the cluster. We have the .aedt input file and are trying the following as the execution line of the job script:

      ansysedt -ng -batchsolve -dis -mpi -machinelist list=$hl num=$SLURM_NTASKS ${InputFile}

      When I watch the compute nodes that get assigned, I only see the first one being used; nothing ever starts on the second compute node. The $hl list gets built to something like list=compute-4-53-ib0:48:48:98%,compute-7-19-ib0:48:48:98%. I have also tried building the list with individual 1:1 entries, 48 per compute node (compute-4-53-ib0:1:1:98%,compute-4-53-ib0:1:1:98%...compute-7-19-ib0:1:1:98%, ...).
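
      For reference, the job script builds $hl roughly like this (a sketch; the scontrol/SLURM_CPUS_ON_NODE usage and the -ib0 suffix are illustrative of our node naming, not necessarily the exact script):

      # Sketch: build the host list from the Slurm allocation.
      # Each entry is host:tasks:cores:RAM-limit%, joined with commas.
      hl=""
      for host in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
          hl="${hl:+${hl},}${host}-ib0:${SLURM_CPUS_ON_NODE}:${SLURM_CPUS_ON_NODE}:98%"
      done
      # hl is then passed as: -machinelist list=$hl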

      Is there something else that needs to be on the command line to use both compute nodes, or is there something else that needs to be done?

      Jeff

       

    • randyk
      Ansys Employee

      Hi Jeff, 

      Please consider creating the following script (job.sh in this example), then run:
      dos2unix ./job.sh
      chmod +x ./job.sh
      sbatch ./job.sh

      Modify lines 2-3, 12-13, and 39 as needed.
      Note 1: the "numcores=xx" value on line 39 must match the allocated core count.
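      As a sketch (not part of the posted script), the core count can also be taken from Slurm's environment so it always matches the allocation:

      # Sketch (assumption, not in the original script): derive the core count
      # from the allocation so it cannot drift out of sync with '#SBATCH -n'.
      numcores=${SLURM_NTASKS}
      # ...then use "-machinelist numcores=${numcores}" on the ansysedt line.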

      job.sh
      #!/bin/bash
      #SBATCH -N 2               # allocate 2 nodes
      #SBATCH -n 32             # 32 tasks total
      #SBATCH -J AnsysEMTest     # sensible name for the job
      #SBATCH -p default           # partition name
      ##SBATCH --mem 0            # allocates all the memory on the node to the job
      ##SBATCH --time 0
      ##SBATCH --mail-user="user@company.com"
      ##SBATCH --mail-type=ALL
       
      # Project Name and setup
      JobName=OptimTee.aedt
      AnalysisSetup=""
       
      # Project location
      JobFolder=$(pwd)
       
      #### Do not modify any items below this line unless requested ####
      InstFolder=/opt/AnsysEM/v222/Linux64
       
      #SLURM
      export ANSYSEM_GENERIC_MPI_WRAPPER=${InstFolder}/schedulers/scripts/utils/slurm_srun_wrapper.sh
      export ANSYSEM_COMMON_PREFIX=${InstFolder}/common
      srun_cmd="srun --overcommit --export=ALL -n 1 -N 1 --cpu-bind=none --mem-per-cpu=0 --overlap "
      # note: the srun '--overlap' option was introduced in SLURM version 20.11. If running an older SLURM version, remove the '--overlap' argument.
      export ANSYSEM_TASKS_PER_NODE="${SLURM_TASKS_PER_NODE}"
       
      # Setup Batchoptions
      echo "\$begin 'Config'" > ${JobFolder}/${JobName}.options
      echo "'Desktop/Settings/ProjectOptions/HPCLicenseType'='Pack'" >> ${JobFolder}/${JobName}.options
      echo "'HFSS/RAMLimitPercent'=90" >> ${JobFolder}/${JobName}.options
      echo "'HFSS 3D Layout Design/RAMLimitPercent'=90" >> ${JobFolder}/${JobName}.options
      echo "'HFSS/RemoteSpawnCommand'='scheduler'" >> ${JobFolder}/${JobName}.options
      echo "'HFSS 3D Layout Design/RemoteSpawnCommand'='scheduler'" >> ${JobFolder}/${JobName}.options
      # If multiple networks on execution host, specify network CIDR 
      # echo "'Desktop/Settings/ProjectOptions/AnsysEMPreferredSubnetAddress'='192.168.1.0/24'" >> ${JobFolder}/${JobName}.options
      echo "\$end 'Config'" >> ${JobFolder}/${JobName}.options
       
      # Submit AEDT job (SLURM requires 'srun' and the tight-integration change in slurm_srun_wrapper.sh)
      ${srun_cmd} ${InstFolder}/ansysedt -ng -monitor -waitforlicense -useelectronicsppe=1 -distributed -machinelist numcores=32 -auto -batchoptions ${JobFolder}/${JobName}.options -batchsolve ${AnalysisSetup} ${JobFolder}/${JobName} > ${JobFolder}/${JobName}.progress
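
      For reference, the echo block above writes ${JobFolder}/${JobName}.options roughly as follows (with JobName=OptimTee.aedt):

      $begin 'Config'
      'Desktop/Settings/ProjectOptions/HPCLicenseType'='Pack'
      'HFSS/RAMLimitPercent'=90
      'HFSS 3D Layout Design/RAMLimitPercent'=90
      'HFSS/RemoteSpawnCommand'='scheduler'
      'HFSS 3D Layout Design/RemoteSpawnCommand'='scheduler'
      $end 'Config'

      Solver output is redirected to ${JobFolder}/${JobName}.progress, so after sbatch you can follow progress with, for example, "tail -f OptimTee.aedt.progress" and check the job state with "squeue -u $USER".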


