Automatic reduction of cores on cluster

    • pierre-luc.piveteau


      I'm running Lumerical simulations on my university cluster.

      I noticed that, in some cases, the Solve engine automatically reduces the number of CPU/processors of my simulation:

      In the log file, I typically see this output: "number of processors was reduced to 50 from a target of 56".

      Why is this? It seems to happen for small FDTD meshes (i.e. a low total number of Yee cells).

      Is there a way to "force" the number of cores? In my particular case this is a financial concern, since I'm paying for the whole node regardless of how many CPUs/processors the Solve engine actually uses on the node(s)...

      Thank you !

    • Lito Yap
      Ansys Employee
      How are you running the simulation on your cluster? Can you share the command or submission script you use?
      The application chooses the number of processes based on your simulation job. If you request more resources than the job needs, it will reduce the count to an appropriate number. Try requesting and running with only 48 cores for this specific simulation file; this might give you better performance. Each simulation/project file requires different resources for optimum performance, and over-subscribing your simulation with more processes than it needs can hurt performance/speed.
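      For reference, a minimal sketch of a Slurm submission script that requests a fixed core count, as suggested above. The install path, engine binary name (`fdtd-engine-ompi-lcl`), and project filename here are assumptions; adjust them to match your cluster's Lumerical installation and your own file.

      ```shell
      #!/bin/bash
      #SBATCH --job-name=fdtd_sim
      #SBATCH --nodes=1
      #SBATCH --ntasks=48          # request only the cores the job can use
      #SBATCH --time=04:00:00

      # Paths below are placeholders; use your site's actual install location.
      LUM_BIN=/opt/lumerical/bin/fdtd-engine-ompi-lcl

      # Launch the FDTD engine with a process count matching the request,
      # so the engine does not have to reduce it at startup.
      mpirun -n "$SLURM_NTASKS" "$LUM_BIN" my_simulation.fsp
      ```

      Matching `--ntasks` to the number of processes the engine can actually use for the mesh also avoids paying for idle cores on the node.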
