Lumerical Python API doesn't use the full CPU capacity assigned to the job on HPC
Hi Lumerical Team,
I am trying to run the Lumerical Python API on an HPC cluster. The workflow is:
Run Python on the HPC --> Python generates a pattern --> the Lumerical Python API reads the pattern and creates a simulation file through an .lsf script --> run the created simulation file on the HPC --> Python reads the results.
For this job, we have assigned 4 nodes with 24 CPU cores each, i.e. 96 cores in total. However, while the simulation is running, it only uses 8 cores, which results in a very long simulation time. Also, during this process only one "lum_fdtd_solve" process is running.
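For context, the job script we submit looks roughly like the sketch below. The scheduler directives, module name, engine binary, and file names here are placeholders for illustration, not our exact setup:

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=24

# Placeholder module name -- our real environment may differ
module load lumerical

# Step 1: Python builds the pattern and writes the simulation file via lumapi
python build_simulation.py

# Step 2: run the FDTD engine across all allocated ranks (96 in total);
# the engine binary name is a placeholder for whichever MPI engine is installed
mpirun -n 96 fdtd-engine-ompi-lcl simulation.fsp

# Step 3: Python reads the results back
python read_results.py
```

Our suspicion is that when the solve is launched from inside the Python API session (rather than with an explicit mpirun call like the one above), it falls back to a default resource configuration instead of using all 96 allocated cores.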
In the other case, where we create the simulation file on a local desktop and just use "lum_fdtd_solve" to solve it, the solver uses the full assigned capacity.
Could you help me with this?