I am trying to run different files but I got the attached error. I ran them before without any problems.
Thank you for raising this interesting question. It would be better to open a separate post, since it is not the topic of this one, but I will reply here.
I would not expect the choice of MPI to affect the accuracy, except due to a possible bug.
However, the simulation efficiency, or the total CPU time consumed, depends on the MPI algorithm, the hardware and the operating system, as well as on the simulation file itself, which is complicated. As you can see from the memory check, a simulation file needs different amounts of memory for:
Initialization and mesh
Monitor data saved to fsp file
Depending on the simulation file, the other parameters mentioned above, all related to the computer doing the simulation, will affect the simulation time for the same simulation file. We have a description of parallel computing here: https://support.lumerical.com/hc/en-us/articles/360026321353-Distributed-computing
However, I could not find the original white paper about distributed computing; I only kept a Chinese post here (Ansys Insight: 关于FDTD 并行计算的有关问题, "Questions about FDTD parallel computing"). You do not need to read the Chinese text in the post; just look at the images, for example this one:
Each cube is one sub-volume to be simulated in one node/process, carved out of the original whole simulation volume (from simulation boundary to boundary). As you may guess, at each time step each cube needs to exchange data with its neighbor cubes, and there must be a management node/process to control this. The whole exchange is managed by the MPI. Depending on the hardware (data buses, bandwidth, etc.), different MPI implementations may handle this differently, which leads to different CPU times. Most of the time the CPU time is about the same, so we do not notice the difference. However, for some combinations of device, simulation file, hardware and OS the difference can be huge, as in the case you have right now.
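The sub-volume exchange described above can be illustrated with a minimal single-process sketch (a hypothetical toy model, not Lumerical's actual implementation): a 1D field is split into sub-volumes, and at each time step every sub-volume receives one boundary "halo" cell from each neighbor before updating, which is the data an MPI implementation would transfer between processes. The key point is that the decomposed run reproduces the full-domain run exactly; only the communication cost differs.

```python
import numpy as np

def step(u):
    # One explicit smoothing update: each interior cell is averaged with
    # its neighbors (a stand-in for one FDTD time step); endpoints are fixed.
    new = u.copy()
    new[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
    return new

def run_full(u0, steps):
    # Reference: advance the whole simulation volume in one piece.
    u = u0.copy()
    for _ in range(steps):
        u = step(u)
    return u

def run_decomposed(u0, steps, nparts=4):
    # Split the domain into sub-volumes, like the cubes in the figure.
    subs = [p.copy() for p in np.array_split(u0, nparts)]
    for _ in range(steps):
        # Halo exchange: each sub-volume gets one boundary cell from each
        # neighbor -- this is what MPI would communicate every time step.
        padded = []
        for i, s in enumerate(subs):
            left = subs[i - 1][-1:] if i > 0 else s[:1]
            right = subs[i + 1][:1] if i < nparts - 1 else s[-1:]
            padded.append(np.concatenate([left, s, right]))
        # Update each sub-volume independently, then strip the halos.
        subs = [step(p)[1:-1] for p in padded]
    return np.concatenate(subs)

u0 = np.zeros(64)
u0[32] = 1.0  # point source in the middle of the domain
full = run_full(u0, 20)
split = run_decomposed(u0, 20)
print(np.allclose(full, split))
```

The two results agree, which is why a different MPI should change the run time but not the physics: the per-step arithmetic is identical, and only how the halo data moves between processes depends on the MPI and the hardware.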
In summary, it is a known "issue", and you may choose the MPI that gives the fastest simulation on your computer. If you find significantly different results, please send us an email or post the results here in the forum so we can take a closer look.
The discussion above of course disregards different FDTD codes, which can also differ significantly.
Please right-click the error message and post a screenshot of it. Before doing so, I would suggest that you run a downloaded example and see if it runs successfully, for example this one: https://support.lumerical.com/hc/en-us/articles/360042703373-Mie-scattering-2D
I ran the file you sent and I got the same error. I have attached the detailed error.
It might be due to an MPI issue ("Failed to post close command error 1726"). Do you use Microsoft MPI? Please change to Intel MPI or MPICH2:
I am using a remote license. If your license is on the local computer, you can choose "local computer". Or you can try the different MPIs and keep the one that works well for you.
If the simulation worked fine before, you may need to restart your computer, install Microsoft updates, or reinstall the Lumerical products.
I tried both "Local computer" and "Intel MPI" and both are working. However, there is a huge difference in the "Max time remaining": with "local computer" the simulation time was 5 hours, while with Intel MPI it was just over one hour. Does the choice affect the accuracy of the results? Also, which one do you recommend, please?
Thanks for your reply!