TAGGED: hfss, hfss-hybrid, hpc, ibm, mpi
-
November 30, 2020 at 5:05 pm
mahesh2444
Subscriber
Hello,
I have two machines connected through a router, each with ANSYS EDT 2020 R1 installed, and I want to perform distributed-memory simulations. For this I need to install MPI software (Intel or IBM) on both machines. I installed Intel MPI first, but it did not work: the simulation stopped with the progress window showing
[project name] - HFSSDesign1 - Setup1: Determining memory availability on distributed machines on [target machine name]
I don't know why it failed. When I googled this issue I came across some interesting threads:
https://forum.ansys.com/discussion/5534/best-way-to-create-a-cluster-of-4-computers-for-ansys-electronics-desktopto-share-memory-and-cores
https://forum.ansys.com/discussion/14155/hpc-setup-for-ansys-2020r1
https://forum.ansys.com/discussion/10353/mpi-authentication-in-hpc-using-multiple-nodes-in-ansys-electronics
https://forum.ansys.com/discussion/7313/hfss-hpc-setup-issues
All of these contain a magical six-step procedure which says to use IBM Platform Computing MPI, so I removed the Intel MPI libraries from the PC and installed the IBM MPI that ships with the installation.
To check whether this helps in setting up a distributed simulation, I followed the test mentioned in one of the threads above:
%MPI_ROOT%\bin\mpirun -hostlist localhost:2,:2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
But this method didn't work for me either, and it threw some errors I hadn't seen in the forum:
C:\Program Files (x86)\IBM\Platform-MPI\bin>%MPI_ROOT%\bin\mpirun -pass -hostlist localhost:2, :2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
Password for MPI runs:
mpirun: Drive is not a network mapped - using local drive.
mpid: PATH=C:\Program Files (x86)\IBM\Platform MPI\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\ :\Matlab install\runtime\win64;\Matlab install\bin;C:\Users\HP\AppData\Local\Microsoft\WindowsApps;
mpid: PWD=C:\Program Files (x86)\IBM\Platform-MPI\bin
mpid: CreateProcess failed: Cannot execute C:\Program Files (x86)\IBM\Platform-MPI\bin\%ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
mpirun: Unknown error
I want to know whether I have to perform any user registration for Platform MPI to work on my machines. If yes, please let me know how to do it.
If someone knows the solution, please reply to this question.
Thanks
Mahesh
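A note on the CreateProcess error above: mpirun reports the literal text %ANSYSEM_ROOT201% inside the path it tried to execute, which suggests that variable was not defined in that shell, so the path was resolved relative to the current directory. A minimal sketch of the check, with POSIX sh standing in for cmd.exe (on Windows the equivalent would be something like `echo %ANSYSEM_ROOT201%`):

```shell
# Minimal sketch: verify the environment variables the mpirun test
# relies on are actually set before invoking it. If a variable is
# undefined, its %NAME% form is left verbatim in the command line and
# the executable path cannot resolve (the failure seen above).
check_var() {
  name=$1
  eval "val=\${$name:-}"
  if [ -z "$val" ]; then
    echo "$name is NOT set"
  else
    echo "$name=$val"
  fi
}
check_var MPI_ROOT
check_var ANSYSEM_ROOT201
```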
-
December 1, 2020 at 11:30 am
mahesh2444
Subscriber
Hello,
An update on my question. Of the two machines, DESKTOP-CLH2LM1 --> (A) and DESKTOP-B4I9FQ7 --> (B).
When I run the test with A as localhost and B as the other machine, the MPI test command prints the Hello world! output, indicating a good connection between A and B:
C:\Users\Mahesh>%MPI_ROOT%\bin\mpirun -pass -hostlist localhost:2,DESKTOP-B4I9FQ7:2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
Password for MPI runs:
mpirun: Drive is not a network mapped - using local drive.
Hello world! I'm rank 0 of 4 running on DESKTOP-CLH2LM1
Hello world! I'm rank 1 of 4 running on DESKTOP-CLH2LM1
Hello world! I'm rank 2 of 4 running on DESKTOP-B4I9FQ7
Hello world! I'm rank 3 of 4 running on DESKTOP-B4I9FQ7
But when I try to run the same MPI test command with B as localhost and A as the other machine, I get the following output in the command prompt window:
C:\Users\HP>%MPI_ROOT%\bin\mpirun -pass -hostlist localhost:2,DESKTOP-CLH2LM1:2 %ANSYSEM_ROOT201%\schedulers\diagnostics\Utils\pcmpi_test.exe
Password for MPI runs:
mpirun: Drive is not a network mapped - using local drive.
ERR-Client: InitializeSecurityContext failed (0x80090308)
ERR - Client Authorization of socket failed.
Command sent to service failed.
mpirun: ERR: Error adding task to job (-1).
mpirun: mpirun_mpid_start: thread 19792 exited with code -1
mpirun: mpirun_winstart: unable to start all mpid processes.
mpirun: Unable to contact remote service or mpid
mpirun: An mpid process may still be running on DESKTOP-CLH2LM1
I want to know why the output looks like this and what settings I have to change to get the same output as described earlier in this comment.
To test the distributed simulation feature, I started the Helical_Antenna simulation (available in the examples; the ANSYS 2020 R1 Help advises using it as a test case) on Machine A.
I set up an analysis configuration consisting of the two machines, with Machine B first in the list followed by localhost.
But simulation steps like meshing and solving are only performed on Machine B and didn't use any of the hardware available on Machine A. Why did this occur?
What settings do I need to modify to use both machines in the simulation?
P.S.: Machine A runs Windows 10 Pro while Machine B runs Windows 10 Home. There is also one generation difference between the processors of the two machines. I have disabled the firewalls completely on both machines. They are in the same workgroup, not on a domain.

Thanks
Mahesh
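Since authentication succeeded in one direction and failed in the other above, it is worth running the hello-world test from both machines before attempting HFSS distribution. A sketch that only prints the two command lines (host names are the two machines from this thread; nothing here actually invokes MPI):

```shell
# Sketch only: print the Platform MPI hello-world test command to run
# from each machine, pairing localhost with the other host. The MPI
# connection should be verified in BOTH directions, since
# authentication can pass one way and fail the other.
mpi_test_cmd() {
  # $1 = the remote host to pair with localhost, two ranks each
  printf '%%MPI_ROOT%%\\bin\\mpirun -pass -hostlist localhost:2,%s:2 %%ANSYSEM_ROOT201%%\\schedulers\\diagnostics\\Utils\\pcmpi_test.exe\n' "$1"
}
echo "Run on DESKTOP-CLH2LM1:"
mpi_test_cmd DESKTOP-B4I9FQ7
echo "Run on DESKTOP-B4I9FQ7:"
mpi_test_cmd DESKTOP-CLH2LM1
```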
-
December 8, 2020 at 2:43 pm
ANSYS_MMadore
Ansys Employee
Hello Mahesh, do you have the same username & password on each machine? Please note, MPI requires the machines to be on a domain; it does not support a Workgroup environment, so this may not work exactly as expected.
Regarding using B first and localhost second in the list: the first listed machine is responsible for meshing and the adaptive passes prior to distribution to the other machines listed.

Thanks,
Matt
-
December 12, 2020 at 2:38 pm
mahesh2444
Subscriber
Yes, all the machines have the same username & password.
May I know what the workgroup name should be?
I have also observed that the sweep frequencies are solved locally rather than distributed. Isn't this feature available?
Thanks
Mahesh
-
December 14, 2020 at 1:25 pm
ANSYS_MMadore
Ansys Employee
There is no special requirement for the workgroup name. Can you share a screenshot of the HPC and Analysis Settings you are currently using? Please also click on each of the machines listed in the settings, select Test Machines, and share the output.

Thank you,
Matt
-
December 25, 2020 at 11:24 am
mahesh2444
Subscriber -
December 26, 2020 at 2:44 pm
mahesh2444
Subscriber
Hi,
I would like to know whether it is possible to solve a single sweep frequency in a distributed manner on two machines simultaneously. I will try to convey my need through the following scenario.
I am trying to simulate an array antenna at 25 GHz with dimensions of 70 x 20 mm. I unchecked the automatic settings in HPC and Analysis Settings and set one task per machine, as shown in the image above. During adaptive meshing it used both machines and computed the mesh passes per the convergence criteria (total memory used by 2 distributed processes: 9.2 GB). But before starting the sweep frequencies it stopped the simulation with a message similar to the following:
sweep frequencies require 5.9 GB memory per task and 11 GB memory in total.
But I have 12.6 GB of memory available combined. When I re-ran the design, the simulation completed, consuming 6.27 GB per sweep frequency and stating that it was switching to mixed precision to save memory. During the re-run, only one machine (the first in the list) was used for solving the sweep frequencies.
Why wasn't the second machine in the list used for solving sweep frequencies?
When the automatic settings were enabled, the simulation never completed, stating that more memory was needed.
So my other question is whether it is possible for HFSS to solve a sweep frequency that requires 12 GB of memory in a distributed manner, just as happened with the adaptive meshing process.
Thanks
Mahesh
-
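The memory arithmetic in the post above is worth spelling out: each distributed task must fit in the RAM of the single machine hosting it, so 2 tasks at 5.9 GB each need roughly 5.9 GB free on each machine, not merely 11.8 GB combined. A small sketch; the even 6.3 GB per-machine split is an assumption, since the thread only gives the 12.6 GB combined figure:

```shell
# Sketch of the sweep-memory check. Numbers from the thread: 5.9 GB
# per task, 2 tasks; the per-machine split of the 12.6 GB combined
# memory is an ASSUMED even split for illustration.
awk 'BEGIN {
  per_task = 5.9               # GB required per sweep task
  tasks    = 2
  printf "total needed: %.1f GB\n", per_task * tasks
  mach_a = 6.3; mach_b = 6.3   # assumed free memory per machine
  ok = (mach_a >= per_task && mach_b >= per_task) ? "yes" : "no"
  printf "each machine can host one task: %s\n", ok
}'
```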
December 30, 2020 at 2:19 pm
ANSYS_MMadore
Ansys Employee
Can you try solving C:\Program Files\AnsysEM\AnsysEM20.1\Win64\schedulers\diagnostics\Projects\HFSS\OptimTee-DiscreteSweep.aedt to test your setup? This will confirm whether the Sweep in Setup1 will distribute across both machines.

Thanks,
Matt
-
December 31, 2020 at 3:57 am
mahesh2444
Subscriber
Thanks, it's working with the direct solver. Would this distribution work the same way with the domain solver too?
Thanks
Mahesh
-
December 31, 2020 at 1:26 pm
ANSYS_MMadore
Ansys Employee
Yes, it should.
-
January 1, 2021 at 2:24 pm
mahesh2444
Subscriber
Hi,
I am performing reflectarray simulations using the domain solver. The reflectarray and the horn are separated by creating FE-BI boxes surrounding each of them. For a clear picture of my simulation setup, see this YouTube video.
Video summary:
HFSS's hybrid technique is implemented here. The entire simulation domain is divided into two FE-BI regions so that we can avoid meshing the space between the horn antenna and the reflectarray, reducing simulation time and memory consumption.
When I tried to simulate the setup described above, only the first machine in the list was used for adaptive meshing while the second machine remained idle. Eventually this causes an out-of-memory issue, leading to abrupt termination of the simulation.
How can I make both of my machines be used for adaptive meshing, as happened with the direct solver? Please help me.
Thanks
Mahesh
-
January 2, 2021 at 6:13 am
mahesh2444
Subscriber
Could you please look at my other question?
The issue with the domain solver still persists in the case of MPI computing.
Thanks
Mahesh
-
January 4, 2021 at 2:05 pm
mahesh2444
Subscriber
Hi,
My analysis setup is shown below along with the message displayed in the analysis configuration.
-
January 5, 2021 at 1:39 pm
ANSYS_MMadore
Ansys Employee
I have received this feedback. In short, that is the way HFSS works. DDM divides the whole problem only after an initial mesh has been created, which is why we see meshing on only one compute node. After meshing is completed, HFSS knows where to divide the objects for further analysis and solving. At a very high level, the objects are divided where the mesh is minimal. The objects are not divided by geometry parts but electrically, through the mesh. The mesh is generated by determination of the electric field, so this initial mesh is necessary on one node before the problem can be split.

Please let me know if this helps to explain the difference.

Thanks
Matt
-
January 5, 2021 at 1:50 pm
mahesh2444
Subscriber
I will check this and get back to you, Matt.
-
January 22, 2021 at 7:01 am
mahesh2444
Subscriber
Could you please look at my other question?
Thanks
Mahesh
- The topic ‘Facing issues while setting up Distributed memory simulations in ANSYS EDT HFSS 2020R1’ is closed to new replies.