Post Processing APDL Script on the Cloud

Hello,

We have a customer who uploaded a job containing an APDL post-processing script (a .mac file), and he found that the post-processing step does not complete on the Cloud. For their application, the APDL post-processing script can take up to several hours, and they would like to leverage the computing power of the Cloud for this purpose.

To be exact, the goal is that the .mac file can be submitted, read, and executed on the Cloud so that the downloaded results have already been processed by the script.


Comments

  • Hello,

    Do you happen to have the project, or the Mechanical APDL output file, that you can share? Out of curiosity, why do they think the Cloud would be faster for post-processing?

    We will be happy to help.

    Thanks.

  • Hi,

    I'll have to ask the customer. The reason they want this is that they post-process the results (recalculating stresses at the nodal level) with their own .mac script, and that can be quite time-consuming for certain models (say, more than 5 hours). If the Cloud is capable of carrying out the post-processing using distributed technology, that might help.

    I'll see if the job and .mac file are available to share. Thanks for the reply on this topic.

    Thanks.

  • Hello,

    Please have the customer review the following help section: Ansys Help -> Mechanical APDL -> Parallel Processing Guide -> Chapter 4 Using Distributed ANSYS. 

    In particular, see the section on 'Distributed ANSYS Behavior'. Only certain APDL commands are communicated from the 'master' process to the other ('slave') core processes. During the solve, the calculation of derived results (strains, stresses, etc.) is done by each core process, but post-processing runs in shared-memory mode even when the solution was obtained in distributed-parallel mode. So no, the Ansys Cloud will not carry out their custom post-processing calculations in distributed-parallel mode.
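    To illustrate why this matters, here is a minimal hypothetical sketch of the kind of /POST1 macro in question (the file name, quantities, and loop are assumptions for illustration, not the customer's actual script). Everything inside /POST1 executes on the master process only, regardless of how many cores the solve used:

    ```
    ! post_sketch.mac -- hypothetical post-processing sketch, not the customer's script
    /POST1                      ! enter the general postprocessor (shared-memory only)
    SET,LAST                    ! read the last result set
    *GET,nmax,NODE,0,COUNT      ! number of selected nodes
    *DIM,sint_tab,TABLE,nmax    ! table to hold the recalculated values
    n = 0
    *DO,i,1,nmax
      n = NDNEXT(n)             ! next selected node number
      *GET,s1,NODE,n,S,1       ! first principal stress at the node
      *GET,s3,NODE,n,S,3       ! third principal stress at the node
      sint_tab(i) = s1 - s3    ! e.g. recompute a stress intensity per node
    *ENDDO
    FINISH
    ```

    A node-by-node loop like this is serial by nature in /POST1, so throwing more Cloud cores at it will not shorten the 5-plus-hour runtime.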

    Now, maybe the Ansys Cloud compute-node hardware is...'better' than the customer's hardware, so that it runs these post-processing calculations faster in shared-memory parallel mode. You can compare the customer's hardware to the Azure compute nodes used (let me know if you don't have the link to this), but unless their hardware is lacking (old, a slow hard drive, insufficient RAM, etc.) I'd not expect much difference in running a post-processing macro.

    I hope this helps.

    Thanks.
