Ansys Products

Ansys2021R2 ansys212 seg faults immediately on RHEL8.2

    • payerle

      I am a sysadmin trying to install Ansys2021R2 (fluent, etc) on a RHEL8.2 system. The installation proceeds without error, but after installation when I invoke the ansys212 command I immediately get a segfault. E.g.

      login-3:~: ansys212
      forrtl: severe (174): SIGSEGV, segmentation fault occurred
      Image             PC               Routine           Line       Source
                        00007F2B19F7D522 for__signal_handl Unknown    Unknown
      libpthread-2.28.s 00007F2AE0D39DD0 Unknown           Unknown    Unknown
                        00007F2ADE55C2A6 strtok_r          Unknown    Unknown
                        00007F2ADD773591 __I_MPI___intel_s Unknown    Unknown
                        00007F2ADD6358F5 Unknown           Unknown    Unknown
                        00007F2ADD638604 Unknown           Unknown    Unknown
                        00007F2ADD5DC14A Unknown           Unknown    Unknown
                        00007F2ADD5D9D68 PMPI_Init_thread  Unknown    Unknown
                        00007F2AF6F623D7 cMPI_Init_thread  Unknown    Unknown
                        00007F2AF6988C72 cansMPI_Init_thre Unknown    Unknown
                        00007F2AF6989474 ansmpi_cinit_     Unknown    Unknown
                        00007F2AF698963F ansmpiinitialize_ Unknown    Unknown
                        00007F2AF6C48975 Unknown           Unknown    Unknown
      ansys.e           000000000043814E Unknown           Unknown    Unknown
      ansys.e           0000000000431068 MAIN__            Unknown    Unknown
      ansys.e           0000000000430F62 main              Unknown    Unknown
                        00007F2ADE4F56A3 __libc_start_main Unknown    Unknown
      ansys.e           0000000000430E74 Unknown           Unknown    Unknown
      forrtl: severe (174): SIGSEGV, segmentation fault occurred
      Image             PC               Routine           Line       Source
                        00007FC5E4BC0522 for__signal_handl Unknown    Unknown
      libpthread-2.28.s 00007FC5AB97CDD0 Unknown           Unknown    Unknown
                        00007FC5A919F2A6 strtok_r          Unknown    Unknown
                        00007FC5A83B6591 __I_MPI___intel_s Unknown    Unknown
                        00007FC5A82788F5 Unknown           Unknown    Unknown
                        00007FC5A827B604 Unknown           Unknown    Unknown
                        00007FC5A821F14A Unknown           Unknown    Unknown
                        00007FC5A821CD68 PMPI_Init_thread  Unknown    Unknown
                        00007FC5C1BA53D7 cMPI_Init_thread  Unknown    Unknown
                        00007FC5C15CBC72 cansMPI_Init_thre Unknown    Unknown
                        00007FC5C15CC474 ansmpi_cinit_     Unknown    Unknown
                        00007FC5C15CC63F ansmpiinitialize_ Unknown    Unknown
                        00007FC5C188B975 Unknown           Unknown    Unknown
      ansys.e           000000000043814E Unknown           Unknown    Unknown
      ansys.e           0000000000431068 MAIN__            Unknown    Unknown
      ansys.e           0000000000430F62 main              Unknown    Unknown
                        00007FC5A91386A3 __libc_start_main Unknown    Unknown
      ansys.e           0000000000430E74 Unknown           Unknown    Unknown

      I have verified all prereqs for 64 bit RHEL8 are installed, as per

      This is basically the same error as reported in for 2021R1. Unlike at that time, we have since upgraded to the latest license in our license server, so it should not be a problem this time.

      I get the same error when running ansys212 from the directory containing the ansys212 symlink to mapdl as when running from my home directory.

      Fluent will start up (in either graphical or non-graphical mode), although I am a sysadmin, not an Ansys user, so I have not yet been able to verify that it is actually working.

    • Mike Rife
      Ansys Employee
      Let's take MPI out of the equation and see what happens. From your home directory, issue:
      ansys212 -smp -np 1
      What happens? Also, can you confirm that you checked tables 2.1 and 2.21 (possibly needed libraries)? Is Intel MPI installed on the system?

    • payerle
      As previously mentioned, I had ensured that all prerequisite libraries listed in the 64-bit RHEL8 column of tables 2.1 through 2.27, inclusive, were installed (with the exception of libcurl-minimal from table 2.16, which conflicted with the already installed libcurl; I assumed the latter satisfied the requirement), and the behavior shown above was with all of those libraries installed.
      When I run
      ansys212 -smp
      (with or without the additional -np 1 argument), the command starts up (I get a welcome screen, am prompted to agree to the licensing boilerplate, and then get a "BEGIN:" prompt), so the issue does appear to be MPI related.
      As for Intel MPI: this system is part of an HPC cluster with a large software library controlled using the module command. Intel Parallel Studio 2020.1, which includes Intel MPI, is installed on the cluster, but it is not in a "standard" location (i.e. not under /bin, /lib, /usr/bin, /usr/lib, etc.) and so should not normally be found unless the "intel" module is loaded. Similarly, we also have OpenMPI 3.1.5 installed, again in a non-standard location that should only be "found" if the appropriate module is loaded. (We also have several other proprietary packages, like Matlab, which may include their own MPI libraries; these too are installed to non-standard locations, and their MPI libraries should not be found unless their corresponding module is loaded, and probably not even then.)
      The same segfault occurs when running ansys212 without the -smp flag regardless of whether the intel module is loaded.
      I was assuming that Ansys ships with its own version of whatever MPI libraries it requires. Running strace on the failing ansys212 command shows that it is loading the Ansys-provided library, apparently from /software/ansys/21.2/Linux64/v212/commonfiles/MPI/Intel/2018.3.222/linx64/lib/release/ (although I also see mention of /software/ansys/21.2/Linux64/v212/commonfiles/CAD/Acis/linx64/).
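For reference, the kind of check I used: ldd shows which shared libraries the loader would resolve for a binary, and strace shows what actually gets opened at startup. A generic sketch follows; the ansys.e path in the comment is a guess based on our install layout, and the runnable demo uses /bin/ls and libc only so it works anywhere:

```shell
# Print the shared objects matching a pattern that a binary resolves
# at load time. Demonstrated on /bin/ls with libc (always present);
# for the real check, substitute the ansys.e binary and the pattern
# "mpi", e.g. (path is a guess from our site layout):
#   resolved_libs /software/ansys/21.2/Linux64/v212/ansys/bin/linx64/ansys.e mpi
resolved_libs() {
  # $1 = path to executable, $2 = case-insensitive library name pattern
  ldd "$1" | grep -i "$2"
}

resolved_libs /bin/ls libc
```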

    • Mike Rife
      Ansys Employee
      Thanks for confirming that all the requirement tables were reviewed. Just a heads up that I am more product support than installation, so if I say something really odd, just let me know. I may need to defer to on this (he was replying to your other forum post on the same topic, different version). But can you post the environment (all the environment variables)? If you don't want to post it, can you email it to me directly?
    • payerle
      This is most of my environment (it would not let me post the entire environment due to size restrictions). I have replaced some values with XXXXXX to hide usernames/hostnames/etc.; I do not expect that to be a problem for you, but let me know if there is a specific concern.
      CPU_COMPAT_MICROARCHS=ivybridge sandybridge westmere nehalem core2 x86_64
      LESSOPEN=||/usr/bin/ %s
      TEXEDIT=vim +%d %s

    • Mike Rife
      Ansys Employee
      I think I found something. We have an internal KM on this, and there are two options, but the Intel link is stale now, so I don't yet know what the first option was. The second is to edit the following file (substitute the actual install path, of course... and it looks like you don't use the ansys_inc folder either; I'll keep it there in case others run into the same issue):
      Change line 2462 from:
      setenv intel_mpi_version "2018.3.222"
      to:
      setenv intel_mpi_version "2019.9.304"
      Save the file. Then from your home folder set these environment variables and launch mapdl:
      export LD_LIBRARY_PATH=/ansys_inc/v212/commonfiles/MPI/Intel/2019.9.304/linx64/libfabric/lib
      export I_MPI_FABRICS=shm
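Collected into a small wrapper script for convenience (the /ansys_inc prefix is the standard install location and is a placeholder here; substitute your actual path):

```shell
#!/bin/sh
# Apply the Intel MPI 2019.9 workaround before launching MAPDL.
# ANSYS_PREFIX is a placeholder; substitute the real install location.
ANSYS_PREFIX=/ansys_inc

# Point the loader at the libfabric shipped with Intel MPI 2019.9,
# prepending so it wins over anything already on the path.
export LD_LIBRARY_PATH="$ANSYS_PREFIX/v212/commonfiles/MPI/Intel/2019.9.304/linx64/libfabric/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export I_MPI_FABRICS=shm

# Hand off to MAPDL if it is on PATH (guarded so the script is
# harmless on a machine without Ansys installed).
if command -v ansys212 >/dev/null 2>&1; then
  exec ansys212 "$@"
fi
```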

      Please test this and let us know what happens.
    • payerle
      Omitted from the previous list are:
      LS_COLORS (which is some 1700 chars on its own)
      some kerberos variables (KRB5CCNAME,KPRINCIPAL)
      some ssh variables (SSH_AUTH_SOCK, SSH_CLIENT, SSH_CONNECTION, SSH_TTY)
      some XDG variables (XDG_SESSION_ID,XDG_RUNTIME_DIR)
      These were omitted to reduce space and because I did not think you would find them relevant. Let me know if any are needed.
    • Mike Rife
      Ansys Employee
      Just in case you did not see it because I was posting at the same time you were... please see my prior entry.

    • payerle
      Thanks for the poke --- I did indeed miss your previous entry.
      I am confused by the ansys_inc in the paths; I do not see any ansys_inc underneath our Ansys installation. Are we supposed to have such? Or am I just supposed to ignore that path component?
      I have found an anssh.ini file under $PREFIX/v212/ansys/bin that matched the description you gave. I have edited as requested, set the env variables as requested (removing the ansys_inc component in LD_LIBRARY_PATH), and the ansys212 command now starts up and leaves me at the BEGIN: prompt. As mentioned earlier, I am not an ansys user and merely installing on behalf of my users, but at least now I am comfortable letting my users "kick the tires".
      The anssh.ini file is clearly in csh notation; I do not see an equivalent Bourne shell file, so I am assuming that file is used regardless of shell (especially since the commands provided to set vars before running ansys212 were in Bourne format). If I am mistaken, please point me to the appropriate files; I need to support users with both tcsh and bash as their default shells. (The env var settings for LD_LIBRARY_PATH, etc. will be added to the module file, so they will support either; I am only concerned re the setenv intel_mpi_version.)
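For reference, the modulefile route does cover both shells, since Environment Modules emits csh or sh syntax as appropriate for the invoking shell. A minimal Tcl sketch of what I have in mind (the prefix matches our site layout and is an assumption; adjust for the actual install):

```tcl
#%Module1.0
## Sketch of a modulefile for Ansys 2021R2 with the MPI workaround.
## The prefix below is our site's layout, not a standard location.
set prefix /software/ansys/21.2/Linux64
prepend-path PATH            $prefix/v212/ansys/bin
prepend-path LD_LIBRARY_PATH $prefix/v212/commonfiles/MPI/Intel/2019.9.304/linx64/libfabric/lib
setenv       I_MPI_FABRICS   shm
```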
      Thanks again for all of your assistance on this matter. I am very satisfied with the support I received on this ticket. Assuming no surprises to my questions above (i.e. I can just ignore ansys_inc path component, and nothing additional to support both bash and tcsh), please feel free to close this ticket.
    • payerle
      One additional question:
      I just realized that setting I_MPI_FABRICS=shm means this will only work for MPI tasks that are all on the same node. As this is an HPC cluster I was installing Ansys on, I just wanted to confirm whether this means it does not support running a single job across multiple nodes.
    • Mike Rife
      Ansys Employee
      It is a standard to install to /ansys_inc and I kept that there just in case others found this post. I'll use your $PREFIX going forward. MAPDL does support running a single job across multiple compute nodes. When I was posting I was overly focused on getting the paths right and it did not dawn on me what the consequence of that environment variable would be...but hey the test worked so we are getting somewhere!
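For what it's worth, with Intel MPI 2019 the fabric variable can name both an intra-node and an inter-node fabric, so a multi-node-capable setting would look something like the following. shm:ofi is Intel's documented default for the 2019 series; treat this as something to test on your cluster, not a confirmed fix:

```shell
# shm     = shared memory only: all MPI ranks must be on one node.
# shm:ofi = shared memory within a node, libfabric (OFI) between
#           nodes, which is what multi-node runs need.
export I_MPI_FABRICS=shm:ofi
```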
      The internal solution that I had found has been updated. There are two options; the first is a small C file that needs to be compiled, with an 'export' used to use the library. The second option is 5 edits to that anssh.ini file. Which do you prefer?
    • payerle
      The ansys_inc matter is not an issue --- I just wanted to ensure that I did not do something wrong in the installation (although there are not that many knobs to tweak:)
      Either solution is fine with me. The edits to anssh.ini sounds easier, especially as we already have edited that file once.
    • Mike Rife
      Ansys Employee
      You will need to edit this on every install - I hope the cluster has a network install rather than each compute node having it installed locally. The internal answer was for version 2021 R1, so I went through the instructions and changed them to 2021 R2...

      Option #2:
      To try the Intel MPI 2019.9 included in the 2021 R2 installation, back up and edit the {installed_path}/v212/ansys/bin/anssh.ini file.
      1. Uncomment lines #2090 and #2091 from:
      ##setenv I_MPI_VAR_CHECK_SPELLING "0"
      ##setenv FI_PROVIDER_PATH "${I_MPI_ROOT}/libfabric/lib/prov"
      to:
      setenv I_MPI_VAR_CHECK_SPELLING "0"
      setenv FI_PROVIDER_PATH "${I_MPI_ROOT}/libfabric/lib/prov"

      2. Change line #2462 from:
      setenv intel_mpi_version "2018.3.222"
      to:
      setenv intel_mpi_version "2019.9.304"

      3. Comment out lines #2117-2119 from:
      if [ -z "${I_MPI_DYNAMIC_CONNECTION}" ]; then
      setenv I_MPI_DYNAMIC_CONNECTION "no"
      fi
      to:
      ## if [ -z "${I_MPI_DYNAMIC_CONNECTION}" ]; then
      ## setenv I_MPI_DYNAMIC_CONNECTION "no"
      ## fi
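Since the same edits must be applied on every install, they can be scripted. A sed sketch follows; it is pattern-based rather than line-number-based so minor shifts between releases don't matter, and edits that are already in place (as the uncomment step apparently is in some 2021 R2 installs) simply don't match. Verify the result by hand and keep the .bak that sed creates:

```shell
# Sketch: replay the anssh.ini edits from this thread with sed.
# Patterns match the exact lines quoted above; verify afterwards.
patch_anssh() {
  # $1 = path to anssh.ini; sed -i.bak keeps a backup copy
  sed -i.bak \
    -e 's/^##\(setenv I_MPI_VAR_CHECK_SPELLING\)/\1/' \
    -e 's/^##\(setenv FI_PROVIDER_PATH\)/\1/' \
    -e 's/^\(setenv intel_mpi_version\) "2018\.3\.222"/\1 "2019.9.304"/' \
    -e '/^if \[ -z "${I_MPI_DYNAMIC_CONNECTION}" ]; then/,/^fi/ s/^/## /' \
    "$1"
}
```

Usage would be `patch_anssh {installed_path}/v212/ansys/bin/anssh.ini` on each install.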

    • payerle
      FYI a) for 2021R2 the lines from (1) (I_MPI_VAR_CHECK_SPELLING, et al.) were already uncommented. (Presumably that was a change from 2021R1.)
      b) (2) (intel_mpi_version) is the edit suggested previously
      c) The rationale behind this edit confuses me. The lines that you requested be commented out were basically turning the variable I_MPI_DYNAMIC_CONNECTION off (setting it to "no") if it had not been set. But in a previous posting you instructed us to set I_MPI_DYNAMIC_CONNECTION to 0 (which, according to the Intel MPI documentation, is equivalent to "no"). So if we set I_MPI_DYNAMIC_CONNECTION (e.g. in our modulefile), those lines are basically no-ops, and if it was not set, those lines set it to what you requested it be set to previously.
      So while I made change (c), I do not see why it should matter --- it does nothing if I_MPI_DYNAMIC_CONNECTION was set, and if it is not set, it sets it to the value you recommended.
    • Mike Rife
      Ansys Employee
      Well, I just found this in the v2021 R2 Help -> Release Notes for Mechanical APDL. I'm not sure if 'failure to start' is the same as 'fails to start with a segmentation error'. And sorry about not finding this earlier. But if possible let's try this: Failure to Launch When Using DMP on Linux
      The program may fail to launch when using DMP on some Linux systems running CentOS/RHEL 8.1, 8.2, or 8.3 or SLES 15.

      Workaround: Issue this command to preload an additional library:

      setenv LD_PRELOAD /ansys_inc/v211/commonfiles/MPI/Intel/2018.3.222/linx64/lib/
      Modify the command syntax as needed if your Linux shell or installation location differ.
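The Bourne-shell (bash) form of that csh command would be along these lines; /path/to/the/library.so is a placeholder, since the exact file name from the release notes did not survive the paste:

```shell
# bash/sh equivalent of the csh "setenv LD_PRELOAD ..." above.
# /path/to/the/library.so is a placeholder, not the actual file name
# from the release notes; substitute your real install path and file.
export LD_PRELOAD=/path/to/the/library.so
```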
    • payerle
      My last comment was just an FYI.
      I have not looked at the above suggestion as it does not appear to be related to an issue we are currently experiencing (and I suspect we would need the from the 2019 version of intel since the previous suggestions instructed us to change the Intel version).
      At this point, things seem to be working and you can close this ticket. I have some users testing Ansys 2021R2 on the system, and they are able to do real tests (as opposed to my tests to just see if the programs start up). If additional issues are encountered that require assistance, I will open a new ticket.