Compiling with MPI and Intel 19.0

Compiling with MPI and Intel 19.0

Postby frodo » Mon Mar 04, 2019 12:13 am

Hello,

I'm having trouble running the abinit (8.10.2) executable after compiling it with the Intel 19.0 compilers and with MPI enabled (64-bit Intel).

If I compile with either the GNU tools (gcc, gfortran 7.3.0) or the Intel tools (icc, ifort), and without MPI enabled, "make check" shows that all fast tests succeed.

If I compile with the GNU tools (gcc, gfortran) and MPI enabled, "make check" runs abinit without mpirun and all fast tests succeed.

If I compile with the Intel tools (mpiicc, mpiifort) and MPI enabled, "make check" still runs abinit without mpirun and all fast tests fail with the following error:

forrtl: severe (24): end-of-file during read, unit 5, file /proc/19230/fd/0
Image              PC                Routine            Line        Source
libifcoremt.so.5   000014A9FFCA97B6  for__io_return        Unknown  Unknown
libifcoremt.so.5   000014A9FFCE7C00  for_read_seq_fmt      Unknown  Unknown
abinit             00000000015B9312  Unknown               Unknown  Unknown
abinit             0000000000409DEF  Unknown               Unknown  Unknown
abinit             0000000000409B22  Unknown               Unknown  Unknown
libc-2.27.so       000014A9FD6E1B97  __libc_start_main     Unknown  Unknown
abinit             0000000000409A0A  Unknown               Unknown  Unknown

If I compile with the Intel tools and MPI enabled and run "runtests.py fast --force-mpirun", then abinit is run with "mpirun -np 1" and all tests succeed.

My understanding is that executables compiled with mpiifort must be run with mpirun even if np=1.

It seems that runtests.py tries to run serial tests without mpirun. This works when abinit is compiled with the GNU tools, but not when it is compiled with the Intel tools.

Is this a known difference in behavior between MPI executables compiled with the GNU tools and those compiled with the Intel tools? If so, why doesn't runtests.py use mpirun for the Intel-compiled executable even for serial tests? Or am I doing something wrong?

Also, when compiling with Intel and MPI, setting with_mpi_incs and with_mpi_libs has no effect: they are not used in the compilation. I assume this is because mpiifort is a wrapper that supplies them itself. Is that correct?
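One way to check this, I believe, is to ask the wrapper which command it would actually run, without compiling anything:

Code: Select all
# prints the underlying ifort command line, including the MPI include and
# library flags the wrapper adds on its own
mpiifort -show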

I am using Intel Parallel Studio XE and source psxevars.sh to set up the environment before compiling/running with the Intel tools.
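Concretely, something like the following before configuring (the path shown is only the typical default install prefix; adjust it to your installation):

Code: Select all
# typical Parallel Studio XE 2019 location; adjust the prefix to your install
source /opt/intel/parallel_studio_xe_2019/psxevars.sh intel64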

Thanks for any suggestions.
frodo

Re: Compiling with MPI and Intel 19.0

Postby ebousquet » Thu Mar 07, 2019 6:24 pm

Dear Frodo,
You can type "./runtests.py -h" to see all the available options.
For the parallel tests there is "runtests.py paral -n XX", where XX is the number of CPUs you want to run the MPI tests on. See the other options for forcing a particular executable, etc.
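For example (the number of CPUs here is just illustrative):

Code: Select all
./runtests.py -h                    # list all available options
./runtests.py paral -n 4            # run the parallel tests on 4 CPUs
./runtests.py fast --force-mpirun   # run the fast tests through mpirun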
Best wishes,
Eric

Re: Compiling with MPI and Intel 19.0

Postby frodo » Thu Mar 07, 2019 8:52 pm

Hi Eric,

Thank you for the reply.

Yes, I know about runtests.py and use it all the time.

My question was why "make check" fails with the indicated error when abinit is compiled with Intel 19.0 (mpiifort), whereas it does not fail when abinit is compiled with GNU (mpif90). In both cases, "make check" invokes runtests.py, which runs abinit directly, i.e., without mpirun. This works for the GNU-compiled version but fails with the error I showed for the Intel-compiled version.

Let me try to make this clearer:

Compiling with GNU:

I compile abinit with the following config.ac:

Code: Select all
enable_debug="no"
enable_avx_safe_mode="no"
prefix="/usr/local/abinit"
enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
with_mpi_prefix="/usr"
enable_gpu="no"


Configure automatically sets CC=mpicc and FC=mpif90.

I run "abinit < test.stdin > test.sdtout 2> test.stderr". This executes normally.

Compiling with Intel 19.0:

I compile abinit with the following config.ac:

Code: Select all
enable_debug="no"
enable_avx_safe_mode="no"
prefix="/usr/local/abinit"
CC="mpiicc"
CXX="mpiicpc"
FC="mpiifort"
enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
enable_gpu="no"


I run "abinit < test.stdin > test.stdout 2> test.stderr". This fails with the following error:

Code: Select all
forrtl: severe (24): end-of-file during read, unit 5, file /proc/19230/fd/0
Image              PC                Routine            Line        Source
libifcoremt.so.5   000014A9FFCA97B6  for__io_return        Unknown  Unknown
libifcoremt.so.5   000014A9FFCE7C00  for_read_seq_fmt      Unknown  Unknown
abinit             00000000015B9312  Unknown               Unknown  Unknown
abinit             0000000000409DEF  Unknown               Unknown  Unknown
abinit             0000000000409B22  Unknown               Unknown  Unknown
libc-2.27.so       000014A9FD6E1B97  __libc_start_main     Unknown  Unknown
abinit             0000000000409A0A  Unknown               Unknown  Unknown


However, if I run "mpirun -np 1 abinit < test.stdin > test.stdout 2> test.stderr", it executes normally.

In other words, the Intel-compiled version has to be executed with mpirun, even when np=1, but the GNU-compiled version can be executed directly, without mpirun.

I know I can force runtests.py to use mpirun to invoke abinit (--force-mpirun), but why do I have to do this even for np=1 when testing an Intel-compiled executable, while I do not have to do this (i.e., don't need --force-mpirun) for a GNU-compiled executable?

What is the reason for this difference in behavior?
frodo

Re: Compiling with MPI and Intel 19.0

Postby jbeuken » Fri Mar 08, 2019 12:07 pm

Hi,

Quickly: I don't reproduce the behavior you observe.

Code: Select all
[root@yquem fast_t01]# mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.4.196 Build 20170411
Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.
FOR NON-COMMERCIAL USE ONLY


Code: Select all
../../../src/98_main/abinit < t01.stdin > OUT

tail t01.out
- Comment : the original paper describing the ABINIT project.
- DOI and bibtex : see https://docs.abinit.org/theory/bibliography/#gonze2002
-
- Proc.   0 individual time (sec): cpu=          0.1  wall=          0.1

================================================================================

 Calculation completed.
.Delivered   6 WARNINGs and  10 COMMENTs to log file.
+Overall time at end (sec) : cpu=          0.1  wall=          0.1



My .ac file:

Code: Select all
CC="mpiicc"
CXX="mpiicpc"
FC="mpiifort"
FCFLAGS_EXTRA="-g -O3 -align all"

enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
with_trio_flavor=none
with_dft_flavor=none

#I_MPI_ROOT=/opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/
with_mpi_incs="-I${I_MPI_ROOT}/include64"
with_mpi_libs="-L${I_MPI_ROOT}/lib64 -lmpi"

with_fft_flavor="fftw3-mkl"
with_fft_incs="-I${MKLROOT}/include"
with_fft_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"
with_linalg_flavor="mkl"
with_linalg_incs="-I${MKLROOT}/include"
with_linalg_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"


jmb

Re: Compiling with MPI and Intel 19.0

Postby frodo » Sun Mar 10, 2019 8:02 pm

Hi,

Thanks for the reply.

My mpiifort:

Code: Select all
mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.1.144 Build 20181018
Copyright (C) 1985-2018 Intel Corporation.  All rights reserved.



I recompiled using exactly your config.ac:

Code: Select all
CC="mpiicc"
CXX="mpiicpc"
FC="mpiifort"
FCFLAGS_EXTRA="-g -O3 -align all"

enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
with_trio_flavor=none
with_dft_flavor=none

#I_MPI_ROOT=/opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/
with_mpi_incs="-I${I_MPI_ROOT}/include64"
with_mpi_libs="-L${I_MPI_ROOT}/lib64 -lmpi"

with_fft_flavor="fftw3-mkl"
with_fft_incs="-I${MKLROOT}/include"
with_fft_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"
with_linalg_flavor="mkl"
with_linalg_incs="-I${MKLROOT}/include"
with_linalg_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"


A comment on this: apparently Intel 19 changed the directory layout for MPI. The MPI libraries are now in ${I_MPI_ROOT}/intel64/lib, not ${I_MPI_ROOT}/lib64, and libmpi itself is in a subdirectory of that, ${I_MPI_ROOT}/intel64/lib/release. See: https://github.com/spack/spack/issues/9913
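If one wanted to spell out the new layout in the .ac file, I suppose the options would look something like this (untested, and apparently ignored anyway when the mpiicc/mpiifort wrappers are used, see the link line below):

Code: Select all
# Intel MPI 2019 layout: intel64/include and intel64/lib{,/release}
# instead of the old include64 and lib64 directories
with_mpi_incs="-I${I_MPI_ROOT}/intel64/include"
with_mpi_libs="-L${I_MPI_ROOT}/intel64/lib/release -L${I_MPI_ROOT}/intel64/lib -lmpifort -lmpi"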

Also, with_mpi_incs and with_mpi_libs are not used. Here are the actual compile and link lines emitted by make:

Code: Select all
mpiifort -DHAVE_CONFIG_H -I. -I../../../src/98_main -I../..  -I../../src/incs -I../../../src/incs -I/home/dierker/abinit-8.10.2/build/fallbacks/exports/include  -I/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/include -I/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/include   -free -module /home/dierker/abinit-8.10.2/build/src/mods   -O3 -g -extend-source -noaltparam -nofpscomp -g -O3 -align all   -g -extend-source -noaltparam -nofpscomp -g -O3 -align all  -c -o abinit-abinit.o `test -f 'abinit.F90' || echo '../../../src/98_main/'`abinit.F90
mpiifort -free -module /home/dierker/abinit-8.10.2/build/src/mods   -O3 -g -extend-source -noaltparam -nofpscomp -g -O3 -align all   -g -extend-source -noaltparam -nofpscomp -g -O3 -align all  -static-intel -static-libgcc  -static-intel -static-libgcc  -o abinit abinit-abinit.o -static-intel -static-libgcc  ../../src/95_drive/lib95_drive.a ../../src/94_scfcv/lib94_scfcv.a ../../src/79_seqpar_mpi/lib79_seqpar_mpi.a ../../src/78_effpot/lib78_effpot.a ../../src/78_eph/lib78_eph.a ../../src/77_ddb/lib77_ddb.a ../../src/77_suscep/lib77_suscep.a ../../src/72_response/lib72_response.a ../../src/71_bse/lib71_bse.a ../../src/71_wannier/lib71_wannier.a ../../src/70_gw/lib70_gw.a ../../src/69_wfdesc/lib69_wfdesc.a ../../src/68_dmft/lib68_dmft.a  ../../src/68_recursion/lib68_recursion.a ../../src/68_rsprc/lib68_rsprc.a  ../../src/67_common/lib67_common.a ../../src/66_vdwxc/lib66_vdwxc.a ../../src/66_wfs/lib66_wfs.a ../../src/66_nonlocal/lib66_nonlocal.a ../../src/65_paw/lib65_paw.a  ../../src/64_psp/lib64_psp.a ../../src/62_iowfdenpot/lib62_iowfdenpot.a ../../src/62_wvl_wfs/lib62_wvl_wfs.a ../../src/62_poisson/lib62_poisson.a ../../src/62_cg_noabirule/lib62_cg_noabirule.a ../../src/62_ctqmc/lib62_ctqmc.a ../../src/61_occeig/lib61_occeig.a ../../src/59_ionetcdf/lib59_ionetcdf.a ../../src/57_iovars/lib57_iovars.a ../../src/57_iopsp_parser/lib57_iopsp_parser.a ../../src/56_recipspace/lib56_recipspace.a ../../src/56_xc/lib56_xc.a ../../src/56_mixing/lib56_mixing.a ../../src/56_io_mpi/lib56_io_mpi.a ../../src/55_abiutil/lib55_abiutil.a ../../src/54_spacepar/lib54_spacepar.a ../../src/53_ffts/lib53_ffts.a  ../../src/52_fft_mpi_noabirule/lib52_fft_mpi_noabirule.a ../../src/51_manage_mpi/lib51_manage_mpi.a ../../src/49_gw_toolbox_oop/lib49_gw_toolbox_oop.a ../../src/46_diago/lib46_diago.a ../../src/45_xgTools/lib45_xgTools.a ../../src/45_geomoptim/lib45_geomoptim.a ../../src/44_abitypes_defs/lib44_abitypes_defs.a ../../src/44_abitools/lib44_abitools.a ../../src/43_wvl_wrappers/lib43_wvl_wrappers.a ../../src/43_ptgroups/lib43_ptgroups.a ../../src/42_parser/lib42_parser.a ../../src/42_nlstrain/lib42_nlstrain.a ../../src/42_libpaw/lib42_libpaw.a ../../src/41_xc_lowlevel/lib41_xc_lowlevel.a ../../src/41_geometry/lib41_geometry.a ../../src/32_util/lib32_util.a ../../src/29_kpoints/lib29_kpoints.a ../../src/28_numeric_noabirule/lib28_numeric_noabirule.a ../../src/27_toolbox_oop/lib27_toolbox_oop.a ../../src/21_hashfuncs/lib21_hashfuncs.a ../../src/18_timing/lib18_timing.a ../../src/17_libtetra_ext/lib17_libtetra_ext.a ../../src/16_hideleave/lib16_hideleave.a  ../../src/14_hidewrite/lib14_hidewrite.a ../../src/12_hide_mpi/lib12_hide_mpi.a ../../src/11_memory_mpi/lib11_memory_mpi.a ../../src/10_dumpinfo/lib10_dumpinfo.a ../../src/10_defs/lib10_defs.a ../../src/02_clib/lib02_clib.a  -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64 -Wl,--start-group  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64 -Wl,--start-group  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl -lrt -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib/release -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib -L/opt/intel/clck/2019.0/lib/intel64 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64 
-L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/../lib/ -L/usr/lib/gcc/x86_64-linux-gnu/7/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib/ -L/lib/x86_64-linux-gnu/ -L/lib/../lib64 -L/lib/../lib/ -L/usr/lib/x86_64-linux-gnu/ -L/usr/lib/../lib/ -L/opt/intel/clck/2019.0/lib/intel64/ -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../ -L/lib64 -L/lib/ -L/usr/lib -L/usr/lib/i386-linux-gnu -lmpifort -lmpi -ldl -lrt -lpthread -lifport -lifcoremt -limf -lsvml -lm -lipgo -lirc -lirc_s -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib/release -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib -L/opt/intel/clck/2019.0/lib/intel64 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/../lib/ -L/usr/lib/gcc/x86_64-linux-gnu/7/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib/ -L/lib/x86_64-linux-gnu/ -L/lib/../lib64 -L/lib/../lib/ -L/usr/lib/x86_64-linux-gnu/ -L/usr/lib/../lib/ -L/opt/intel/clck/2019.0/lib/intel64/ -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../ -L/lib64 -L/lib/ -L/usr/lib -L/usr/lib/i386-linux-gnu -lmpifort -lmpi -ldl -lrt -lpthread -lifport -lifcoremt -limf -lsvml -lm -lipgo -lirc -lirc_s


It appears that the mpiifort wrapper has set the libraries to link against, and that the with_mpi_libs specified in the config.ac file has been ignored.

Here is what I get when I run abinit directly:

Code: Select all
../../../src/98_main/abinit < t01.stdin > OUT
forrtl: severe (24): end-of-file during read, unit 5, file /proc/33337/fd/0
Image              PC                Routine            Line        Source             
libifcoremt.so.5   00007F49529B97B6  for__io_return        Unknown  Unknown
libifcoremt.so.5   00007F49529F7C00  for_read_seq_fmt      Unknown  Unknown
abinit             00000000018A6119  Unknown               Unknown  Unknown
abinit             0000000000407C49  Unknown               Unknown  Unknown
abinit             0000000000407942  Unknown               Unknown  Unknown
libc-2.27.so       00007F49503F1B97  __libc_start_main     Unknown  Unknown
abinit             000000000040782A  Unknown               Unknown  Unknown

tail -25 OUT
  ABINIT 8.10.2
 
  Give name for formatted input file:


Here is what I get when I run abinit via mpirun:

Code: Select all
mpirun -np 1 ../../../src/98_main/abinit < t01.stdin > OUT

tail -25 OUT
- Computational Materials Science 25, 478-492 (2002). http://dx.doi.org/10.1016/S0927-0256(02)00325-7
- Comment : the original paper describing the ABINIT project.
- DOI and bibtex : see https://docs.abinit.org/theory/bibliography/#gonze2002
 Proc.   0 individual time (sec): cpu=          0.1  wall=          0.1
 
 Calculation completed.
.Delivered   6 WARNINGs and   8 COMMENTs to log file.

--- !FinalSummary
program: abinit
version: 8.10.2
start_datetime: Sun Mar 10 09:32:31 2019
end_datetime: Sun Mar 10 09:32:31 2019
overall_cpu_time:           0.1
overall_wall_time:           0.1
exit_requested_by_user: no
timelimit: 0
pseudos:
    H   : eb3a1fb3ac49f520fd87c87e3deb9929
usepaw: 0
mpi_procs: 1
omp_threads: 1
num_warnings: 6
num_comments: 8
...


So maybe this is a difference between Intel 19 and Intel 17? Unfortunately, I don't have Intel 17 installed on my system to test that.
frodo

Re: Compiling with MPI and Intel 19.0

Postby frodo » Sun Mar 10, 2019 9:04 pm

I added -traceback to FCFLAGS_EXTRA and recompiled, and now I get the file and line number where abinit is failing.
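The only change to the .ac file is the extra flag (same flags as before otherwise):

Code: Select all
FCFLAGS_EXTRA="-g -O3 -align all -traceback"


With that, running abinit directly now gives: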

Code: Select all
../../../src/98_main/abinit < t01.stdin > OUT-traceback
forrtl: severe (24): end-of-file during read, unit 5, file /proc/26824/fd/0
Image              PC                Routine            Line        Source             
libifcoremt.so.5   00007F0847FAC7B6  for__io_return        Unknown  Unknown
libifcoremt.so.5   00007F0847FEAC00  for_read_seq_fmt      Unknown  Unknown
abinit             000000000187BC1F  m_dtfil_mp_iofn1_        1363  m_dtfil.F90
abinit             0000000000407C49  MAIN__                    251  abinit.F90
abinit             0000000000407942  Unknown               Unknown  Unknown
libc-2.27.so       00007F08459E4B97  __libc_start_main     Unknown  Unknown
abinit             000000000040782A  Unknown               Unknown  Unknown


Line 1363 in m_dtfil.F90 is just a straightforward read (the preceding write succeeds, as you can see in the OUT file I posted):

Code: Select all
!  Read name of input file (std_in):
   write(std_out,*,err=10,iomsg=errmsg)' Give name for formatted input file: '
   read(std_in, '(a)',err=10,iomsg=errmsg ) filnam(1)
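
To narrow down whether this is specific to abinit or a general property of executables built with this mpiifort, a tiny stand-alone test along the following lines might help (just a sketch, not part of abinit):

Code: Select all
cat > stdin_test.f90 <<'EOF'
program stdin_test
  use mpi
  implicit none
  integer :: ierr
  character(len=256) :: line
  call MPI_Init(ierr)
  ! same pattern as m_dtfil.F90: prompt on stdout, then read a line from stdin
  write(*,*) 'Give name for formatted input file:'
  read(*,'(a)') line
  write(*,*) 'Read back: ', trim(line)
  call MPI_Finalize(ierr)
end program stdin_test
EOF
mpiifort -o stdin_test stdin_test.f90
echo t01.in | ./stdin_test                # direct run (the case that fails for abinit)
echo t01.in | mpirun -np 1 ./stdin_test   # through mpirun


If the direct run fails the same way, the problem presumably lies in how the Intel 19 MPI runtime handles stdin when a program is started without mpirun, rather than in abinit itself.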
frodo

Re: Compiling with MPI and Intel 19.0

Postby frodo » Mon Mar 11, 2019 5:51 am

I tried upgrading from Intel Parallel Studio XE Cluster Edition 2019 Update 1 to Intel Parallel Studio XE Cluster Edition 2019 Update 3.

Now:

Code: Select all
mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.3.199 Build 20190206
Copyright (C) 1985-2019 Intel Corporation.  All rights reserved.


I also tried compiling with enable_mpi_io="no".

Neither change made any difference. I still get the forrtl severe (24) error unless I run abinit with mpirun.
frodo

Re: Compiling with MPI and Intel 19.0

Postby ebousquet » Wed Mar 13, 2019 12:51 pm

Hi Frodo,
So, unless somebody else can comment on this, it sounds like we don't really know what's wrong here. Since it works when called with mpirun -np 1, just run it like that in the meantime (it will probably be the same for all the other executables, e.g. anaddb, etc.). Otherwise, recompile a separate executable in sequential mode.
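In practice that means something like (commands as in your earlier posts):

Code: Select all
mpirun -np 1 ../../../src/98_main/abinit < t01.stdin > OUT   # single run
./runtests.py fast --force-mpirun                            # test suite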
Best wishes,
Eric

