help MPI support on dual CPU server

option, parallelism,...

Moderators: fgoudreault, mcote

Forum rules
Please have a look at ~abinit/doc/config/build-config.ac in the source package for detailed and up-to-date information about the configuration of Abinit 8 builds.
For a video explanation on how to build Abinit 7.x for Linux, please go to: http://www.youtube.com/watch?v=DppLQ-KQA68.
IMPORTANT: when an answer solves your problem, please check the little green V-like button on its upper-right corner to accept it.
Locked
timber
Posts: 6
Joined: Fri Dec 12, 2014 2:13 am

help MPI support on dual CPU server

Post by timber » Fri Dec 12, 2014 2:29 am

hi, I have a Lenovo D20 server with dual CPUs and 12 cores.
The OS is CentOS 7, with the EPEL repo enabled.
I installed all the requirements from the vendor and EPEL repos, and I can run through the tests in serial mode.

But when I compile with MPI support, it always fails. The terminal message is:




==============================================================================
=== Multicore architecture support ===
==============================================================================

checking whether to enable OpenMP support... yes
checking Fortran flags for OpenMP... -fopenmp
checking whether OpenMP's COLLAPSE works... yes
configure: OpenMP support is enabled in Fortran source code only
checking whether to build MPI code... yes
checking whether the C compiler supports MPI... no
checking whether the C++ compiler supports MPI... no
checking whether the Fortran Compiler supports MPI... yes
checking whether MPI is usable... no
configure: error: MPI support is broken - please fix your config parameters and/or MPI installation
[root@c64 abinit-7.10.1]# which mpicc
/usr/lib64/openmpi/bin/mpicc
[root@c64 abinit-7.10.1]# mpicc --version
gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.






I am confused. Are there any tricks needed for a dual-CPU configuration?
Attachments
config.log
(90.87 KiB) Downloaded 273 times

jbeuken
Posts: 365
Joined: Tue Aug 18, 2009 9:24 pm
Contact:

Re: help MPI support on dual CPU server

Post by jbeuken » Fri Dec 12, 2014 8:09 pm

Hi,

two questions and one idea…

1) What is the content of the file /root/.abinit/build/c64.ac?

2) What do these commands print?

Code: Select all

mpicc -show
mpicc --version
mpif90 -show
mpif90 --version


3) one idea: try putting the MPI "compilers" at the beginning of the PATH variable:

Code: Select all

export PATH=/usr/lib64/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib64/openmpi/lib:$LD_LIBRARY_PATH
./configure --enable-mpi --enable-mpi-io
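
A quick way to confirm the prepend actually took effect before rerunning configure (a sketch, assuming the OpenMPI paths from this thread):

```shell
# Sketch (paths assumed from this thread): prepend OpenMPI's wrapper
# directory and verify it now wins over any serial compilers in PATH.
export PATH=/usr/lib64/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib64/openmpi/lib:$LD_LIBRARY_PATH

# The first PATH entry should now be the OpenMPI bin directory.
echo "${PATH%%:*}"
```

If `which mpicc` still points somewhere else after this, configure will keep picking up the wrong compiler.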


jmb
------
Jean-Michel Beuken
Computer Scientist

timber
Posts: 6
Joined: Fri Dec 12, 2014 2:13 am

Re: help MPI support on dual CPU server

Post by timber » Sat Dec 13, 2014 4:19 am

checking whether to use C clock for timings... no

==============================================================================
=== Multicore architecture support ===
==============================================================================

checking whether to enable OpenMP support... yes
checking Fortran flags for OpenMP... -fopenmp
checking whether OpenMP's COLLAPSE works... yes
configure: OpenMP support is enabled in Fortran source code only
checking whether to build MPI code... yes
checking whether the C compiler supports MPI... no
checking whether the C++ compiler supports MPI... no
checking whether the Fortran Compiler supports MPI... yes
checking whether MPI is usable... no
configure: error: MPI support is broken - please fix your config parameters and/or MPI installation

[root@c64 abinit-7.10.1]# mpicc --show
gcc -I/usr/include/openmpi-x86_64 -pthread -m64 -L/usr/lib64/openmpi/lib -lmpi
[root@c64 abinit-7.10.1]# mpicc --version
gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

[root@c64 abinit-7.10.1]# mpif90 --show
gfortran -I/usr/include/openmpi-x86_64 -pthread -m64 -I/usr/lib64/openmpi/lib -L/usr/lib64/openmpi/lib -lmpi_f90 -lmpi_f77 -lmpi
[root@c64 abinit-7.10.1]# mpif90 --version
GNU Fortran (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16)
Copyright (C) 2013 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING

[root@c64 abinit-7.10.1]# export |grep -i path
declare -x LD_LIBRARY_PATH="/usr/lib64/openmpi/lib:"
declare -x MODULEPATH="/usr/share/Modules/modulefiles:/etc/modulefiles"
declare -x PATH="/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/usr/lib64/openmpi/bin"
declare -x WINDOWPATH="1"
Attachments
config.log
(90.87 KiB) Downloaded 273 times

timber
Posts: 6
Joined: Fri Dec 12, 2014 2:13 am

Re: help MPI support on dual CPU server

Post by timber » Sat Dec 13, 2014 4:22 am

The configuration file is for CentOS 64-bit, running under VMware Workstation 10.

I have also tried another CentOS 7 install, on real hardware. The message is the same: it fails on mpicc.
The host OS is Windows Server 2012, which I have to share with other colleagues.
Attachments
c64.ac.log
(31.33 KiB) Downloaded 267 times

jbeuken
Posts: 365
Joined: Tue Aug 18, 2009 9:24 pm
Contact:

Re: help MPI support on dual CPU server

Post by jbeuken » Sat Dec 13, 2014 10:53 pm

Hi,

I found a "little" error in the c64.ac file

Code: Select all

CC = "/usr/lib64/openmpi/bin/mpicc

instead of

Code: Select all

CC="/usr/lib64/openmpi/bin/mpicc"
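
The distinction matters because the `.ac` file is read as shell-style assignments: a space around `=` turns the line into a command invocation, and the unbalanced quote is a syntax error. A minimal illustration:

```shell
# Shell assignment rules that apply to .ac option files:
#   CC = "value"   -> runs a command named "CC", not an assignment
#   CC="value      -> unterminated quote, syntax error
# Correct form: no spaces around '=', quotes balanced.
CC="/usr/lib64/openmpi/bin/mpicc"
echo "$CC"
```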


I configured a machine under CentOS 7 with openmpi/fftw3 and with this .ac file (without some of the fallbacks at first):

Code: Select all

enable_64bit_flags="yes"
enable_debug="yes"
prefix="/opt/abinit4.10gcc48openMp"
CC="/usr/lib64/openmpi/bin/mpicc"
FC="mpif90"
enable_mpi="yes"
enable_mpi_io="yes"
with_mpi_incs="-I/usr/include/openmpi-x86_64"
with_mpi_libs="-L/usr/lib64/openmpi/lib -lmpi"
with_fft_libs="-L/usr/lib64/ -lfftw3 -lfftw3f"
with_dft_flavor="libxc"
with_trio_flavor="none"
enable_fallbacks="yes"
enable_openmp="yes"

and with these environment variables (excerpt):

Code: Select all

PATH=/usr/lib64/openmpi/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:~/bin
LD_LIBRARY_PATH=/usr/lib64/openmpi/lib:


Both "./configure" and "make mj4" work!

jmb
------
Jean-Michel Beuken
Computer Scientist

timber
Posts: 6
Joined: Fri Dec 12, 2014 2:13 am

Re: help MPI support on dual CPU server

Post by timber » Sun Dec 14, 2014 9:03 am

I revised the configuration file, but the configure script outputs the same error message.
I think I had already used this setting before.

config.log is attached.


Code: Select all

==============================================================================
 === Multicore architecture support                                         ===
 ==============================================================================

checking whether to enable OpenMP support... yes
checking Fortran flags for OpenMP... -fopenmp
checking whether OpenMP's COLLAPSE works... yes
configure: OpenMP support is enabled in Fortran source code only
checking whether to build MPI code... yes
checking whether the C compiler supports MPI... no
checking whether the C++ compiler supports MPI... no
checking whether the Fortran Compiler supports MPI... yes
checking whether MPI is usable... no
configure: error: MPI support is broken - please fix your config parameters and/or MPI installation
[root@c64 abinit-7.10.1]# grep  mpicc config.log
configure:9788: checking for mpicc
configure:9806: found /usr/lib64/openmpi/bin/mpicc
configure:9818: result: /usr/lib64/openmpi/bin/mpicc
configure:9908: result: mpicc
configure:10148: mpicc --version >&5
configure:10159: mpicc -v >&5
configure:10170: mpicc -V >&5
configure:10219: mpicc    conftest.c  >&5
configure:10332: mpicc -o conftest    conftest.c  >&5
configure:10394: mpicc -c   conftest.c >&5
configure:10456: mpicc -c   conftest.c >&5
configure:10489: checking whether mpicc accepts -g
configure:10519: mpicc -c -g  conftest.c >&5
configure:10644: checking for mpicc option to accept ISO C89
configure:10718: mpicc  -c -g -O2  conftest.c >&5
configure:10830: checking dependency style of mpicc
configure:11056: mpicc -E  conftest.c
configure:11094: mpicc -E  conftest.c
configure:11134: result: mpicc -E
configure:11163: mpicc -E  conftest.c
configure:11201: mpicc -E  conftest.c
configure:11541: mpicc -c   conftest.c >&5
configure:11647: mpicc -o conftest    conftest.c  >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11724: mpicc -c   conftest.c >&5
configure:11788: mpicc -c   conftest.c >&5
configure:11845: mpicc -c   conftest.c >&5
configure:11884: mpicc -c   conftest.c >&5
configure:14907: mpicc -o conftest    conftest.c   -L/usr/lib64/openmpi/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../.. -lmpi_f90 -lmpi_f77 -lmpi -lgfortran -lm -lquadmath -lpthread >&5
configure:15122: mpicc -o conftest    conftest.c cfortran_test.o   -L/usr/lib64/openmpi/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../.. -lmpi_f90 -lmpi_f77 -lmpi -lgfortran -lm -lquadmath -lpthread >&5
configure:15122: mpicc -o conftest    conftest.c cfortran_test.o   -L/usr/lib64/openmpi/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../.. -lmpi_f90 -lmpi_f77 -lmpi -lgfortran -lm -lquadmath -lpthread >&5
configure:15211: mpicc -o conftest    conftest.c cfortran_test.o   -L/usr/lib64/openmpi/lib -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.2/../../.. -lmpi_f90 -lmpi_f77 -lmpi -lgfortran -lm -lquadmath -lpthread >&5
configure:15560: mpicc -c  -I/usr/include/python2.7 -I/usr/lib64/python2.7/site-packages/numpy/core/include  conftest.c >&5
configure:15600: mpicc -E -I/usr/include/python2.7 -I/usr/lib64/python2.7/site-packages/numpy/core/include  conftest.c
configure:15718: mpicc -c  -I/usr/include/python2.7 -I/usr/lib64/python2.7/site-packages/numpy/core/include  conftest.c >&5
configure:21237: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21277: mpicc -E      conftest.c
configure:21237: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21277: mpicc -E      conftest.c
configure:21390: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21430: mpicc -E      conftest.c
configure:21390: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21430: mpicc -E      conftest.c
configure:21390: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21430: mpicc -E      conftest.c
configure:21541: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21581: mpicc -E      conftest.c
configure:21693: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21733: mpicc -E      conftest.c
configure:21693: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21733: mpicc -E      conftest.c
configure:21844: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:21884: mpicc -E      conftest.c
configure:21995: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:22035: mpicc -E      conftest.c
configure:22146: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:22186: mpicc -E      conftest.c
configure:22336: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:22445: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:22847: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:23254: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:23661: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:24068: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:24475: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:24882: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:25289: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:25696: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:26103: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:26510: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:26917: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:27324: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:27731: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native         conftest.c  >&5
configure:27865: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:27933: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:27975: mpicc -m64 -c -m64  -O2 -mtune=native -march=native        conftest.c >&5
configure:30260: mpicc -m64 -o conftest -m64  -O2 -mtune=native -march=native              conftest.c       >&5
ac_cv_path_abi_cc_path=/usr/lib64/openmpi/bin/mpicc
ac_cv_prog_CPP='mpicc -E'
ac_cv_prog_ac_ct_CC=mpicc
CC='mpicc -m64'
CPP='mpicc -E'
abi_cc_path='/usr/lib64/openmpi/bin/mpicc'
ac_ct_CC='mpicc'


Attachments
config.log
(91.11 KiB) Downloaded 281 times

timber
Posts: 6
Joined: Fri Dec 12, 2014 2:13 am

Re: help MPI support on dual CPU server

Post by timber » Mon Dec 15, 2014 2:36 am

OK, I just tried another configuration, with the Intel compiler 2013.

I compiled OpenMPI with these options:
./configure CC=icc CXX=icpc FC=ifort --prefix=/opt/openmpi-intel/
After make, I ran make check, and the results show that all tests passed.

Code: Select all

Contiguous multiple data-type (1*4500)
raw extraction in 0 microsec
>>--------------------------------------------<<
>>--------------------------------------------<<
Vector data-type (450 times 10 double stride 11)
raw extraction in 8 microsec
>>--------------------------------------------<<
>>--------------------------------------------<<
raw extraction in 131 microsec
>>--------------------------------------------<<
>>--------------------------------------------<<
raw extraction in 138 microsec
>>--------------------------------------------<<
>>--------------------------------------------<<
raw extraction in 955 microsec
>>--------------------------------------------<<
>>--------------------------------------------<<
raw extraction in 0 microsec
>>--------------------------------------------<<
PASS: ddt_raw
==================
All 6 tests passed
==================
make[3]: Leaving directory `/root/openmpi-1.8.3/test/datatype'
make[2]: Leaving directory `/root/openmpi-1.8.3/test/datatype'
Making check in util
make[2]: Entering directory `/root/openmpi-1.8.3/test/util'
make  opal_bit_ops opal_path_nfs
make[3]: Entering directory `/root/openmpi-1.8.3/test/util'
make[3]: `opal_bit_ops' is up to date.
make[3]: `opal_path_nfs' is up to date.
make[3]: Leaving directory `/root/openmpi-1.8.3/test/util'
make  check-TESTS
make[3]: Entering directory `/root/openmpi-1.8.3/test/util'
SUPPORT: OMPI Test Passed: opal_bit_ops(): (70 tests)
PASS: opal_bit_ops
SUPPORT: OMPI Test Passed: opal_path_nfs(): (30 tests)
PASS: opal_path_nfs
==================
All 2 tests passed
==================
make[3]: Leaving directory `/root/openmpi-1.8.3/test/util'
make[2]: Leaving directory `/root/openmpi-1.8.3/test/util'
make[2]: Entering directory `/root/openmpi-1.8.3/test'
make[2]: Nothing to be done for `check-am'.
make[2]: Leaving directory `/root/openmpi-1.8.3/test'
make[1]: Leaving directory `/root/openmpi-1.8.3/test'
make[1]: Entering directory `/root/openmpi-1.8.3'
make[1]: Nothing to be done for `check-am'.
make[1]: Leaving directory `/root/openmpi-1.8.3'


Then the compiler information:

Code: Select all

[root@c64 abinit-7.10.1]# mpicc --show 
icc -I/opt/openmpi-intel/include -pthread -Wl,-rpath -Wl,/opt/openmpi-intel/lib -Wl,--enable-new-dtags -L/opt/openmpi-intel/lib -lmpi
[root@c64 abinit-7.10.1]# mpiCC --show
icpc -I/opt/openmpi-intel/include -pthread -Wl,-rpath -Wl,/opt/openmpi-intel/lib -Wl,--enable-new-dtags -L/opt/openmpi-intel/lib -lmpi_cxx -lmpi
[root@c64 abinit-7.10.1]# mpiCC --version
icpc (ICC) 14.0.2 20140120
Copyright (C) 1985-2014 Intel Corporation.  All rights reserved.

[root@c64 abinit-7.10.1]# mpicc --version
icc (ICC) 14.0.2 20140120
Copyright (C) 1985-2014 Intel Corporation.  All rights reserved.




The error message is the same: Fortran MPI support is detected, but the C and C++ checks fail.

Code: Select all

checking whether the Fortran compiler accepts the PROTECTED attribute... yes
checking whether the Fortran compiler supports stream IO... yes
checking whether the Fortran compiler accepts cpu_time()... yes
checking whether the Fortran compiler accepts etime()... no
checking whether to use C clock for timings... no

 ==============================================================================
 === Multicore architecture support                                         ===
 ==============================================================================

checking whether to enable OpenMP support... yes
checking Fortran flags for OpenMP... -openmp
checking whether OpenMP's COLLAPSE works... yes
configure: OpenMP support is enabled in Fortran source code only
checking whether to build MPI code... yes
checking whether the C compiler supports MPI... no
checking whether the C++ compiler supports MPI... no
checking whether the Fortran Compiler supports MPI... yes
checking whether MPI is usable... no
configure: error: MPI support is broken - please fix your config parameters and/or MPI installation


config.log is attached
Attachments
config-icc.log
(121.52 KiB) Downloaded 263 times

timber
Posts: 6
Joined: Fri Dec 12, 2014 2:13 am

Re: help MPI support on dual CPU server

Post by timber » Mon Dec 15, 2014 2:39 am

The attachment is the config.log of OpenMPI built with the Intel compiler.

So the OpenMPI installation works.

But why does the MPI detection in Abinit fail?


Code: Select all

[root@c64 openmpi-1.8.3]# ldconfig  --print  |grep -i mpi  
   libvt.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libvt.so.0
   libvt.so (libc6,x86-64) => /opt/openmpi-intel/lib/libvt.so
   libvt-mt.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-mt.so.0
   libvt-mt.so (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-mt.so
   libvt-mpi.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-mpi.so.0
   libvt-mpi.so (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-mpi.so
   libvt-mpi-unify.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-mpi-unify.so.0
   libvt-mpi-unify.so (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-mpi-unify.so
   libvt-hyb.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-hyb.so.0
   libvt-hyb.so (libc6,x86-64) => /opt/openmpi-intel/lib/libvt-hyb.so
   libotfaux.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libotfaux.so.0
   libotfaux.so (libc6,x86-64) => /opt/openmpi-intel/lib/libotfaux.so
   liboshmem.so.3 (libc6,x86-64) => /opt/openmpi-intel/lib/liboshmem.so.3
   liboshmem.so (libc6,x86-64) => /opt/openmpi-intel/lib/liboshmem.so
   libopen-trace-format.so.1 (libc6,x86-64) => /opt/openmpi-intel/lib/libopen-trace-format.so.1
   libopen-trace-format.so (libc6,x86-64) => /opt/openmpi-intel/lib/libopen-trace-format.so
   libopen-rte.so.7 (libc6,x86-64) => /opt/openmpi-intel/lib/libopen-rte.so.7
   libopen-rte.so (libc6,x86-64) => /opt/openmpi-intel/lib/libopen-rte.so
   libopen-pal.so.6 (libc6,x86-64) => /opt/openmpi-intel/lib/libopen-pal.so.6
   libopen-pal.so (libc6,x86-64) => /opt/openmpi-intel/lib/libopen-pal.so
   libompitrace.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libompitrace.so.0
   libompitrace.so (libc6,x86-64) => /opt/openmpi-intel/lib/libompitrace.so
   libmpi_usempif08.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_usempif08.so.0
   libmpi_usempif08.so (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_usempif08.so
   libmpi_usempi_ignore_tkr.so.0 (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_usempi_ignore_tkr.so.0
   libmpi_usempi_ignore_tkr.so (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_usempi_ignore_tkr.so
   libmpi_mpifh.so.2 (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_mpifh.so.2
   libmpi_mpifh.so (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_mpifh.so
   libmpi_cxx.so.1 (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_cxx.so.1
   libmpi_cxx.so (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi_cxx.so
   libmpi.so.1 (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi.so.1
   libmpi.so (libc6,x86-64) => /opt/openmpi-intel/lib/libmpi.so
   libmca_common_sm.so.4 (libc6,x86-64) => /opt/openmpi-intel/lib/libmca_common_sm.so.4
   libmca_common_sm.so (libc6,x86-64) => /opt/openmpi-intel/lib/libmca_common_sm.so
   libexempi.so.3 (libc6,x86-64) => /lib64/libexempi.so.3
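
Since the libraries are clearly visible, one way to narrow this down is to rerun by hand the same kind of probe configure uses (a sketch, guarded so it just reports if mpicc is not in PATH). The compiler's direct error output is usually more informative than config.log:

```shell
# Hypothetical manual version of configure's "C compiler supports MPI" probe.
cat > conftest.c <<'EOF'
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}
EOF
# If the link step fails, the raw compiler error pinpoints the problem.
if command -v mpicc >/dev/null 2>&1; then
    mpicc conftest.c -o conftest && echo "C MPI probe OK"
else
    echo "mpicc not found in PATH"
fi
```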
Attachments
openmpi-config.log
(54.97 KiB) Downloaded 274 times

jbeuken
Posts: 365
Joined: Tue Aug 18, 2009 9:24 pm
Contact:

Re: help MPI support on dual CPU server

Post by jbeuken » Mon Dec 15, 2014 8:16 pm

Very difficult to understand :?
Can you send me the output of: env
jmb
------
Jean-Michel Beuken
Computer Scientist
