sharing .ac config and install files on supercomputers

HPC, IBM, Mac OS, Windows, ...

Moderators: jbeuken, gmatteo, Jordan

Forum rules
Please have a look at ~abinit/doc/config/build-config.ac in the source package for detailed and up-to-date information about the configuration of Abinit builds.
For a video explanation on how to build Abinit for Linux, please go to: http://www.youtube.com/watch?v=DppLQ-KQA68.
IMPORTANT: when an answer solves your problem, please check the little green button on its upper-right corner to accept it.

sharing .ac config and install files on supercomputers

Postby snow » Fri Sep 30, 2016 11:33 pm

Does anyone have compile experience on any XSEDE supercomputers, and would be willing to share config files etc? I'm about to embark on installation on various computers, e.g. Stampede, Maverick, Jetstream, Comet, Gordon, OSG, Bridges, etc. But every computer will have its own challenges, so it seems like it would be sweet if the abinit community shared config/install files and notes on each of these, as I assume many users of those computers also use abinit. When I get working executables, I'd be willing to share too if there is interest.

For now the first computer I'm looking at is Stampede, if anyone has suggestions as to mpi vs openmp for starters and anything else that may be useful. Here's a little info:
https://portal.tacc.utexas.edu/user-guides/stampede

On Stampede nodes, MPI applications can be launched solely on the E5 processors, or solely on the Phi coprocessors, or on both in a "symmetric" heterogeneous computing mode. For heterogeneous computing, an application is compiled for each architecture and the MPI launcher ("ibrun" at TACC) is modified to launch the executables on the appropriate processors according to the resource specification for each platform (number of tasks on the E5 component and the Phi component of a node).

So, to use both the E5's and the Phi coprocessors, I will need to compile abinit on each separately, and somehow get ibrun to make them both work properly...

So the Phi can run either MPI or OpenMP, but perhaps I should offload a shared-memory part of the application to the coprocessor? After all, it has 61 cores and 8 GB. I guess you could still do MPI, but maybe OpenMP would be better.

abinit seems to expect MPI... OpenMP is still possible, right?

Any thoughts, suggestions, experience? Much thanks!

-Ryan
snow
 
Posts: 7
Joined: Fri Aug 07, 2015 6:53 pm

stuck at libxc

Postby snow » Wed Oct 12, 2016 1:56 am

I'm stuck at the libxc compile, but here's what I have for now...

modules I have loaded on Stampede: 1) intel/15.0.2 2) mvapich2/2.1 3) xalt/0.6 4) TACC 5) fftw3/3.3.4 6) gsl/1.16 (m)

I would like to install abinit with atompaw/libxc, but I'm stuck at compiling libxc properly. Actually I don't notice the failure until I get to the abinit and atompaw compiles, but first things first...

Install libxc:
1. download
2. ./configure CC=mpicc CFLAGS=-xhost CXX=mpicxx CXXFLAGS=-xhost FC=mpif90 FCFLAGS=-xhost > config.out 2> config.err &
3. make > make.out 2> make.err &
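The steps above build libxc in place but never install it, which matters later when atompaw and abinit try to link against it. A fuller sequence might look like the sketch below; the install prefix `$WORK/libxc-install` is an assumed placeholder, not a path from the original post:

```shell
# Hypothetical full libxc build; the --prefix path is a placeholder.
tar xzf libxc-*.tar.gz && cd libxc-*/
./configure --prefix=$WORK/libxc-install \
    CC=mpicc CFLAGS=-xhost FC=mpif90 FCFLAGS=-xhost
make -j4
make check        # run libxc's own test suite before trusting the build
make install      # puts headers in include/ and libraries in lib/
```

Pointing the abinit/atompaw configure scripts at the installed `include/` and `lib/` directories avoids hunting through the build tree's `.libs` folders.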

Everything seems to be fine. However both atompaw and abinit test libxc (and fail) with a line like the following:

mpif90 -xhost -o conftest -g -I/home1/03283/snowman/work/build/libxc/libxcmpif90/src -I/home1/03283/snowman/work/build/libxc/libxcmpif90 -O2 conftest.F90 -L/home1/03283/snowman/work/build/libxc/libxcmpif90/src/.libs -I/opt/apps/intel/15/composer_xe_2015.2.164/mkl/include -mkl=sequential -lxc

where conftest.F90 is this:
program main

use xc_f90_types_m
use xc_f90_lib_m
implicit none
TYPE(xc_f90_pointer_t) :: xc_func
TYPE(xc_f90_pointer_t) :: xc_info
integer :: func_id = 1
call xc_f90_func_init(xc_func, xc_info, func_id, XC_UNPOLARIZED)
call xc_f90_func_end(xc_func)

end


which, when I try to compile it, gives the error:
/tmp/ifortwruWXi.o: In function `main':
/work/03283/snowman/build/libxc/conftest.F90:9: undefined reference to `xc_f90_func_init_'
/work/03283/snowman/build/libxc/conftest.F90:10: undefined reference to `xc_f90_func_end_'
/usr/bin/ld: link errors found, deleting executable `conftest'


I've also tried compiling with gfortran and ifort and got identical results. I've also tried linking with -lxcf90 and with -lxcf03, and -lxcf90 gives a bunch more undefined reference errors.
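One way to narrow this down is to inspect the built archives with `nm` and see which library actually exports the Fortran symbols. In libxc versions from about 2.2 onward, the Fortran 90 bindings live in a separate library (libxcf90), so `-lxc` alone cannot resolve `xc_f90_*`. The build-tree path below is taken from the link line in the post:

```shell
# See which archive contains the Fortran interface symbols.
cd /home1/03283/snowman/work/build/libxc/libxcmpif90/src/.libs
nm -g libxc.a    | grep -i xc_f90_func_init   # likely absent here
nm -g libxcf90.a | grep -i xc_f90_func_init   # F90 bindings usually live here
```

If the symbol only shows up in libxcf90.a, the fix is to link both libraries (`-lxcf90 -lxc`), not either one alone.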

Any thoughts? I know I should maybe bring it to the libxc peeps, but thought I'd ask here since it's a failure that comes up in the abinit compile... Thanks much!

-Ryan
snow
 
Posts: 7
Joined: Fri Aug 07, 2015 6:53 pm

Re: sharing .ac config and install files on supercomputers

Postby Jordan » Thu Oct 13, 2016 8:36 am

Hi,

You should indeed link with the -lxcf90 and -lxc flags together, and use -L/path/to/libxc/libraries.
During the abinit configure step this is set with:
Code:
--with-libxc-incs="-I/path/to/libxc/include"
--with-libxc-libs="-L/path/to/libxc/libraries -lxcf90 -lxc"
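To verify the flags before re-running abinit's configure, the conftest.F90 program from the earlier post can be linked by hand with the same options (the libxc paths below are placeholders, as in Jordan's answer):

```shell
# Manual link check with the suggested flags (paths are placeholders):
mpif90 -o conftest conftest.F90 \
    -I/path/to/libxc/include \
    -L/path/to/libxc/lib -lxcf90 -lxc
./conftest && echo "libxc links OK"
```

Note the ordering: -lxcf90 must come before -lxc, because the Fortran bindings in libxcf90 call into the C core in libxc and most linkers resolve symbols left to right.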


Cheers
Jordan
 
Posts: 281
Joined: Tue May 07, 2013 9:47 am

Re: sharing .ac config and install files on supercomputers

Postby snow » Thu Oct 13, 2016 7:19 pm

Hi Jordan, thanks for the reply. I'll see if I can get that. For now I've realized that you don't actually need atompaw+libxc to use PAWs.

So I've got a working compile now, MPI version.

abinit compile notes
abinit's own install notes: http://www.abinit.org/doc/helpfiles/for ... stall.html
see also ~abinit/doc/INSTALL

Load Stampede modules: 1) intel/15.0.2 2) mvapich2/2.1 3) xalt/0.6 4) TACC 5) fftw3/3.3.4

** I tried gsl but it failed on me, so I'm leaving it out...

Open a file, call it e.g. stampede.ac: (for a more complete template, use ~abinit/doc/config/build-config.ac)
CC="mpicc"
CFLAGS_EXTRA="-xhost"
CXX="mpicxx"
CXXFLAGS_EXTRA="-xhost"
FC="mpif90"
F77="mpif77"
FCFLAGS_EXTRA="-xhost"
enable_stdin="no"
enable_mpi="yes"
enable_mpi_io="yes"
with_mpi_incs="-I/opt/apps/intel15/mvapich2/2.1/include"
with_mpi_libs="-L/opt/apps/intel15/mvapich2/2.1/lib -lmpi"
with_fft_flavor="fftw3"
with_fft_incs="-I/opt/apps/intel15/mvapich2_2_1/fftw3/3.3.4/include"
with_fft_libs="-L/opt/apps/intel15/mvapich2_2_1/fftw3/3.3.4/lib -lfftw3"
with_linalg_flavor="mkl"


Make a new 'build' directory in the abinit home dir, move the stampede.ac file there, go there and do:
../configure --with-config-file=stampede.ac
(save the output by adding: >config.out 2>config.err &)
Check the config.log for details. If the config works, go ahead and make:
make multi multi_nprocs=16
(save output: >make.out 2>make.err &)

And that should get you a good MPI abinit executable for use on Stampede. The executable ends up in ~abinit/src/98_main/abinit.
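The steps above can be sketched as a single script. The module names are copied from the post; the source directory name is an assumption, and stampede.ac is the config file written out earlier in this post:

```shell
# Out-of-tree Abinit build on Stampede, following the steps above.
module load intel/15.0.2 mvapich2/2.1 fftw3/3.3.4

cd ~/abinit                      # assumed unpacked source directory
mkdir -p build && cd build       # stampede.ac (from above) goes in here
../configure --with-config-file=stampede.ac > config.out 2> config.err
make multi multi_nprocs=16 > make.out 2> make.err
ls -l src/98_main/abinit         # the resulting executable
```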

Also, on Stampede use ibrun instead of mpirun.
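For reference, a minimal Slurm batch script using ibrun might look like the sketch below. The queue name, node/task counts, and file names are placeholders, not values from the post, and the exact abinit invocation depends on how stdin was configured at build time:

```shell
#!/bin/bash
#SBATCH -J abinit-run            # job name (placeholder)
#SBATCH -p normal                # queue name (placeholder)
#SBATCH -N 2                     # number of nodes
#SBATCH -n 32                    # total MPI tasks
#SBATCH -t 01:00:00              # wall-clock limit

module load intel/15.0.2 mvapich2/2.1 fftw3/3.3.4
# ibrun replaces mpirun/mpiexec on TACC systems
ibrun ~/abinit/src/98_main/abinit < run.files > run.log
```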

If anyone gets the MIC up and running, please post!
snow
 
Posts: 7
Joined: Fri Aug 07, 2015 6:53 pm

Re: sharing .ac config and install files on supercomputers

Postby gmatteo » Fri Oct 28, 2016 10:10 pm

Hi Ryan,

I'm about to embark on installation on various computers, e.g. Stampede, Maverick, Jetstream, Comet, Gordon, OSG, Bridges, etc. But every computer will have its own challenges, so it seems like it would be sweet if the abinit community shared config/install files and notes on each of these, as I assume many users of those computers also use abinit. When I get working executables, I'd be willing to share too if there is interest.


This is an excellent idea!
I've created a repo on github to share configuration files for Abinit (https://github.com/abinit/abiconfig).
Each ac file contains an initial section with metadata e.g. the modules that must be loaded before running configure.
There's also a python script that can be used to find the ac files associated to a particular machine or select the configuration files
containing a particular set of keywords.
You may want to contribute your config files to abiconfig.

with_fft_flavor="fftw3"
with_fft_incs="-I/opt/apps/intel15/mvapich2_2_1/fftw3/3.3.4/include"
with_fft_libs="-L/opt/apps/intel15/mvapich2_2_1/fftw3/3.3.4/lib -lfftw3"


Note that abinit requires the double-precision FFTW3 library (libfftw3) as well as the single-precision version (libfftw3f).
So I would use:

Code:
with_fft_libs="-L${FFTW_LIB} -lfftw3 -lfftw3f"


I usually prefer the FFTW3 wrappers provided by MKL, especially when I'm already using MKL for BLAS/LAPACK.
These are the options I use to compile with Intel ifort and link dynamically with MKL:

Code:
# BLAS/LAPACK provided by MKL (dynamic linking)
# See https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
with_linalg_flavor="mkl"
with_linalg_incs='-I$(MKLROOT)/include'
with_linalg_libs="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"

# FFTW3 interface provided by MKL (dynamic linking)
with_fft_flavor="fftw3"
with_fft_incs='-I$(MKLROOT)/include'
with_fft_libs="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"
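After building with these options, a quick way to confirm that the binary really picked up the MKL shared libraries is to inspect it with `ldd` (run from the build directory; the path assumes the layout described earlier in the thread):

```shell
# Sanity check that abinit was linked dynamically against MKL:
ldd src/98_main/abinit | grep -i mkl
# Expect entries such as libmkl_intel_lp64.so, libmkl_core.so,
# and libmkl_sequential.so, resolved under ${MKLROOT}/lib/intel64.
```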


Matteo
gmatteo
 
Posts: 251
Joined: Sun Aug 16, 2009 5:40 pm

