Abinit 863 parallel: test_v1 error  [SOLVED]

option, parallelism,...

Moderators: fgoudreault, mcote

Forum rules
Please have a look at ~abinit/doc/config/build-config.ac in the source package for detailed and up-to-date information about the configuration of Abinit 8 builds.
For a video explanation on how to build Abinit 7.x for Linux, please go to: http://www.youtube.com/watch?v=DppLQ-KQA68.
IMPORTANT: when an answer solves your problem, please check the little green V-like button on its upper-right corner to accept it.
hgibhar
Posts: 4
Joined: Mon Apr 16, 2018 4:10 pm

Abinit 863 parallel: test_v1 error

Post by hgibhar » Mon Apr 16, 2018 4:27 pm

Dear all,

I am trying to install version 8.6.3 on our university cluster. Compiling abinit as a sequential program works correctly. However, if I compile with

Code: Select all

#destination of executables
prefix="/usr/users/hgibhar/abinit-8.6.3.parallel/myabinitparallel"

enable_mpi = 'yes'
enable_openmp = 'yes'
#enable_openmp = 'no'
enable_mpi_io = 'yes'
#enable_mpi_io = 'no'
FC=mpiifort
F77=mpiifort
F90=mpiifort
CC=mpiicc
CXX=mpiicpc

CFLAGS_EXTRA="-mt_mpi" CC_LDFLAGS_EXTRA="-mt_mpi" FCFLAGS_EXTRA="-mt_mpi" FC_LDFLAGS_EXTRA="-mt_mpi"

with_fc_vendor='intel'
#with_mpi_prefix='/usr'
with_trio_flavor='netcdf'
#with_netcdf_incs='-I/usr/include'
#with_netcdf_libs='-L/usr/lib -lnetcdf -lnetcdff'
#with_fft_flavor='fftw3'
#with_fft_incs='-I/usr/include/'
#with_fft_libs='-L/usr/lib/x86-64-linux-gnu/ -lfftw3 -lfftw3f'
with_linalg_flavor='mkl'
#with_linalg_libs='-L/usr/lib64 -llapack -lf77blas -lcblas -latlas'
#with_dft_flavor='atompaw+libxc'
with_dft_flavor='atompaw+bigdft+libxc+wannier90'
enable_gw_dpc='yes'
enable_maintainer_checks='no'


then the compilation finishes with no error messages (all Intel compilers are automatically set to version 17.0). If I run the standard tests with make check, all 11 tests are reported as failed. Even make test_v1 will not run. If I copy all the files required for test_v1 into a user directory and run abinit on them there, I can capture the stdout and error messages.
Here is my cluster script:

Code: Select all

#!/bin/sh
##parallel job
#BSUB -q mpi-short
#BSUB -n 8
#BSUB -x
#BSUB -R same[model]
#BSUB -R span[ptile='!']
#BSUB -W 02:00
#BSUB -a intelmpi
#BSUB -J test_v1_parallel_bsub
#BSUB -o test_v1_parallel_bsub.%J.out
#BSUB -e test_v1_parallel_bsub.%J.err

module load intel/compiler intel/mkl intel/mpi
cp $HOME/abinit-8.6.3.parallel/myabinitparallel/src/98_main/abinit abinit

mpirun.lsf ./abinit < testin_v1.files > testin_v1_bsub.${LSB_JOBID}.log

rm -f abinit *DDB *EIG *nc *WFK


Here is the error part of the output:

Code: Select all

Please give name of formatted atomic psp file
 iofn2 : for atom type 1, psp file is /home/uni08/hgibhar/abinit-8.6.3.parallel/tests/Psps_for_tests/70yb.pspnc
  read the values zionpsp= 16.0 , pspcod=   1 , lmax=   3
 
 inpspheads: deduce mpsang = 4, n1xccc = 2501.
 invars1 : treat image number: 1
 
 symlatt : the Bravais lattice is cF (face-centered cubic)
  xred   is defined in input file
 ingeo : takes atomic coordinates from input array xred
 
 symlatt : the Bravais lattice is cF (face-centered cubic)
 
 symlatt : the Bravais lattice is cF (face-centered cubic)
 
 symspgr : problem with isym=  1
  symrelconv(:,1,isym)=   1   0   0
  symrelconv(:,2,isym)=   0   1   0
  symrelconv(:,3,isym)=   0   0   1
  tnonsconv(:,isym)=   -1.000000000000E-06   -1.000000000000E-06    2.147483648000E+09
  trialt(:)=    0.000000000000E+00    0.000000000000E+00    0.000000000000E+00
 
--- !BUG
src_file: symspgr.F90
src_line: 215
mpi_rank: 0
message: |
    The space symmetry operation number   1
    is not a (translated) root of unity
...
 
 
 leave_new: decision taken to exit ...


I do not know how to proceed; I have tried different compilations and different options in my .ac file. What can I do? (By the way, the actual calculations that are my aim are currently impossible, which is why I started installing a new version.) Does anybody know a procedure to overcome this problem?

Best regards
Holger

hgibhar
Posts: 4
Joined: Mon Apr 16, 2018 4:10 pm

Re: Abinit 863 parallel: test_v1 error

Post by hgibhar » Wed Apr 18, 2018 9:13 am

Dear all,
a further comment on my problem:

if I start the test_v1 calculation from a user directory with the parallel-compiled executable (Intel 2017), but run it sequentially:

Code: Select all

#!/bin/sh

##sequential job

module load intel/compiler intel/mkl intel/mpi
cp $HOME/abinit-8.6.3.parallel/myabinitparallel/src/98_main/abinit abinit

./abinit < testin_v1sequ.files > testin_v1_sequ.log

rm -f abinit *DDB *EIG *nc *WFK


everything seems to work fine. The output is as required and matches the result of the automatic test. Hence, I suppose the problem must lie somewhere in the MPI system.
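
(For anyone hitting the same symptom, a minimal sanity check of the MPI stack itself, independent of ABINIT, would look roughly like the following; this is a hedged suggestion, not something tried in this thread, and the module names are taken from the job script above.)

Code: Select all

#!/bin/sh
# Minimal MPI sanity check, separate from ABINIT.
module load intel/compiler intel/mkl intel/mpi

# Which MPI libraries is the parallel abinit executable actually linked against?
ldd ./abinit | grep -i mpi

# Does a trivial command run across the requested ranks under the LSF wrapper?
mpirun.lsf hostname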

I would appreciate some help very much!

Best regards
Holger

ebousquet
Posts: 469
Joined: Tue Apr 19, 2011 11:13 am
Location: University of Liege, Belgium

Re: Abinit 863 parallel: test_v1 error

Post by ebousquet » Wed Apr 18, 2018 9:39 pm

Dear Holger,
You might find some information regarding Intel 17 compilation flags for the parallel case in the following post:
viewtopic.php?f=3&t=3801
Check whether you have tested all the options proposed there.
I hope this solves your problem,
Eric

hgibhar
Posts: 4
Joined: Mon Apr 16, 2018 4:10 pm

Re: Abinit 863 parallel: test_v1 error

Post by hgibhar » Thu Apr 19, 2018 3:43 pm

Dear Eric,
thank you very much for your hint.
I have now tried the flags from the other thread. However, with the options

Code: Select all

-axCORE-AVX2 -xavx


I got errors during the configure procedure (cross-compilation checks, etc.). Hence, I removed those flags and configured with the following options:

Code: Select all

#with suggestions of ebousquet (ABINITforum)
#destination of executables
prefix="/usr/users/hgibhar/abinit-8.6.3.parallel/myabinitparallel"

enable_mpi = 'yes'
#enable_openmp = 'yes'
#enable_openmp = 'no'
enable_mpi_io = 'yes'
#enable_mpi_io = 'no'
FC=mpiifort
F77=mpiifort
F90=mpiifort
CC=mpiicc
CXX=mpiicpc

#ebousquet:
#-axCORE-AVX2 -xavx: error multiplatform!

FCFLAGS="-O2 -mkl -fp-model precise"
FFLAGS="-O2 -mkl -fp-model precise"
CFLAGS="-O2 -mkl -fp-model precise"
CXXFLAGS="-O2 -mkl -fp-model precise"

enable_mpi_inplace='yes'
enable_zdot_bugfix='yes'
enable_avx_safe_mode='yes'
enable_fallbacks='yes'
#################

with_fc_vendor='intel'
#with_mpi_prefix='/usr'
with_trio_flavor='netcdf'
#with_netcdf_incs='-I/usr/include'
#with_netcdf_libs='-L/usr/lib -lnetcdf -lnetcdff'
#with_fft_flavor='fftw3'
#with_fft_incs='-I/usr/include/'
#with_fft_libs='-L/usr/lib/x86-64-linux-gnu/ -lfftw3 -lfftw3f'
with_linalg_flavor='mkl'
#with_linalg_libs='-L/usr/lib64 -llapack -lf77blas -lcblas -latlas'
#with_dft_flavor='atompaw+libxc'
with_dft_flavor='atompaw+bigdft+libxc+wannier90'
enable_gw_dpc='yes'
enable_maintainer_checks='no'


Configuration and compilation worked. After this, I started the same test_v1 run. The error from my first post has now vanished, and the structural analysis seems to work (see the attached logfile).
Attachment: test_v1_parallel.9102322.out (logfile for test_v1)

The further calculation seems to fail, however, as can also be seen at the end of the logfile. I get the following error message from the MPI system:

Code: Select all

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
libirc.so          00002AAAB23E58B1  tbk_trace_stack_i     Unknown  Unknown
libirc.so          00002AAAB23E39EB  tbk_string_stack_     Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D5F2  Unknown               Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D446  tbk_stack_trace       Unknown  Unknown
libifcoremt.so.5   00002AAAB09860D0  for__issue_diagno     Unknown  Unknown
libifcoremt.so.5   00002AAAB0997EB8  for__signal_handl     Unknown  Unknown
libpthread-2.17.s  00002AAAB050A5E0  Unknown               Unknown  Unknown
abinit             000000000172C09D  Unknown               Unknown  Unknown
abinit             00000000010A77EA  Unknown               Unknown  Unknown
abinit             00000000005E8234  Unknown               Unknown  Unknown
abinit             00000000005CC961  Unknown               Unknown  Unknown
abinit             000000000056FC9D  Unknown               Unknown  Unknown
abinit             000000000043CEA2  Unknown               Unknown  Unknown
abinit             0000000000415466  Unknown               Unknown  Unknown
abinit             000000000040A93F  Unknown               Unknown  Unknown
abinit             0000000000408C1E  Unknown               Unknown  Unknown
libc-2.17.so       00002AAAB265AC05  __libc_start_main     Unknown  Unknown
abinit             0000000000408B29  Unknown               Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
libirc.so          00002AAAB23E58B1  tbk_trace_stack_i     Unknown  Unknown
libirc.so          00002AAAB23E39EB  tbk_string_stack_     Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D5F2  Unknown               Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D446  tbk_stack_trace       Unknown  Unknown
libifcoremt.so.5   00002AAAB09860D0  for__issue_diagno     Unknown  Unknown
libifcoremt.so.5   00002AAAB0997EB8  for__signal_handl     Unknown  Unknown
libpthread-2.17.s  00002AAAB050A5E0  Unknown               Unknown  Unknown
abinit             000000000172C09D  Unknown               Unknown  Unknown
abinit             00000000010A77EA  Unknown               Unknown  Unknown
abinit             00000000005E8234  Unknown               Unknown  Unknown
abinit             00000000005CC961  Unknown               Unknown  Unknown
abinit             000000000056FC9D  Unknown               Unknown  Unknown
abinit             000000000043CEA2  Unknown               Unknown  Unknown
abinit             0000000000415466  Unknown               Unknown  Unknown
abinit             000000000040A93F  Unknown               Unknown  Unknown
abinit             0000000000408C1E  Unknown               Unknown  Unknown
libc-2.17.so       00002AAAB265AC05  __libc_start_main     Unknown  Unknown
abinit             0000000000408B29  Unknown               Unknown  Unknown
forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source             
libirc.so          00002AAAB23E58B1  tbk_trace_stack_i     Unknown  Unknown
libirc.so          00002AAAB23E39EB  tbk_string_stack_     Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D5F2  Unknown               Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D446  tbk_stack_trace       Unknown  Unknown
libifcoremt.so.5   00002AAAB09860D0  for__issue_diagno     Unknown  Unknown
libifcoremt.so.5   00002AAAB0998158  for__signal_handl     Unknown  Unknown
libpthread-2.17.s  00002AAAB050A5E0  Unknown               Unknown  Unknown
libmpi.so.12       00002AAAAF733A60  PMPIDI_CH3I_Progr     Unknown  Unknown
libmpi.so.12.0     00002AAAAFB67672  Unknown               Unknown  Unknown
libmpi.so.12.0     00002AAAAF8CAF50  Unknown               Unknown  Unknown
libmpi.so.12.0     00002AAAAF6E54D5  Unknown               Unknown  Unknown
libmpi.so.12       00002AAAAF6E79EF  MPI_Alltoall          Unknown  Unknown
libmpifort.so.12.  00002AAAAF29542E  mpi_alltoall__        Unknown  Unknown
abinit             00000000021C7C19  Unknown               Unknown  Unknown
abinit             000000000186F903  Unknown               Unknown  Unknown
abinit             000000000186B23A  Unknown               Unknown  Unknown
abinit             0000000001803345  Unknown               Unknown  Unknown
abinit             00000000017DA377  Unknown               Unknown  Unknown
abinit             000000000170FE15  Unknown               Unknown  Unknown
abinit             0000000001733FE2  Unknown               Unknown  Unknown
abinit             00000000010A3264  Unknown               Unknown  Unknown
abinit             00000000005E8234  Unknown               Unknown  Unknown
abinit             00000000005CC961  Unknown               Unknown  Unknown
abinit             000000000056FC9D  Unknown               Unknown  Unknown
abinit             000000000043CEA2  Unknown               Unknown  Unknown
abinit             0000000000415466  Unknown               Unknown  Unknown
abinit             000000000040A93F  Unknown               Unknown  Unknown
abinit             0000000000408C1E  Unknown               Unknown  Unknown
libc-2.17.so       00002AAAB265AC05  __libc_start_main     Unknown  Unknown
abinit             0000000000408B29  Unknown               Unknown  Unknown
forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source             
libirc.so          00002AAAB23E58B1  tbk_trace_stack_i     Unknown  Unknown
libirc.so          00002AAAB23E39EB  tbk_string_stack_     Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D5F2  Unknown               Unknown  Unknown
libifcoremt.so.5   00002AAAB0A2D446  tbk_stack_trace       Unknown  Unknown
libifcoremt.so.5   00002AAAB09860D0  for__issue_diagno     Unknown  Unknown
libifcoremt.so.5   00002AAAB0998158  for__signal_handl     Unknown  Unknown
libpthread-2.17.s  00002AAAB050A5E0  Unknown               Unknown  Unknown
libmpi.so.12       00002AAAAF7336A6  PMPIDI_CH3I_Progr     Unknown  Unknown
libmpi.so.12.0     00002AAAAFB67672  Unknown               Unknown  Unknown
libmpi.so.12.0     00002AAAAF8CAF50  Unknown               Unknown  Unknown
libmpi.so.12.0     00002AAAAF6E54D5  Unknown               Unknown  Unknown
libmpi.so.12       00002AAAAF6E79EF  MPI_Alltoall          Unknown  Unknown
libmpifort.so.12.  00002AAAAF29542E  mpi_alltoall__        Unknown  Unknown
abinit             00000000021C7C19  Unknown               Unknown  Unknown
abinit             000000000186F903  Unknown               Unknown  Unknown
abinit             000000000186B23A  Unknown               Unknown  Unknown
abinit             0000000001803345  Unknown               Unknown  Unknown
abinit             00000000017DA377  Unknown               Unknown  Unknown
abinit             000000000170FE15  Unknown               Unknown  Unknown
abinit             0000000001733FE2  Unknown               Unknown  Unknown
abinit             00000000010A3264  Unknown               Unknown  Unknown
abinit             00000000005E8234  Unknown               Unknown  Unknown
abinit             00000000005CC961  Unknown               Unknown  Unknown
abinit             000000000056FC9D  Unknown               Unknown  Unknown
abinit             000000000043CEA2  Unknown               Unknown  Unknown
abinit             0000000000415466  Unknown               Unknown  Unknown
abinit             000000000040A93F  Unknown               Unknown  Unknown
abinit             0000000000408C1E  Unknown               Unknown  Unknown
libc-2.17.so       00002AAAB265AC05  __libc_start_main     Unknown  Unknown
abinit             0000000000408B29  Unknown               Unknown  Unknown


So it seems that I have only improved things slightly.
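
(For reference, forrtl severe (174) SIGSEGV aborts from Intel Fortran are frequently caused by the per-process stack limit. Raising or removing it before the MPI launch is a common first check; this is a general suggestion and was not tested in this thread.)

Code: Select all

#!/bin/sh
# Common workaround attempt for Intel Fortran SIGSEGV crashes (untested here).
ulimit -s unlimited           # remove the shell stack limit for this job
export OMP_STACKSIZE=64M      # only relevant if OpenMP threading is enabled

mpirun.lsf ./abinit < testin_v1.files > testin_v1_bsub.${LSB_JOBID}.log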

Maybe someone has further suggestions.

Holger

ebousquet
Posts: 469
Joined: Tue Apr 19, 2011 11:13 am
Location: University of Liege, Belgium

Re: Abinit 863 parallel: test_v1 error

Post by ebousquet » Mon Apr 23, 2018 2:40 pm

Dear Holger,
hgibhar wrote:The further calculation seems to fail, however, as can also be seen at the end of the logfile. I get the following error message from the MPI system:

What do you mean by "the further calculation"? Is it another automatic test that is failing?
All the best,
Eric

hgibhar
Posts: 4
Joined: Mon Apr 16, 2018 4:10 pm

Re: Abinit 863 parallel: test_v1 error  [SOLVED]

Post by hgibhar » Mon Apr 23, 2018 5:42 pm

Dear Eric,

By "the further calculation" I meant that the run finds the correct symmetry but cannot compute the energy, e.g. from the end of my logfile:

Code: Select all

   Number of q-points for radial functions ffspl ..   3001
   Number of q-points for vlspl ...................   3001
   vloc is computed in Reciprocal Space
   model core charge treated in real-space
 
  XC functional for type 1 is 1
  Pseudo valence available: no
 
 wfconv:     8 bands initialized randomly with npw=    62, for ikpt=     1
_setup2: Arith. and geom. avg. npw (full set) are      62.750      62.748
 initro: for itypat=  1, take decay length=      0.8000,
 initro: indeed, coreel=     54.0000, nval= 16 and densty=  0.0000E+00.
 
================================================================================
 
 getcut: wavevector=  0.0000  0.0000  0.0000  ngfft=  20  20  20
         ecut(hartree)=      8.000   => boxcut(ratio)=   2.22144
 
 getcut : COMMENT -
  Note that boxcut > 2.2 ; recall that boxcut=Gcut(box)/Gcut(sphere) = 2
  is sufficient for exact treatment of convolution.
  Such a large boxcut is a waste : you could raise ecut
  e.g. ecut=    9.869604 Hartrees makes boxcut=2
 
Job  /opt/lsf/10.1/linux2.6-glibc2.3-x86_64/bin/intelmpi_wrapper ./abinit

TID   HOST_NAME   COMMAND_LINE            STATUS            TERMINATION_TIME
===== ========== ================  =======================  ===================
00000 gwdc061    ./abinit          Exit (174)               04/20/2018 16:34:58
00001 gwdc061    ./abinit          Exit (status unknown)                       
00002 gwdc061    ./abinit          Exit (status unknown)                       
00003 gwdc061    ./abinit          Exit (status unknown)                       
00004 gwdc061    ./abinit          Exit (status unknown)                       
00005 gwdc061    ./abinit          Exit (status unknown)                       
00006 gwdc061    ./abinit          Exit (status unknown)                       
00007 gwdc061    ./abinit          Exit (174)               04/20/2018 16:34:58


However, it seems that my problem has now been solved with the help of our cluster admins.

Here is my configuration file now:

Code: Select all

#destination of executables
prefix="/usr/users/hgibhar/abinit-8.6.3.parallel/myabinitparallel"

FC=mpiifort
F77=mpiifort
F90=mpiifort
CC=mpiicc
CXX=mpiicpc

FCFLAGS="-O2 -mkl -fp-model precise"
FFLAGS="-O2 -mkl -fp-model precise"
CFLAGS="-O2 -mkl -fp-model precise"
CXXFLAGS="-O2 -mkl -fp-model precise"

enable_mpi = 'yes'
enable_mpi_inplace='yes'
enable_zdot_bugfix='yes'
enable_avx_safe_mode='yes'
enable_fallbacks='yes'

#################
CPPFLAGS="-I${MKLROOT}/include -I${MKLROOT}/include/fftw"
LDFLAGS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"
##############

with_trio_flavor='netcdf'
with_linalg_flavor='mkl'
with_dft_flavor='atompaw+bigdft+libxc'
enable_gw_dpc='yes'
enable_maintainer_checks='no'
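
(One detail worth noting, as an observation rather than something stated in the thread: the CPPFLAGS/LDFLAGS lines above point ABINIT at MKL's FFTW3 wrapper headers and at the sequential MKL link line, so the loaded mkl module must export MKLROOT. A quick check, assuming the module name used below:)

Code: Select all

# verify that MKLROOT is set and that the FFTW3 wrapper header is present
module load intel/mkl/64
echo $MKLROOT
ls $MKLROOT/include/fftw/fftw3.h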


With this .ac file, with the following modules loaded,

Code: Select all

module load intel/compiler/64
module load intel/mpi/64
module load intel/mkl/64


and without

Code: Select all

enable_mpi_io = 'yes' 

I can now also run the test_v1 example in parallel with MPI, and it gives the same results as a sequential run.
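
(For completeness, a sketch of the full build sequence with such an .ac file; the out-of-source build directory and the --with-config-file option are assumptions based on the standard ABINIT 8 build procedure, not copied from this thread.)

Code: Select all

#!/bin/sh
# Hypothetical build sequence for the configuration file shown above.
module load intel/compiler/64 intel/mpi/64 intel/mkl/64

cd $HOME/abinit-8.6.3.parallel
mkdir -p build && cd build

# configure reads all options from the .ac file
../configure --with-config-file=$HOME/abinit-8.6.3.parallel/myconf.ac

make -j 8        # build abinit and the enabled fallbacks
make install     # install into the prefix set in the .ac file
make test_v1     # quick sanity check, as used throughout this thread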

The end of the logfile is:

Code: Select all

  XC functional for type 1 is 1
  Pseudo valence available: no
 
 wfconv:     8 bands initialized randomly with npw=    16, for ikpt=     1
 wfconv:     8 bands initialized randomly with npw=    16, for ikpt=     2
_setup2: Arith. and geom. avg. npw (full set) are      16.000      16.000
 initro: for itypat=  1, take decay length=      0.8000,
 initro: indeed, coreel=     54.0000, nval= 16 and densty=  0.0000E+00.
 
================================================================================
 
 getcut: wavevector=  0.0000  0.0000  0.0000  ngfft=  20  20  20
         ecut(hartree)=      8.000   => boxcut(ratio)=   2.22144
 
 getcut : COMMENT -
  Note that boxcut > 2.2 ; recall that boxcut=Gcut(box)/Gcut(sphere) = 2
  is sufficient for exact treatment of convolution.
  Such a large boxcut is a waste : you could raise ecut
  e.g. ecut=    9.869604 Hartrees makes boxcut=2
 
 
 ITER STEP NUMBER     1
 vtorho : nnsclo_now=2, note that nnsclo,dbl_nnsclo,istep=0 0 1
- Will use non-blocking ialltoall for MPI-FFT
 
--- !WARNING
src_file: vtorho.F90
src_line: 1585
message: |
    For k-point number 1,
    The minimal occupation factor is  2.000.
    An adequate monitoring of convergence requires it to be  at most 0.01_dp.
    Action: increase slightly the number of bands.
...
 
 Total charge density [el/Bohr^3]
      Maximum=    4.5633E-01  at reduced coord.    0.0500    0.8500    0.2000
      Minimum=    5.8762E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  1  -69.636426934385    -6.964E+01 1.025E-04 1.158E+01
 scprqt: <Vxc>= -3.7631157E-01 hartree
 
Simple mixing update:
  residual square of the potential :   5.39325405079883
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     2
 vtorho : nnsclo_now=2, note that nnsclo,dbl_nnsclo,istep=0 0 2
 
--- !WARNING
src_file: vtorho.F90
src_line: 1585
message: |
    For k-point number 1,
    The minimal occupation factor is  2.000.
    An adequate monitoring of convergence requires it to be  at most 0.01_dp.
    Action: increase slightly the number of bands.
...
 
 Total charge density [el/Bohr^3]
      Maximum=    4.8571E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8067E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  2  -69.657561749079    -2.113E-02 4.227E-04 2.727E-01
 scprqt: <Vxc>= -3.7516948E-01 hartree
 
 Pulay update with  1 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=  0.994      0.572E-02
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     3
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 3
 
--- !WARNING
src_file: vtorho.F90
src_line: 1585
message: |
    For k-point number 1,
    The minimal occupation factor is  2.000.
    An adequate monitoring of convergence requires it to be  at most 0.01_dp.
    Action: increase slightly the number of bands.
...
 
 Total charge density [el/Bohr^3]
      Maximum=    4.9082E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.9121E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  3  -69.662710097185    -5.148E-03 2.953E-04 1.753E-01
 scprqt: <Vxc>= -3.7710077E-01 hartree
 
 Pulay update with  2 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=  0.685      0.349     -0.345E-01
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     4
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 4
 
--- !WARNING
src_file: vtorho.F90
src_line: 1585
message: |
    For k-point number 1,
    The minimal occupation factor is  2.000.
    An adequate monitoring of convergence requires it to be  at most 0.01_dp.
    Action: increase slightly the number of bands.
...
 
 Total charge density [el/Bohr^3]
      Maximum=    4.9236E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8827E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  4  -69.663968164086    -1.258E-03 1.333E-05 9.465E-03
 scprqt: <Vxc>= -3.7667317E-01 hartree
 
 Pulay update with  3 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=   1.46     -0.412     -0.576E-01  0.886E-02
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     5
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 5
 
--- !WARNING
src_file: vtorho.F90
src_line: 1585
message: |
    For k-point number 1,
    The minimal occupation factor is  2.000.
    An adequate monitoring of convergence requires it to be  at most 0.01_dp.
    Action: increase slightly the number of bands.
...
 
 Total charge density [el/Bohr^3]
      Maximum=    4.9240E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8691E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  5  -69.664154395287    -1.862E-04 1.148E-05 9.470E-04
 scprqt: <Vxc>= -3.7664246E-01 hartree
 
 Pulay update with  4 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=   1.07     -0.618E-01 -0.313E-01  0.249E-01 -0.131E-02
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     6
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 6
 Total charge density [el/Bohr^3]
      Maximum=    4.9253E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8596E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  6  -69.664208611081    -5.422E-05 9.153E-07 1.030E-04
 scprqt: <Vxc>= -3.7661878E-01 hartree
 
 Pulay update with  5 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=   1.31     -0.261     -0.564E-01  0.322E-02  0.371E-02
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     7
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 7
 Total charge density [el/Bohr^3]
      Maximum=    4.9256E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8572E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  7  -69.664223749490    -1.514E-05 8.222E-07 1.073E-05
 scprqt: <Vxc>= -3.7661573E-01 hartree
 
 Pulay update with  6 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=   1.07      0.711E-01 -0.124     -0.153E-01  0.149E-02
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     8
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 8
 Total charge density [el/Bohr^3]
      Maximum=    4.9256E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8555E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  8  -69.664228584369    -4.835E-06 9.034E-08 1.594E-06
 scprqt: <Vxc>= -3.7660864E-01 hartree
 
 Pulay update with  7 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=  0.865      0.136     -0.766E-02  0.793E-02 -0.431E-02
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER     9
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 9
 Total charge density [el/Bohr^3]
      Maximum=    4.9256E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8557E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT  9  -69.664230057412    -1.473E-06 7.008E-08 6.763E-08
 scprqt: <Vxc>= -3.7660998E-01 hartree
 
 Pulay update with  7 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=   1.02     -0.481E-02 -0.406E-01  0.269E-01  0.637E-04
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER    10
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 10
 Total charge density [el/Bohr^3]
      Maximum=    4.9256E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8557E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT 10  -69.664230547932    -4.905E-07 9.008E-09 5.006E-08
 scprqt: <Vxc>= -3.7660959E-01 hartree
 
 Pulay update with  7 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=  0.883      0.150     -0.495E-01  0.240E-01 -0.158E-01
 scfcv: previous iteration took 00 [s]
 
 ITER STEP NUMBER    11
 vtorho : nnsclo_now=1, note that nnsclo,dbl_nnsclo,istep=0 0 11
 Total charge density [el/Bohr^3]
      Maximum=    4.9256E-01  at reduced coord.    0.0000    0.0000    0.0000
      Minimum=    5.8558E-03  at reduced coord.    0.5000    0.5000    0.0000
   Integrated=    1.6000E+01
 ETOT 11  -69.664230709087    -1.612E-07 6.983E-09 2.272E-09
 scprqt: <Vxc>= -3.7660987E-01 hartree
 
 Pulay update with  7 previous iterations:
 mixing of old trial potential : alpha(m:m-4)=   1.04     -0.192E-01 -0.125      0.774E-01  0.318E-01
 Computing residual forces using gaussian functions as atomic densities
 
 Cartesian components of stress tensor (hartree/bohr^3)
  sigma(1 1)=  8.13993553E-03  sigma(3 2)=  0.00000000E+00
  sigma(2 2)=  8.13993553E-03  sigma(3 1)=  0.00000000E+00
  sigma(3 3)=  8.13993553E-03  sigma(2 1)=  0.00000000E+00
 

--- !ScfConvergenceWarning
message: |
    nstep 11 was not enough SCF cycles to converge.
...
 
 scprqt:  WARNING -
  nstep=   11 was not enough SCF cycles to converge;
  maximum residual each band. tolwfr=   1.000E-14
  iband, isppol, individual band residuals (max over all k-points):
     1     1  2.229E-17
     2     1  8.188E-18
     3     1  7.116E-18
     4     1  8.328E-18
     5     1  6.595E-18
     6     1  5.285E-18
     7     1  2.659E-17
     8     1  6.983E-09
  maximum residual=  6.983E-09 exceeds tolwfr=  1.000E-14
 
 fftdatar_write: About to write data to: testin_v1_o_DEN with iomode IO_MODE_MPI
 IO operation completed. cpu_time:       0.0 [s], walltime:       0.0 [s]
================================================================================
 
 ----iterations are completed or convergence reached----
 
 
 === Gap info ===
Not enough states to calculate the band gap.
 Mean square residual over all n,k,spin=   4.3642E-10; max=  6.9827E-09
   0.2500  0.2500  0.2500    1  6.98267E-09 kpt; spin; max resid(k); each band:
  1.19E-17 8.19E-18 7.12E-18 8.33E-18 5.38E-18 4.68E-18 2.66E-17 6.98E-09
   0.2500  0.5000  0.5000    1  3.13998E-15 kpt; spin; max resid(k); each band:
  2.23E-17 5.58E-18 4.44E-18 5.57E-18 6.60E-18 5.28E-18 2.22E-17 3.14E-15
 
 outwf: write wavefunction to file testin_v1_o_WFK, with iomode 1
 outwf with iomode: 1, cpu_time:     0.01[s], walltime:     0.12 [s]
 prteigrs : about to open file testin_v1_o_EIG
 Fermi (or HOMO) energy (hartree) =   0.06514   Average Vxc (hartree)=  -0.37661
 Eigenvalues (hartree) for nkpt=   2  k points:
 kpt#   1, nband=  8, wtk=  0.25000, kpt=  0.2500  0.2500  0.2500 (reduced coord)
  -0.76380   -0.68036   -0.67729   -0.67729   -0.67116   -0.67116   -0.16331    0.06514
 kpt#   2, nband=  8, wtk=  0.75000, kpt=  0.2500  0.5000  0.5000 (reduced coord)
  -0.75775   -0.68699   -0.67328   -0.66901   -0.66186   -0.65563   -0.12682   -0.03042
 Fermi (or HOMO) energy (eV) =   1.77260   Average Vxc (eV)= -10.24808
 Eigenvalues (   eV  ) for nkpt=   2  k points:
 kpt#   1, nband=  8, wtk=  0.25000, kpt=  0.2500  0.2500  0.2500 (reduced coord)
 -20.78414  -18.51364  -18.43011  -18.43011  -18.26329  -18.26329   -4.44382    1.77260
 kpt#   2, nband=  8, wtk=  0.75000, kpt=  0.2500  0.5000  0.5000 (reduced coord)
 -20.61956  -18.69386  -18.32082  -18.20481  -18.01022  -17.84052   -3.45108   -0.82789
 Total charge density [el/Bohr^3]
      Maximum=    4.9256E-01  at reduced coord.    0.0000    0.0000    0.0000
 Next maximum=    4.6364E-01  at reduced coord.    0.0500    0.8500    0.2000
      Minimum=    5.8558E-03  at reduced coord.    0.5000    0.5000    0.0000
 Next minimum=    6.1049E-03  at reduced coord.    0.5000    0.5000    0.2000
   Integrated=    1.6000E+01
 
 Cartesian components of stress tensor (hartree/bohr^3)
  sigma(1 1)=  8.13993553E-03  sigma(3 2)=  0.00000000E+00
  sigma(2 2)=  8.13993553E-03  sigma(3 1)=  0.00000000E+00
  sigma(3 3)=  8.13993553E-03  sigma(2 1)=  0.00000000E+00
 
-Cartesian components of stress tensor (GPa)         [Pressure= -2.3949E+02 GPa]
- sigma(1 1)=  2.39485131E+02  sigma(3 2)=  0.00000000E+00
- sigma(2 2)=  2.39485131E+02  sigma(3 1)=  0.00000000E+00
- sigma(3 3)=  2.39485131E+02  sigma(2 1)=  0.00000000E+00
 
== END DATASET(S) ==============================================================
================================================================================
 
 -outvars: echo values of variables after computation  --------
 
 These variables are accessible in NetCDF format (testin_v1_o_OUT.nc)

-          iomode           1
            acell      1.0000000000E+01  1.0000000000E+01  1.0000000000E+01 Bohr
              amu      1.73040000E+02
        autoparal           1
           bandpp           2
          chkexit           2
     densfor_pred           6
           dielng      8.00000000E-01 Bohr
             ecut      8.00000000E+00 Hartree
           enunit           2
           etotal     -6.9664230709E+01
            fcart      0.0000000000E+00  0.0000000000E+00  0.0000000000E+00
-          fftalg         401
            intxc           1
              kpt      1.00000000E+00  1.00000000E+00  1.00000000E+00
                       1.00000000E+00  2.00000000E+00  2.00000000E+00
           kptnrm      4.00000000E+00
           kptopt           0
P           mkmem           2
            natom           1
            nband           8
            ngfft          20      20      20
             nkpt           2
            nline           3
-          npband           4
-           npfft           4
            nstep          11
             nsym          24
           ntypat           1
              occ      2.000000  2.000000  2.000000  2.000000  2.000000  2.000000
                       2.000000  2.000000
           occopt           0
           ortalg          -2
        paral_kgb           1
           prtvol           1
            rprim      0.0000000000E+00  5.0000000000E-01  5.0000000000E-01
                       5.0000000000E-01  0.0000000000E+00  5.0000000000E-01
                       5.0000000000E-01  5.0000000000E-01  0.0000000000E+00
          spgroup         216
           strten      8.1399355275E-03  8.1399355275E-03  8.1399355275E-03
                       0.0000000000E+00  0.0000000000E+00  0.0000000000E+00
           symrel      1  0  0   0  1  0   0  0  1       0  1 -1   1  0 -1   0  0 -1
                       0 -1  1   0 -1  0   1 -1  0      -1  0  0  -1  0  1  -1  1  0
                       0  1  0   0  0  1   1  0  0       1  0 -1   0  0 -1   0  1 -1
                       0 -1  0   1 -1  0   0 -1  1      -1  0  1  -1  1  0  -1  0  0
                       0  0  1   1  0  0   0  1  0       0  0 -1   0  1 -1   1  0 -1
                       1 -1  0   0 -1  1   0 -1  0      -1  1  0  -1  0  0  -1  0  1
                       1  0 -1   0  1 -1   0  0 -1       0  1  0   1  0  0   0  0  1
                      -1  0  1  -1  0  0  -1  1  0       0 -1  0   0 -1  1   1 -1  0
                      -1  1  0  -1  0  1  -1  0  0       1 -1  0   0 -1  0   0 -1  1
                       0  0 -1   1  0 -1   0  1 -1       0  0  1   0  1  0   1  0  0
                       0 -1  1   1 -1  0   0 -1  0      -1  0  0  -1  1  0  -1  0  1
                       1  0  0   0  0  1   0  1  0       0  1 -1   0  0 -1   1  0 -1
           tolwfr      1.00000000E-14
            typat      1
         wfoptalg         114
              wtk        0.25000    0.75000
            znucl       70.00000
 
================================================================================
 

================================================================================

 Suggested references for the acknowledgment of ABINIT usage.
.
.
.
 
 Calculation completed.
.Delivered   1 WARNINGs and   2 COMMENTs to log file.

--- !FinalSummary
program: abinit
version: 8.6.3
start_datetime: Mon Apr 23 17:25:25 2018
end_datetime: Mon Apr 23 17:25:29 2018
overall_cpu_time:          54.5
overall_wall_time:          70.0
exit_requested_by_user: no
timelimit: 0
pseudos:
    Yb  : 56bf3aea2f5a48028cfd174a5aa25641
usepaw: 0
mpi_procs: 16
omp_threads: 1
num_warnings: 1
num_comments: 2
...
 Memory Consumption Report:
   Tot. No. of Allocations             :  0
   Tot. No. of Deallocations           :  0
   Remaining Memory (B)                :  0
   Memory occupation:
     Peak Value (MB)                   :  0
     for the array                     : null
     in the routine                    : null
 Max No. of dictionaries used          :  807 #( 797 still in use)
 Number of dictionary folders allocated:  1
Job  /opt/lsf/10.1/linux2.6-glibc2.3-x86_64/bin/intelmpi_wrapper ./abinit

TID   HOST_NAME   COMMAND_LINE            STATUS            TERMINATION_TIME
===== ========== ================  =======================  ===================
00000 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00001 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00002 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00003 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00004 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00005 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00006 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00007 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00008 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00009 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00010 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00011 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00012 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00013 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00014 dmp056     ./abinit          Done                     04/23/2018 17:25:30
00015 dmp056     ./abinit          Done                     04/23/2018 17:25:30


Let me thank you, Eric, for your help. I am now marking this thread as solved.
Best regards
Holger
