Is there a handy reference for predicting how many bands and k-points are needed for a GW calculation to converge? For example, in my Ag(111) calculations, I'm doing a preliminary GW calculation of the commensurate unit cell (3 layers, angdeg 90 90 120, xred 0 0 0 1/3 2/3 1/3 2/3 1/3 2/3). To get the GGA calculation to converge, I needed an ecut of 26 Ha and an ngkpt 12 12 12 grid, resulting in 193 k-points. Is there a way to estimate a "reasonable" starting point for nbandkss, ecuteps, ecutwfn and ecutsigx? In the silicon tutorial, the ecuts are around 60-75% of the ecut used for the LDA calculation, and nband is ~40-50 times the number of occupied bands. Are these estimates also OK starting points for metals?
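For what it's worth, here is what those silicon-tutorial rules of thumb give when applied to the numbers above. Everything in this fragment is an assumption to be checked by an explicit convergence study, not a recommendation: 60-75% of ecut 26 Ha is roughly 16-20 Ha, and the band count depends on the valence of the Ag pseudopotential (with the common 11-electron Ag potential, 3 atoms give 33 electrons, i.e. about 17 filled bands).

```
# Hypothetical starting values only -- convergence must still be checked.
# Cutoffs taken as ~60-75% of the ground-state ecut (26 Ha):
ecutwfn   18       # cutoff for the wavefunctions entering the GW part
ecuteps   18       # cutoff for the dielectric matrix (often converges at lower values)
ecutsigx  26       # exchange part of sigma; often kept near the full ecut
# nband ~ 40-50 x number of occupied bands; assuming an 11-electron Ag
# pseudopotential (~17 filled bands for 3 atoms) -> roughly 700-850 bands
nband     800
```

Whether the 40-50x factor carries over to a metal is exactly the open question here; for metals the k-point sampling (and smearing) usually needs separate convergence testing on top of this.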
On a related note regarding scaling: does gwpara 2 offer a significant speedup over parallelisation over k-points for the screening and self-energy calculations?
Mnemonics: GW PARAllelization level
Characteristic: GW, PARALLEL
Variable type: integer
Default is 1
Only relevant if optdriver=3 or 4, that is, screening or sigma calculations.
gwpara is used to choose between the two different parallelization levels available in the GW code. The available options are:
* =1 => parallelisation on k points
* =2 => parallelisation on bands
In the present status of the code, only the parallelization over bands (gwpara=2) reduces the memory allocated by each processor.
Using gwpara=1, by contrast, requires the same amount of memory as a sequential run, irrespective of the number of CPUs used.
A reduction of the required memory can also be achieved by opting for an out-of-core solution (mkmem=0, only coded for optdriver=3), at the price of a drastic worsening of performance.
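As a concrete illustration of the excerpt above, a band-parallel self-energy run only requires setting the keyword in the optdriver 4 dataset. The variable names are standard ABINIT, but the fragment is a sketch, not a complete input:

```
# Sketch of a sigma dataset using band parallelization.
optdriver  4      # self-energy (sigma) calculation
gwpara     2      # parallelize over bands -> memory is distributed across CPUs
# gwpara 1 would parallelize over k points instead, but each processor
# then holds the same data as a sequential run, so memory does not shrink.
```

The same gwpara setting applies to the screening step (optdriver 3).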