
dftbplus's Introduction

DFTB+: general package for performing fast atomistic calculations

LGPL v3.0

DFTB+ is a software package for carrying out fast quantum mechanical atomistic calculations based on the Density Functional Tight Binding method. The most recent features are described in the (open access) DFTB+ paper.

DFTB+ website

DFTB+ can be either used as a standalone program or integrated into other software packages as a library.

Installation

Obtaining via Conda

The preferred way of obtaining DFTB+ is to install it via the conda package management framework using Miniconda or Anaconda. Make sure to add/enable the conda-forge channel in order to be able to access DFTB+, and ensure that the conda-forge channel is the first repository to be searched for packages. (Please consult the conda documentation for how to set up your conda environment.)
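For example, enabling conda-forge with strict priority can be done with the standard conda configuration commands (a sketch of a typical setup; consult the conda documentation for your environment):

```sh
# Add conda-forge and make it the highest-priority channel
conda config --add channels conda-forge
conda config --set channel_priority strict
```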

We recommend the use of the mamba installer, as we have experienced dependency resolution problems with the original conda installer in the past:

conda install -n base mamba

We provide several build variants; choose the one suiting your needs. For example, issue:

mamba install 'dftbplus=*=nompi_*'

or:

mamba install 'dftbplus=*=mpi_mpich_*'

or:

mamba install 'dftbplus=*=mpi_openmpi_*'

to get the latest stable release of DFTB+ as, respectively, a serial (OpenMP-threaded) build or an MPI-parallelized build using either the MPICH or the Open MPI framework.

Downloading the binary

A non-MPI (OpenMP-threaded) distribution of the latest stable release can be found on the stable release page.

Building from source

Note: This section describes building with default settings (offering only a subset of all possible features in DFTB+) in a typical Linux environment. For more detailed information on build customization and the build process, consult the detailed building instructions in INSTALL.rst.

Download the source code from the stable release page.

You need CMake (>= 3.16) to build DFTB+. If your environment offers no CMake or only an older version, you can easily install the latest CMake via Python's pip command:

pip install cmake

Invoke CMake, passing your compilers as environment variables (FC and CC), and the installation location and the build directory (_build) as options:

FC=gfortran CC=gcc cmake -DCMAKE_INSTALL_PREFIX=$HOME/opt/dftb+ -B _build .

If the configuration was successful, start the build with:

cmake --build _build -- -j

After a successful build, you should test the code. First download the files needed for the tests:

./utils/get_opt_externals slakos
./utils/get_opt_externals gbsa

or:

./utils/get_opt_externals ALL

and then run the tests with:

pushd _build; ctest -j; popd

If the tests were successful, install the package with:

cmake --install _build

For further details see the detailed building instructions.

Parameterisations

In order to carry out calculations with DFTB+, you need suitable parameterisations (a.k.a. Slater-Koster files). You can download them from dftb.org.

Documentation

Consult the following resources for documentation:

Citing

When publishing results obtained with DFTB+, please cite the following works:

Contributing

New features, bug fixes, documentation, tutorial examples and code testing are welcome in the DFTB+ developer community!

The project is hosted on GitHub. Please check CONTRIBUTING.rst and the DFTB+ developers guide for guidelines.

We are looking forward to your pull request!

License

DFTB+ is released under the GNU Lesser General Public License. See the included LICENSE file for the detailed licensing conditions.


dftbplus's Issues

API cannot be used twice

Description

It appears that dftbp_final does not clean up everything. If you try to use the API again after a call to dftbp_final, the program will abort during the second call to dftb_process_input as it tries to allocate memory that's already allocated.

Steps to reproduce

  1. Modify the test_initc tester to run twice:
diff --git a/test/api/mm/testers/test_fileinitc.c b/test/api/mm/testers/test_fileinitc.c
index e222e37..c999395 100644
--- a/test/api/mm/testers/test_fileinitc.c
+++ b/test/api/mm/testers/test_fileinitc.c
@@ -40,6 +40,8 @@ int main()
   double *gradients, *gross_charges;
   int natom;
 
+  for(int ii=0; ii<2; ii++) {
+
   /* Initialise DFTB+ input tree from input in external file */
   dftbp_init(&calculator, NULL);
   dftbp_get_input_from_file(&calculator, "dftb_in.hsd", &input);
@@ -71,6 +73,7 @@ int main()
   /*  clean up */
   dftbp_final(&calculator);
 
+  }
 
   /* Save some data for the internal test system */
   dftbp_write_autotest_tag(NR_OF_ATOMS, 0, mermin_energy, gradients, gross_charges, NULL);

Notice that dftbp_final is correctly called before the second call to dftbp_init.

  2. Do make test_api.

test_initc will report Incomplete.

  3. Look at _build/test/api/mm/testcases/fileinitc/stderror.log

You will find the following error message, written right before the program aborted.

At line 1086 of file initprogram.f90
Fortran runtime error: Attempting to allocate already allocated variable 'kpoint'
Command exited with non-zero status 2

Communicating errors through the API

Currently, when DFTB+ encounters a problem with the user's input, it logs an error message and aborts the program (sometimes even with an exit code of 0!).

This makes DFTB+ difficult to use as a library, as the code using it may have unfinished work or cleanup that it needs to perform. It may also want to display the error message in some fashion (e.g. on its own STDERR, or in a popup notification), and perhaps even continue running. I suspect that most larger applications (e.g. anything with a UI) will find this to be a dealbreaker.

Are there any plans to refactor DFTB+ to propagate errors back up to the caller?

(granted, this could be very laborious and could require touching a large portion of the codebase, so I can also understand not wanting to do so...)

Compilation

Dear developers,

I am a user of dftbplus. I came across this GitHub page while trying to install a new version of dftb+. However, following the installation instructions, I did not manage to get through.

At the compilation stage, after copying sys/make.x86_64-linux-gnu, I invoked the make command and got the following error message:

[qzhu@master dftbplus]$ make
mkdir -p /home/qzhu/QZ/dftbplus/_build/external/fsockets
make -C /home/qzhu/QZ/dftbplus/_build/external/fsockets
-f /home/qzhu/QZ/dftbplus/external/fsockets/make.dpbuild
ROOT=/home/qzhu/QZ/dftbplus BUILDROOT=/home/qzhu/QZ/dftbplus/_build
make[1]: Entering directory `/home/qzhu/QZ/dftbplus/_build/external/fsockets'
gcc -c -o sockets.o /home/qzhu/QZ/dftbplus/external/fsockets/sockets.c
gfortran -O2 -funroll-all-loops -fopenmp -o fsockets.o -c /home/qzhu/QZ/dftbplus/external/fsockets/fsockets.f90
/home/qzhu/QZ/dftbplus/external/fsockets/fsockets.f90:212.43:

call readbuffer_socket_c(sockfd, c_loc(fdata), 8_c_int * size(fdata, kind=c
                                       1

Error: Assumed-shape array 'fdata' at (1) cannot be an argument to the procedure 'c_loc' because it is not C interoperable
/home/qzhu/QZ/dftbplus/external/fsockets/fsockets.f90:166.44:

call writebuffer_socket_c(sockfd, c_loc(fdata), 8_c_int * size(fdata, kind=
                                        1

Error: Assumed-shape array 'fdata' at (1) cannot be an argument to the procedure 'c_loc' because it is not C interoperable
make[1]: *** [fsockets.o] Error 1
make[1]: Leaving directory `/home/qzhu/QZ/dftbplus/_build/external/fsockets'
make: *** [external_fsockets] Error 2

Can anyone tell me how to overcome this issue with fsockets?

best regards,
Qiang

Variation in charge during mixer cycle

Output charges can drift with some of the mixers during calculations.

May require either rescaling of charges or regularization of mixer linear algebra to fix.
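The first suggested remedy can be sketched in a few lines of Python (purely illustrative; this is not the DFTB+ mixer code):

```python
def rescale_charges(charges, total):
    """Rescale mixed output charges so they sum to the expected total,
    removing any drift introduced by the mixer."""
    current = sum(charges)
    if current == 0:
        raise ValueError("cannot rescale charges summing to zero")
    factor = total / current
    return [q * factor for q in charges]

# A drifted charge set summing to 4.02 instead of the expected 4.0:
fixed = rescale_charges([1.005, 2.010, 1.005], 4.0)
```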

dftb+ 18.1 mpi compilation error (scc.F90)

Dear all,
I am compiling DFTB+ 18.1 with MPI, Intel Fortran and the MKL libraries. Every time, I get stuck while compiling scc.F90 (in prog/dftb+/lib_dftb/). Is there something in this code that is incompatible with what I am trying? Any help would be much appreciated.

mpiifort -O2 -qopenmp -ip -standard-semantics -heap-arrays 10 -I/home/agaur/Software/DFTBP18/_build/external/xmlf90 -I/home/agaur/Software/DFTBP18/_build/external/dftd3 -I/home/agaur/Software/DFTBP18/_build/external/mpifx -I/home/agaur/Software/DFTBP18/_build/external/scalapackfx -o scc.o -c scc.f90
scc.f90(1103): catastrophic error: Internal compiler error: internal abort Please report this error along with the circumstances in which it occurred in a Software Problem Report. Note: File and line given may not be explicit cause of this error.
compilation aborted for scc.f90 (code 1)
make[1]: *** [scc.o] Error 1

regards

Making 'install_api' useful for automated build systems

At present, the files produced by make install_api are not useful for a code that wants to link to the C API of DFTB+ from an automated build system. There are several issues:

  • The C header file is not installed.
  • When DFTB+ is built, so is libxmlf90, but this library is not installed (and I don't see any option to build against a preinstalled version).
  • libdftb+ has numerous static dependencies that may depend on your system, compilation options and choice of Fortran compiler, such as libgfortran. An external build system attempting to link to libdftb+ has no way to discover this list of dependencies and emit the appropriate -l flags.
  • It would be nice to expose API_VERSION in some way.

For the first two bullets, it would be nice to have these missing files installed. For the latter two, generating a pkg-config file would be a huge help. Basically, the ideal circumstance would be for make install_api to automatically generate a pkg-config file like this:

$INSTALL_PREFIX/lib/pkgconfig/libdftb+.pc

prefix=/home/lampam/.local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include
api_version=0.2.0

Name: DFTB+
Description: C library for DFTB+, a fast and efficient versatile quantum mechanical simulation software package.
Version: 19.1
Requires.private:
Libs: -L${libdir} -ldftb+ -lxmlf90
Libs.private: -llapack -lblas  -lgfortran -lm -lgomp
Cflags: -I${includedir}

so that:

  • pkg-config --cflags --libs --static libdftb+ produces all of the flags necessary to build a program like test_fileinitc
  • pkg-config --variable api_version libdftb+ prints the API version.

Does this sound reasonable?


As an aside, it is bothersome to see the installer place such a large number of .mod files in include/. I am not familiar with these files or with how module inclusion works in Fortran, but is it possible that these could be cleanly collected into one subdirectory?

'make test' reports failure; everything is "Not run"

Attempted on both github master (be6fde2) and the 18.2 release.


On my machine, make test emits a whole bunch of lines with the word TODO, followed by a summary which appears to suggest that no tests were run, and a nonzero exit code. Is the testing code implemented? I'm not sure what to make of this...

Similar results occur for test_api on master.


$ utils/get_opt_externals
$ make
$ make test
mkdir -p /home/lampam/Downloads/dftbplus-18.2/_build
[ -r /home/lampam/Downloads/dftbplus-18.2/RELEASE ] && cp -a /home/lampam/Downloads/dftbplus-18.2/RELEASE /home/lampam/Downloads/dftbplus-18.2/_build/RELEASE \
        || /home/lampam/Downloads/dftbplus-18.2/utils/build/update_release /home/lampam/Downloads/dftbplus-18.2/_build/RELEASE \
        || echo "(UNKNOWN RELEASE)" > /home/lampam/Downloads/dftbplus-18.2/_build/RELEASE
mkdir -p /home/lampam/Downloads/dftbplus-18.2/_build/external/xmlf90
make -C /home/lampam/Downloads/dftbplus-18.2/_build/external/xmlf90 \
          -f /home/lampam/Downloads/dftbplus-18.2/external/xmlf90/make.dpbuild \
          ROOT=/home/lampam/Downloads/dftbplus-18.2 BUILDROOT=/home/lampam/Downloads/dftbplus-18.2/_build
make[1]: Entering directory '/home/lampam/Downloads/dftbplus-18.2/_build/external/xmlf90'
touch -r libxmlf90.a /home/lampam/Downloads/dftbplus-18.2/_build/external/xmlf90/BUILD_TIMESTAMP
make[1]: Leaving directory '/home/lampam/Downloads/dftbplus-18.2/_build/external/xmlf90'
mkdir -p /home/lampam/Downloads/dftbplus-18.2/_build/external/fsockets
make -C /home/lampam/Downloads/dftbplus-18.2/_build/external/fsockets \
          -f /home/lampam/Downloads/dftbplus-18.2/external/fsockets/make.dpbuild \
          ROOT=/home/lampam/Downloads/dftbplus-18.2 BUILDROOT=/home/lampam/Downloads/dftbplus-18.2/_build
make[1]: Entering directory '/home/lampam/Downloads/dftbplus-18.2/_build/external/fsockets'
touch -r libfsockets.a /home/lampam/Downloads/dftbplus-18.2/_build/external/fsockets/BUILD_TIMESTAMP
make[1]: Leaving directory '/home/lampam/Downloads/dftbplus-18.2/_build/external/fsockets'
mkdir -p /home/lampam/Downloads/dftbplus-18.2/_build/prog/dftb+
make -C /home/lampam/Downloads/dftbplus-18.2/_build/prog/dftb+ -f /home/lampam/Downloads/dftbplus-18.2/prog/dftb+/make.build \
    ROOT=/home/lampam/Downloads/dftbplus-18.2 BUILDROOT=/home/lampam/Downloads/dftbplus-18.2/_build
make[1]: Entering directory '/home/lampam/Downloads/dftbplus-18.2/_build/prog/dftb+'
touch -r /home/lampam/Downloads/dftbplus-18.2/_build/external/xmlf90/BUILD_TIMESTAMP _extlib_xmlf90
touch -r /home/lampam/Downloads/dftbplus-18.2/_build/external/fsockets/BUILD_TIMESTAMP _extlib_fsockets
make[1]: Leaving directory '/home/lampam/Downloads/dftbplus-18.2/_build/prog/dftb+'
make -C /home/lampam/Downloads/dftbplus-18.2/_build/prog/dftb+ \
    -f /home/lampam/Downloads/dftbplus-18.2/prog/dftb+/make.build \
    ROOT=/home/lampam/Downloads/dftbplus-18.2 BUILDROOT=/home/lampam/Downloads/dftbplus-18.2/_build test
make[1]: Entering directory '/home/lampam/Downloads/dftbplus-18.2/_build/prog/dftb+'
derivatives/Si_2_Delta:  preparing .. running .. comparing .. TODO.
legacy/Si2_oldSKinterp:  preparing .. running .. comparing .. TODO.
legacy/Si2_polyRep:      preparing .. running .. comparing .. TODO.
non-scc/Si_2:    preparing .. running .. comparing .. TODO.
non-scc/Si_2_independentk:       preparing .. running .. comparing .. TODO.
scc/H2O2_3rdfull-damp:   preparing .. running .. comparing .. TODO.
scc/H3:  preparing .. running .. comparing .. TODO.
spin/H2:         preparing .. running .. comparing .. TODO.
analysis/C2H4_localise:  preparing .. running .. comparing .. TODO.
analysis/graphene_localise:      preparing .. running .. comparing .. TODO.
dftb+u/CH3:      preparing .. running .. comparing .. TODO.
dispersion/2H2O:         preparing .. running .. comparing .. TODO.
dispersion/2H2O_uff:     preparing .. running .. comparing .. TODO.
geoopt/H2O-nonscc:       preparing .. running .. comparing .. TODO.
non-scc/CH4:     preparing .. running .. comparing .. TODO.
non-scc/decapentaene:    preparing .. running .. comparing .. TODO.
scc/2H2O-3rdorder:       preparing .. running .. comparing .. TODO.
scc/2H2O-3rdorder_read:  preparing .. running .. comparing .. TODO.
scc/C2H6_3rdfull:        preparing .. running .. comparing .. TODO.
scc/C2H6_3rdfull-damp:   preparing .. running .. comparing .. TODO.
scc/H2O2_3rdfull:        preparing .. running .. comparing .. TODO.
scc/H2O2-3rdfull-ldep:   preparing .. running .. comparing .. TODO.
scc/H2O-extchrg:         preparing .. running .. comparing .. TODO.
scc/H2O-extchrg-blur:    preparing .. running .. comparing .. TODO.
scc/H2O-extchrg-direct:  preparing .. running .. comparing .. TODO.
scc/H2O-extchrg-periodic:        preparing .. running .. comparing .. TODO.
scc/H2O-extfield:        preparing .. running .. comparing .. TODO.
spin/H2O-periodic:       preparing .. running .. comparing .. TODO.
spin/OH_commonFermi:     preparing .. running .. comparing .. TODO.
derivatives/Si_2_Richardson:     preparing .. running .. comparing .. TODO.

                            (snip)

md/SiC64-xlbomdfast:     preparing .. running .. comparing .. TODO.
timedep/C66O10N4H44_OscWindow:   preparing .. running .. comparing .. TODO.
timedep/C60_OscWindow:   preparing .. running .. comparing .. TODO.
timedep/C60_EandOsc:     preparing .. running .. comparing .. TODO.
non-scc/Si_216:  preparing .. running .. comparing .. TODO.
md/ptcda-xlbomd-ldep:    preparing .. running .. comparing .. TODO.
geoopt/Vsi+O_lbfgs:      preparing .. running .. comparing .. TODO.
geoopt/Vsi+O:    preparing .. running .. comparing .. TODO.
==============================================================================
TEST SUMMARY
------------------------------------------------------------------------------
Match:
    None

Not run:
    derivatives/Si_2_Delta              analysis/Fe2_antiferromagnetic
    legacy/Si2_oldSKinterp              non-scc/10-0Ctube_Efield
    legacy/Si2_polyRep                  non-scc/10-10Ctube
    non-scc/Si_2                        analysis/10-0Ctube_ESP
    non-scc/Si_2_independentk           md/H2O-extfield
    scc/H2O2_3rdfull-damp               non-scc/10-0Ctube
    scc/H3                              spin/Fe4_noncolinear
    spin/H2                             non-scc/HBDI-neutral
    analysis/C2H4_localise              non-scc/HBDI-cationic
    analysis/graphene_localise          scc/C60
    dftb+u/CH3                          analysis/C60_ESP
    dispersion/2H2O                     spin/H2O
    dispersion/2H2O_uff                 timedep/cyclopentadienyl
    geoopt/H2O-nonscc                   timedep/C6H6-Sym
    non-scc/CH4                         timedep/C6H6-Sym_Arnoldi
    non-scc/decapentaene                timedep/OCH2-S1-Opt
    scc/2H2O-3rdorder                   h-bonds/H5_defaults
    scc/2H2O-3rdorder_read              h-bonds/H5_ONS_forces
    scc/C2H6_3rdfull                    h-bonds/H5_ONS_forces_eq
    scc/C2H6_3rdfull-damp               md/Si_8_restart
    scc/H2O2_3rdfull                    spin/GaAs_2
    scc/H2O2-3rdfull-ldep               spin/GaAs-spin-ext
    scc/H2O-extchrg                     spinorbit/As4S4
    scc/H2O-extchrg-blur                dispersion/DNA-damped
    scc/H2O-extchrg-direct              md/ice_Ic
    scc/H2O-extchrg-periodic            derivatives/C6H6_scc
    scc/H2O-extfield                    dftb+u/GaAs_2
    spin/H2O-periodic                   geoopt/diamond_presure
    spin/OH_commonFermi                 scc/C60_Fermi
    derivatives/Si_2_Richardson         scc/10-0Ctube-extfield
    geoopt/H2O-constr                   scc/SiC_64
    legacy/SiC_polyRep                  scc/SiC_32-extchrg-blur
    md/SiH-surface_restart              md/H3
    scc/C4H8_3rdfull                    sockets/diamond
    scc/C4H8_3rdfull-damp               sockets/diamond_exit
    scc/CH2_n_3rdfull                   sockets/H2O
    scc/CH2_n_3rdfull_damp              sockets/H2O_cluster
    scc/GaAs_2_restart                  spinorbit/GaAs_2
    spin/Fe4_commonFermi                geoopt/Si_2_latconst
    timedep/2CH3-Temp                   md/SiH-surface
    timedep/2CH3-Triplet-Temp           geoopt/Si_2_lattice_lbfgs
    timedep/NO                          geoopt/Si_2_lattice
    timedep/propadiene_OscWindow        md/Si_8_NHC
    analysis/H2O_ESP                    timedep/C66O10N4H44_Ewindow
    analysis/H2O_mdESP                  spin/Fe4_Fermi
    geoopt/H2O_lbfgs                    non-scc/Si41C23N35
    geoopt/H2O                          geoopt/diamond_isotropic
    md/Si_8_NHC_restart                 spinorbit/EuN
    non-scc/GaAs_2                      geoopt/Si_6
    scc/GaAs_2                          md/ptcda-xlbomdfast
    scc/GaAs_2_customU                  spinorbit/EuN_customU
    spinorbit/Fe2_dual                  dispersion/DNA
    spinorbit/Fe2_dual_field            spinorbit/Si2_dual
    spinorbit/Si_2                      md/Si_8
    dispersion/DNA_uff                  md/Si_8-thermostat2
    timedep/C4H6-S1-Force               geoopt/GaAs_8_latconst_lbfgs
    timedep/C4H6-Singlet                geoopt/GaAs_8_latconst
    timedep/C4H6-Singlet_wfn            analysis/Ga4As4_ESP
    timedep/C4H6-T1-Force               md/Si_8-thermostat
    timedep/C4H6-Triplet                md/ptcda-xlbomdfast-ldep
    dftb+u/Fe4                          md/Si_8-tempprofile
    dftb+u/Fe4_read                     geoopt/Vsi+O-nonscc
    geoopt/Cchain_lattice_lbfgs         non-scc/Si_384
    geoopt/Cchain_lattice               md/DNA
    scc/H2O+CH3COOH-3rdorder            md/DNA_Berendsen2
    spinorbit/Fe2                       md/ptcda-xlbomd
    md/SiC64-xlbomdfast-T0              non-scc/Si_216
    md/SiC64-xlbomdfast                 md/ptcda-xlbomd-ldep
    timedep/C66O10N4H44_OscWindow       geoopt/Vsi+O_lbfgs
    timedep/C60_OscWindow               geoopt/Vsi+O
    timedep/C60_EandOsc
------------------------------------------------------------------------------
Status: FAIL
------------------------------------------------------------------------------
Details in:
    _autotest/stderror.log
    _autotest/tagdiff.log
==============================================================================
make[1]: *** [/home/lampam/Downloads/dftbplus-18.2/prog/dftb+/make.build:76: test] Error 1
make[1]: Leaving directory '/home/lampam/Downloads/dftbplus-18.2/_build/prog/dftb+'
make: *** [makefile:92: test_dftb+] Error 2

The log files mentioned, _build/prog/dftb+/_autotest/{stderror,tagdiff}.log, are both empty.

Severe problems with MPI version of dftb+ 18.2

I do have a particular problem which I hope can be solved.

I am using the latest MPI version of dftb+ 18.2 (downloaded on 12/03/2019), compiled with the Intel 2017 package (ifort, impi, mkl). I am also using the dftd3 module.

My simulation consists of a series of single point calculations on slightly different geometries. To speed it up, I would like to read at step N the charges.bin file produced at step N-1.

I tried many setups, but in all cases I got one of two kinds of error:

  1. the code stops because the charges calculated at the N-1 step are not correct
    -> External file of charges has a total charge: 6408.033605, instead of 6408.000000

  2. the code hangs (so it is still running) but it does not write anything any more.
    It stops writing in the middle of the SCC cycle!!!

As additional information: if I run the same identical series of calculations (except for the D3 dispersion, which was not implemented in versions < 1.3) with an old MPI version (mpi-r4473, which is the MPI version of dftb+ 1.2), I never had such problems!

Thank you for the help!

Mixer chain

It would be good to have a chain of mixers which can activate different mixers during the SCC cycle based on the convergence (e.g. starting with a simple mixer with a small mixing parameter at the beginning and changing to Broyden or DIIS when we are around the right solution).
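A minimal sketch of the proposed selection logic (hypothetical, not DFTB+ code): walk a chain of (threshold, mixer) pairs, tightest threshold first, and activate the first mixer whose threshold the SCC error has dropped below, falling back to the opening mixer otherwise:

```python
def select_mixer(scc_error, chain):
    """chain: list of (threshold, mixer_name), tightest threshold first;
    returns the first mixer whose threshold exceeds the current error."""
    for threshold, name in chain:
        if scc_error < threshold:
            return name
    return chain[-1][1]

# Start with damped simple mixing, hand over to Broyden near convergence:
CHAIN = [(1e-2, "broyden"), (float("inf"), "simple")]
```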

Excitation energies during MD at T > 0

We need an option to suppress excited state force evaluation when only the ground state is driving the dynamics. This is especially relevant for MD at finite electronic temperature, where the excitation energy (or energies) is being tracked but fractional occupations are needed for the ground state convergence.

4 processors with ifort17/mkl

Both spinorbit/Fe2 and md/ptcda-xlbomd-ldep seem to have intermittent non-deterministic problems when run with ifort/mkl MPI on 4 processors.

Waveplot not implemented with MPI

Hello,

I am having trouble plotting cube files using waveplot. It appears that it has not been parallelised: when submitting the job with N cores, it runs the job N times.

Thanks

Generalize Coulomb Matrix Setup / Ewald summation

Right now the Coulomb matrix setup seems to support only 1/r terms. Screened Coulomb interactions (like erf(r)/r or Klopman–Ohno) could also be handled by the current implementation, without the need to reproduce existing code as done in #314.

Possible Solution

  1. A Coulomb container similar to the one used for the repulsion energy might work here and would allow to generalize the Coulomb matrix setup.
  2. Alternatively, one could use the existing routines, remove the 1/r contribution afterwards and add the screened Coulomb interaction back. But this seems somewhat impractical at best.
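The second option can be made concrete for a single pair (an illustration of the idea only, not DFTB+ code): subtract the bare 1/r term from an energy assembled by the existing routines and add the screened erf(gamma*r)/r interaction back:

```python
import math

def swap_bare_for_screened(e_pair, q1, q2, r, gamma):
    """e_pair was computed with the bare q1*q2/r term; replace that
    contribution by the screened interaction q1*q2*erf(gamma*r)/r."""
    bare = q1 * q2 / r
    screened = q1 * q2 * math.erf(gamma * r) / r
    return e_pair - bare + screened
```

For large gamma*r the error function approaches 1 and the correction vanishes, so the screened result smoothly recovers the bare Coulomb limit.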

MPI externals missing from release

Hi, the release tar.gz unfortunately does not include mpifx and scalapackfx but only an empty directory for each, probably because you just linked the external repos in your git tree.
This leads to a compile error when setting WITH_MPI := 1, because the source files for those external dependencies are not found.
The get_opt_externals script does not seem to take care of this either, nor is it documented anywhere that one has to download those sources manually before the MPI version can be built.
Is this intentional?

Question about external point charges

Hi,
this is not a bug report but only a question about how DFTB+ works.

What I'd like to know is what contributions the energy and gradients contain for a calculation with external point charges.

What I thought until now (after comparison with Gaussian calculations) is:

  • The energy contains:

    • of course everything that has to do with the atoms
    • the interactions between atoms and external charges
    • the interactions between the external charges ???
  • The gradients contain:

    • also of course all interactions between the atoms
    • the interactions between atoms and external charges (on atoms as forces and on external charges as forces_ext_charges)
    • no interactions between the external charges

As I observed some very strange behaviour, I am not sure any more whether that is correct. In particular, I am not sure whether the interactions between the external charges are included in the energy, because some of the strange behaviour disappears when I exclude them from the calculations.

Thanks a lot for your answer!
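The decomposition being asked about can be made explicit with a toy classical model (illustrative only; this says nothing about what DFTB+ actually includes): each pairwise contribution is computed separately, so one can check which sums reproduce a given total energy:

```python
from itertools import combinations

def pair_energy(q1, r1, q2, r2):
    """Bare Coulomb energy of two point charges (atomic units)."""
    dist = sum((a - b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    return q1 * q2 / dist

def atom_extcharge_energy(atoms, ext):
    """Interaction between (charge, position) atoms and external charges."""
    return sum(pair_energy(qa, ra, qe, re)
               for qa, ra in atoms for qe, re in ext)

def extcharge_self_energy(ext):
    """Interaction among the external charges themselves -- the term
    whose presence in the total energy the question is about."""
    return sum(pair_energy(q1, r1, q2, r2)
               for (q1, r1), (q2, r2) in combinations(ext, 2))
```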

parallel calculation and timing in DFTB+

Dear developers,

Please forgive me if this is not the right place to ask the following questions.

Following the github instruction, I managed to compile a parallel version of dftb+.
https://github.com/dftbplus/dftbplus/blob/master/INSTALL.rst

However, I am confused about how to invoke parallel runs.
When I used the command 'mpirun -np 2 dftb+', I got the following output:

***  Geometry step: 0, Lattice step: 0
 iSCC Total electronic   Diff electronic      SCC error
 iSCC Total electronic   Diff electronic      SCC error
    1   -0.46729427E+03    0.00000000E+00    0.67880049E-01
    1   -0.46729427E+03    0.00000000E+00    0.67880049E-01
    2   -0.46729980E+03   -0.55300039E-02    0.40378966E-01
    2   -0.46729980E+03   -0.55300039E-02    0.40378966E-01
    3   -0.46730044E+03   -0.64250909E-03    0.10397772E-01
    3   -0.46730044E+03   -0.64250909E-03    0.10397772E-01
    4   -0.46730049E+03   -0.46662721E-04    0.44531425E-02
    4   -0.46730049E+03   -0.46662721E-04    0.44531425E-02
    5   -0.46730050E+03   -0.11560189E-04    0.19768514E-02
    5   -0.46730050E+03   -0.11560189E-04    0.19768514E-02
    6   -0.46730051E+03   -0.27589400E-05    0.56576320E-03
    6   -0.46730051E+03   -0.27589400E-05    0.56576320E-03
    7   -0.46730051E+03   -0.18016573E-06    0.16029514E-03
    7   -0.46730051E+03   -0.18016573E-06    0.16029514E-03
    8   -0.46730051E+03   -0.38967755E-07    0.39942462E-04
    8   -0.46730051E+03   -0.38967755E-07    0.39942462E-04
    9   -0.46730051E+03   -0.25842724E-08    0.12813492E-04
    9   -0.46730051E+03   -0.25842724E-08    0.12813492E-04
   10   -0.46730051E+03   -0.32571279E-09    0.50945690E-05
   10   -0.46730051E+03   -0.32571279E-09    0.50945690E-05

It looks like the two processes are working separately, instead of doing a real parallel calculation!
What is the right way to run a parallel calculation?

Moreover, can DFTB+ output the timing info for each step of the calculation, the way VASP does, as follows?

 General timing and accounting informations for this job:
 ========================================================
  
                  Total CPU time used (sec):        0.767
                            User time (sec):        0.685
                          System time (sec):        0.082
                         Elapsed time (sec):        1.121
  
                   Maximum memory used (kb):       31100.
                   Average memory used (kb):           0.
  
                          Minor page faults:        30312
                          Major page faults:            0
                 Voluntary context switches:          373

I think this could be very useful for checking the performance of parallel calculations.

MPI aborts when development version compiled with Intel 2017 compilers + MPI

I'm trying to build the development version (revision d07f92e) on our clusters with the Intel compilers 2017 update4, matching Intel MPI and MKL and I'm getting MPI aborts on two of the tests - analysis/C2H4_localise and analysis/graphene_localise.

Looking in _autotest/stderror.log I'm getting the following errors:

======= analysis/C2H4_localise =======
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor
0.01user 0.01system 0:00.04elapsed 71%CPU (0avgtext+0avgdata 26596maxresident)k
0inputs+40outputs (0major+4195minor)pagefaults 0swaps

======= analysis/graphene_localise =======
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor
0.01user 0.01system 0:00.03elapsed 96%CPU (0avgtext+0avgdata 25788maxresident)k
0inputs+48outputs (0major+4821minor)pagefaults 0swaps

Any suggestions?

Segmentation fault - invalid memory reference

Hello.
When I ran DFTB+, I met the following error message:
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0 0x7f4d6490cd8a in ???
#1 0x7f4d6490bfb3 in ???
#2 0x7f4d639c327f in ???
#3 0x7f4d63a193b1 in ???
#4 0x7f4d63a190bd in ???
#5 0x7f4d62b79398 in ???
#6 0x7f4d62b7ab06 in ???
#7 0x7f4d64db07b4 in ???
#8 0x7f4d64daf33c in ???
#9 0x7f4d64dd227c in ???
#10 0x7f4d6509160f in ???
#11 0x72a6d2 in ???
#12 0x444d44 in ???
#13 0x406de3 in ???
#14 0x406bdc in ???
#15 0x7f4d639af3d4 in ???
#16 0x406c0c in ???
#17 0xffffffffffffffff in ???

I would be very glad if this issue could be fixed.
best regards,
Feng

WITH_SOCKETS = 0 is not compiling

parser.F90 produces multiple compile errors due to missing pre-processor statements (the control type has socketInput as a preprocessed member).

Revision of non-converged forces

Instead of an option to disable force evaluation with non-converged SCC, provide an option to set a tolerance under which evaluation can proceed (if set loosely, this gives equivalent functionality).

Could go into parser 6 changes.

LatticeOpt constraints enhancement

Dear developers, thank you for this fast tool! I have a question about lattice optimization constraints: is it possible to keep two unit cell parameters equal to each other during the geometry optimization process, e.g. a=b or beta=gamma?

Dumping HSD with large text blocks fails

When compiled with certain compilers (especially ifort) the executables fail to dump the hsd tree when it contains large text blocks (huge geometry, Hessian matrix, etc.) due to record overflow.

Workaround for the moment: disable dumping of HSD and XML, either via the input options WriteHsdInput and WriteXmlInput or, if those are not available (as for modes), by commenting out the dumpHSD() and dumpHSDAsXML() calls in the code.
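As a sketch of the input-option route (assuming the flags live in the ParserOptions block, as in recent manuals; keyword casing in HSD is not significant):

```
ParserOptions {
  WriteHSDInput = No   # do not dump the processed HSD tree
  WriteXMLInput = No   # do not dump the XML representation either
}
```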

Typo in INSTALL.rst

In the get_opt_external section: ./utils/get_opt_externals slako should be ./utils/get_opt_externals slakos

calculating ESP using DFTB+

I would like to ask a question, if I may: can I calculate the electrostatic potential (ESP) using DFTB+, and how good is it compared to normal DFT?

Mistake in transport tutorial

In version 19.1 onwards, NEGF transport functionality has been added, and the following tutorial is suggested for it:

https://www.dftbplus.org/fileadmin/DFTB-Plus/public/tutorials/cecamhp/html/transport.html#non-scc-pristine-armchair-nanoribbon

However, there is a mistake in the tutorial, I think. When I run its files I get the following error:

ERROR!
-> Eigensolver incompatible with transport calculation (GreensFunction or TransportOnly required)
Path: dftbplusinput/Hamiltonian/DFTB/Solver/RelativelyRobust

The reason is that it is still using the RelativelyRobust solver, as the following block given in the tutorial:

Hamiltonian = DFTB {
  SCC = No
  MaxAngularMomentum = {
    C = "p"
    H = "s"
  }

  SlaterKosterFiles = Type2FileNames {
    Prefix = "../../../sk/"  # To be substituted with the path to
                             # SK parameters on your local disk
    Separator = "-"
    Suffix = ".skf"
  }

  Eigensolver = TransportOnly{}
}

tries to assign the TransportOnly solver using the keyword Eigensolver.
However, the manual says that the solver is to be selected by the keyword Solver. Hence Eigensolver = TransportOnly{} should be Solver = TransportOnly{}. Doing so indeed solved my problem.
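For clarity, a sketch of the corrected block (only the solver line changes; the rest of the tutorial block stays as-is):

```
Hamiltonian = DFTB {
  # ... SCC, MaxAngularMomentum, SlaterKosterFiles as in the tutorial ...
  Solver = TransportOnly{}   # keyword per the manual, instead of Eigensolver
}
```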

utils/build/update_release script with old git

git version 1.7.1 is too old to support several of the flags this script uses. By version 1.9.1 this is resolved.

Temporary workaround, if the file "RELEASE" does not exist in the top directory:
echo "UNOFFICIAL BUILD" > RELEASE

(dftb plus-18.2) continuously growing memory usages with MPI ?

Dear developers,
Using dftbplus-18.2 on Ubuntu 18.04 (Intel compiler + MKL, 2019), the program requests more and more memory as the iterative Driver function proceeds.
Can anyone give a suggestion for dealing with this issue?

============== .hsd setup ==============
Geometry = GenFormat {
  #<<<"geo_end.gen"
  <<<"geometry.gen"
}
Driver = VelocityVerlet {
  TimeStep [Femtosecond] = 1.0
  MovedAtoms = H C S
  Thermostat = Berendsen {
    CouplingStrength = 0.1
    AdaptFillingTemp = Yes
    Temperature [Kelvin] = TemperatureProfile {
      constant 500 300.0
      linear 500 500.0
      exponential 1000 300.0
      constant 500 300.0
    }
  }
  MDRestartFrequency = 10
}

Hamiltonian = DFTB {
  SCC = Yes
  SCCTolerance = 1e-5
  MaxSCCIterations = 500
  SlaterKosterFiles = Type2FileNames {
    Prefix = "//home/sunta/bin/dftbplus-18.2/slakos/auorg-1-1/"
    Separator = "-"
    Suffix = ".skf"
  }
  MaxAngularMomentum {
    H = "s"
    C = "p"
    S = "p"
    Au = "d"
  }
  KPointsAndWeights = SupercellFolding {
    1 0 0
    0 1 0
    0 0 1
    0.0 0.0 0.0
  }
  InitialCharges = {
    AtomCharge = {
      Atoms = S
      ChargePerAtom = 0.0
    }
  }
  Eigensolver = DivideAndConquer{}
  Mixer = Anderson {
    MixingParameter = 0.05
    Generations = 6
  }
}
Parallel {
  Groups = 1
  UseOmpThreads = No
}
Analysis = {
  MullikenAnalysis = Yes
}
ParserOptions {
  ParserVersion = 6
}

Testing File Parsers

I apologize if this issue does not follow a format template; I did not see mention of one in the developers' guide. I am running into an error when parsing an .hsd input file and want to try to debug it. Are there already unit tests for .hsd files, so that I could familiarize myself with the parser functions?

Compiling error

Dear developers,
When I tried to compile dftbplus-18.2, I encountered an error:

[root@localhost dftbplus-18.2]# make
mkdir -p /home/room/cns/DFTB/dftbplus-18.2/_build
[ -r /home/room/cns/DFTB/dftbplus-18.2/RELEASE ] && cp -a /home/room/cns/DFTB/dftbplus-18.2/RELEASE /home/room/cns/DFTB/dftbplus-18.2/_build/RELEASE
|| /home/room/cns/DFTB/dftbplus-18.2/utils/build/update_release /home/room/cns/DFTB/dftbplus-18.2/_build/RELEASE
|| echo "(UNKNOWN RELEASE)" > /home/room/cns/DFTB/dftbplus-18.2/_build/RELEASE
mkdir -p /home/room/cns/DFTB/dftbplus-18.2/_build/external/xmlf90
make -C /home/room/cns/DFTB/dftbplus-18.2/_build/external/xmlf90
-f /home/room/cns/DFTB/dftbplus-18.2/external/xmlf90/make.dpbuild
ROOT=/home/room/cns/DFTB/dftbplus-18.2 BUILDROOT=/home/room/cns/DFTB/dftbplus-18.2/_build
make[1]: Entering directory `/home/room/cns/DFTB/dftbplus-18.2/_build/external/xmlf90'
touch -r libxmlf90.a /home/room/cns/DFTB/dftbplus-18.2/_build/external/xmlf90/BUILD_TIMESTAMP
make[1]: Leaving directory `/home/room/cns/DFTB/dftbplus-18.2/_build/external/xmlf90'
mkdir -p /home/room/cns/DFTB/dftbplus-18.2/_build/prog/dftb+
make -C /home/room/cns/DFTB/dftbplus-18.2/_build/prog/dftb+ -f /home/room/cns/DFTB/dftbplus-18.2/prog/dftb+/make.build
ROOT=/home/room/cns/DFTB/dftbplus-18.2 BUILDROOT=/home/room/cns/DFTB/dftbplus-18.2/_build
make[1]: Entering directory `/home/room/cns/DFTB/dftbplus-18.2/_build/prog/dftb+'
/home/room/cns/DFTB/dftbplus-18.2/external/fypp/bin/fypp -DDEBUG=0 -DRELEASE="'release 18.2'" -DWITH_ARPACK -I/home/room/cns/DFTB/dftbplus-18.2/prog/dftb+/lib_common/ -I/home/room/cns/DFTB/dftbplus-18.2/prog/dftb+/include /home/room/cns/DFTB/dftbplus-18.2/prog/dftb+/lib_common/timerarray.F90 > timerarray.f90
gfortran -O2 -funroll-all-loops -fopenmp -I/home/room/cns/DFTB/dftbplus-18.2/_build/external/xmlf90 -o timerarray.o -c timerarray.f90
timerarray.f90:37.39:

character(:), allocatable :: header
                                   1

Error: Deferred-length character component 'header' at (1) is not yet supported
timerarray.f90:37.39:

character(:), allocatable :: header
                                   1

Error: Deferred-length character component 'header' at (1) is not yet supported
timerarray.f90:37.39:

character(:), allocatable :: header
                                   1

Error: Deferred-length character component 'header' at (1) is not yet supported
timerarray.f90:37.39:

character(:), allocatable :: header
                                   1

Error: Deferred-length character component 'header' at (1) is not yet supported
timerarray.f90:92.21:

call this%reset()
                 1

Error: 'reset' at (1) should be a SUBROUTINE
make[1]: *** [timerarray.o] Error 1
rm timerarray.f90
make[1]: Leaving directory `/home/room/cns/DFTB/dftbplus-18.2/_build/prog/dftb+'
make: *** [dftb+] Error 2

Please give me a suggestion. Thanks.

dftb+ development version Compilation error

Compiling the dftb+ development version, make completed successfully, but the following error occurred while executing this command:
$ make test

make[2]: Entering directory `/opt/apps/dftbplus-master/_build/external/mpifx'
make[2]: /opt/apps/dftbplus-master/external/mpifx/origin/src/Makefile.lib: No such file or directory
make[2]: *** No rule to make target `/opt/apps/dftbplus-master/external/mpifx/origin/src/Makefile.lib'. Stop.
make[2]: Leaving directory `/opt/apps/dftbplus-master/_build/external/mpifx'
make[1]: *** [libmpifx] Error 2
make[1]: Leaving directory `/opt/apps/dftbplus-master/_build/external/mpifx'
make: *** [external_mpifx] Error 2

Please help me with this.

Please update xmlf90 to the latest version

The latest version is 1.5.4: https://launchpad.net/xmlf90/

It doesn't install include/m_strings.mod, which dftbplus expects.
I have the xmlf90 package installed and would rather use it instead of the bundled version.

There is a policy on FreeBSD that external libs should be used instead of bundled ones.
The reason for this policy is that if security problems are found in bundled libs, it is much harder to find and fix them.

Thank you,
Yuri

results.tag file and ASE

Hello.

This issue is about writing output files for DFTB+ version 18.1.
I noticed the file "results.tag" contains no information about the total energy of the system. Moreover, older versions of DFTB+ (1.2) write more information than 18.1 does. This makes DFTB+ unusable as a calculator in ASE.
When I tried to run the examples provided on the ASE webpage, I couldn't finish them. The error "Problem in reading energy" appears in every single attempt, but when I changed the DFTB+ executable to version 1.2, it ran without any errors.

I would be very glad if this issue could be fixed. Our group develops Slater-Koster files for a series of materials, and using DFTB+/ASE is crucial for us.

Thank you very much!
