
grasp's Introduction

GRASP - The General-purpose Relativistic Atomic Structure Package


The General-purpose Relativistic Atomic Structure Package (GRASP) is a set of Fortran 90 programs for performing fully relativistic electron structure calculations of atoms.

Installation

Please note: The installation instructions here are for the development version on the master branch.

To install the latest published release (2018-12-03), go to the "Releases" page, download the tarball from there and refer to the instructions in the README in the tarball.

To compile and install GRASP, first clone this Git repository:

git clone https://github.com/compas/grasp.git

There are two ways to build GRASP: either via CMake or via the Makefiles in the source tree. Either works and you end up with the GRASP binaries in the bin/ directory.

CMake is the recommended way to build GRASP. The Makefile-based workflow is still there to make the transition from Makefiles to a modern build system smoother.

CMake-based build

The first step with CMake is to create a separate out-of-source build directory. The configure.sh script can do that for you:

cd grasp/ && ./configure.sh

This will create a build/ directory with the default Release build configuration. However, configure.sh is just a simple wrapper around a cmake call, and if you need more control over the build you can always invoke cmake yourself (see the CMake documentation for more information).
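For example, the following is roughly what configure.sh does for the default configuration (a sketch; append further -D options as needed):

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..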

To then compile GRASP, you need to go into the out-of-source build directory and simply call make:

cd build/ && make install

Remarks:

  • Running make install instructs CMake to actually install the resulting binaries into the conventional bin/ directory at the root of the repository.

    When you run just make, the resulting binaries will end up under the build/ directory (specifically in build/bin/). This is useful when developing and debugging, as it allows you to compile many versions of the binaries from the same source tree with different compilation options (e.g. with debug symbols enabled) by using several out-of-source build directories (see the sketch after this list).

  • With CMake, GRASP also supports parallel builds, which can be enabled by passing the -j option to make (e.g. make -j4 install to build with four processes).

  • The CMake-based build allows running the (non-comprehensive) test suite by calling ctest in the build/ directory. The configuration and source files for the tests are under test/.
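For instance, a second build tree with debug flags can be set up next to the default one and tested from there (a sketch using only standard CMake options, nothing GRASP-specific):

mkdir build-debug && cd build-debug
cmake -DCMAKE_BUILD_TYPE=Debug ..   # configure with debug symbols
make -j4                            # binaries end up in build-debug/bin/
ctest                               # run the test suite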

Makefile-based build

The legacy Makefile-based build can be performed by simply calling make in the top-level directory:

make

In this case, the compilation of each of the libraries and programs happens in their respective directory under src/ and the build artifacts are stored in the source tree. The resulting binaries and libraries will directly get installed under the bin/ and lib/ directories.

To build a specific library or binary you can pass the path to the source directory as the Make target:

# build libmod
make src/lib/libmod
# build the rci_mpi binary
make src/appl/rci90_mpi

Note that any necessary library dependencies will also get built automatically.

WARNING: the Makefiles do not know about the dependencies between the source files, so parallel builds (i.e. calling make with the -j option) do not work.

Customizing the build

By default the Makefile is designed to use gfortran. The variables affecting GRASP builds are defined and documented at the beginning of the Makefile.

For the user it should never be necessary to modify the Makefile itself. Rather, a Make.user file can be created next to the main Makefile, in which the build variables can be overridden. E.g. to use the Intel Fortran compiler instead, you may want to create the following Make.user file:

export FC = ifort
export FC_FLAGS = -O3 -save -mkl=sequential
export FC_LD =
export FC_MPI = mpiifort
export OMPI_FC=${FC}

where the -mkl=sequential flag should be adjusted depending on which version of ifort you have access to.

Alternatively, to customize the GNU gfortran build to e.g. use a specific version of the compiler, you can create a Make.user file such as

export FC = gfortran-9
export FC_FLAGS = -O3 -fno-automatic
export FC_LD =
export FC_MPI = mpifort
export OMPI_FC=${FC}

To set up a linker search path for the BLAS or LAPACK libraries you can set FC_LD as follows:

export FC_LD = -L /path/to/blas

The repository also contains the Make.user.gfortran and Make.user.ifort files, which can be used as templates for your own Make.user file.
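For example, a gfortran-based setup could be seeded from the bundled template (a sketch):

cp Make.user.gfortran Make.user
# edit Make.user as needed (FC, FC_FLAGS, FC_LD, ...), then build as usual:
make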

About GRASP

This version of GRASP is a major revision of the previous GRASP2K package by P. Jönsson, G. Gaigalas, J. Bieroń, C. Froese Fischer, and I.P. Grant [Computer Physics Communications, 184, 2197-2203 (2013)], which was written in FORTRAN 77 style with COMMON blocks and used Cray pointers for memory management. The present version is a Fortran 95 translation using standard Fortran for memory management. In addition, COMMON blocks have been replaced with MODULEs, with some COMMON blocks merged. Some algorithms have been changed to improve performance and efficiency for large cases.

The previous package was an extension and modification of GRASP92 by Farid Parpia, Charlotte Froese Fischer, and Ian Grant [Computer Physics Communications, 94, 249-271 (1996)].

This version of GRASP has been published in:

C. Froese Fischer, G. Gaigalas, P. Jönsson, J. Bieroń, "GRASP2018 — a Fortran 95 version of the General Relativistic Atomic Structure Package", Computer Physics Communications, 237, 184-187 (2019), https://doi.org/10.1016/j.cpc.2018.10.032

Development of this package was performed largely by:

Charlotte Froese Fischer [email protected]
Gediminas Gaigalas [email protected]
Per Jönsson [email protected]
Jacek Bieron [email protected]

Supporters include:

Jörgen Ekman [email protected]
Ian Grant [email protected]

The GitHub repository is maintained by:

Jon Grumer [email protected]

Please contact the repository manager should you have any questions regarding bugs or the general development procedure. Contact the lead developer for specific questions related to a certain code.

Structure of the Package

The package has the structure shown below where executables, after successful compilation, reside in the bin directory. Compiled libraries are in the lib directory. Scripts for example runs and case studies are in folders under grasptest. Source code is in the src directory and divided into applications in the appl directory, libraries in the lib directory and tools in the tool directory.

   |-bin
   |-grasptest
   |---case1
   |-----script
   |---case1_mpi
   |-----script
   |-----tmp_mpi
   |---case2
   |-----script
   |---case2_mpi
   |-----script
   |-----tmp_mpi
   |---case3
   |-----script
   |---example1
   |-----script
   |---example2
   |-----script
   |---example3
   |-----script
   |---example4
   |-----script
   |-------tmp_mpi
   |---example5
   |-----script
   |-lib
   |-src
   |---appl
   |-----HF
   |-----jj2lsj90
   |-----jjgen90
   |-----rangular90
   |-----rangular90_mpi
   |-----rbiotransform90
   |-----rbiotransform90_mpi
   |-----ris4
   |-----rci90
   |-----rci90_mpi
   |-----rcsfgenerate90
   |-----rcsfinteract90
   |-----rcsfzerofirst90
   |-----rdensity
   |-----rhfs90
   |-----rhfszeeman95
   |-----rmcdhf90
   |-----rmcdhf90_mpi
   |-----rmcdhf90_mem
   |-----rmcdhf90_mem_mpi
   |-----rnucleus90
   |-----rtransition90
   |-----rtransition90_phase
   |-----rtransition90_mpi
   |-----rwfnestimate90
   |-----sms90
   |---lib
   |-----lib9290
   |-----libdvd90
   |-----libmcp90
   |-----libmod
   |-----librang90
   |-----mpi90
   |---tool

Program Guide and Compilation

The software is distributed with a practical guide to GRASP2018 in PDF format. The guide, which is published under the Creative Commons Attribution 4.0 International (CC BY 4.0) license, contains full information on how to compile and install the package.

Acknowledgements

This work was supported by the Chemical Sciences, Geosciences and Biosciences Division, Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy, which made the Pacific Sierra translator available, and by the National Institute of Standards and Technology. Computer resources were made available by Compute Canada. CFF had research support from the Canadian NSERC Discovery Grant 2017-03851. JB acknowledges financial support of the European Regional Development Fund in the framework of the Polish Innovation Economy Operational Program (Contract No. POIG.02.01.00-12-023/08).

Copyright & license

The code in this repository is distributed under the MIT license. The accompanying guide "A practical guide to GRASP2018" is licensed separately under the CC-BY-4.0 (Creative Commons Attribution 4.0 International) license.

grasp's People

Contributors

cffischer, jfbabb, jongrumer, mortenpi, sachaschiffmann, wenxianli, yanting126


grasp's Issues

Running MPI tests on ubuntu-latest (20.04) CI fails

Changing the CI image to ubuntu-latest or ubuntu-20.04 works great, except for the MPI tests. This is likely due to a version change in MPI between 18.04 and 20.04.

Would be nice to debug this at some point and change the images back to 20.04.

Modernize the installation procedure

Background

I guess many of us are thinking about how we could modernize/generalize the installation procedure of the codes. As an example, it would be nice to have a system-independent compilation configuration instead of the compiler- and library-specific environment files it is set up with now.

I thought we could have such a discussion in this issue.

Collected thoughts and ideas

cmake

Switch to cmake in favour of make (edit: actually, as pointed out by @jagot below, cmake should be thought of more as a Makefile generator) to have a single platform- and compiler-independent configuration file instead of pre-sourcing compiler-specific env files to set env flags. I believe both @mortenpi and @jagot have implemented this at some point? The question is how complicated we want to make things; maybe learning git/GitHub is enough for the developers at the moment. But we should keep this in mind.

The GRASP env variable

@mortenpi suggested that, in the present scheme with the system dependent environment files, we could switch the export GRASP="${PWD}" variable to

export GRASP="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

in order to support sourcing of the env files from a directory different from the GRASP root dir (see here for details). Not sure if this is too complicated for the general user to understand. Let us know what you think, or if you have a simpler solution.
EDIT
I can't get the above bash command, or variants of it, to work under both Linux and Mac environments, mainly due to slightly different shell implementations. To get the path of the script location, one can instead go with Python, which should be more consistent, through the os.path.realpath function, e.g.

import os
print(os.path.dirname(os.path.realpath(__file__)))

RCI contribution of H

The RCI module adds QED corrections like the Breit interaction and self-energy. But it also has the option "Include contribution of H (Transverse)?", for which I wasn't able to find more information in the literature about GRASP*.

What does it mean? Is it connected to the transverse photon or something different? I would appreciate some related literature and/or formulas if possible.

Thanks a lot,
Anja

*(the referred-to literature: "Advanced multiconfiguration methods for complex atoms: I. Energies and wave functions"; "An Introduction to Relativistic Theory as Implemented in GRASP")

Changing parameters in the package

Hi.

I am trying to change some of the parameters in the package.

I have successfully recompiled with new values of NNNP and NNN1 as described in the manual chapter 2.6.

However, I would also like to change the parameters RNT and H. I see that this can be done by running each program with non-default options, but that becomes tedious. Is there any quick way to fix the default values of these parameters as well?

Thank you,
Martin

Request for new features of rtabtrans1+rtabtrans2

  1. We should add the possibility to print level energies in the line list produced by the rtabtrans1 + rtabtrans2 codes - for these codes it would also be good to have the possibility to print the lifetimes of the levels in the list, in order to estimate the radiative broadening of the levels.

    Connected to PR #85 .

  2. We should also have a clean way of rescaling the transition energies to experimental energy levels. I have a hacked way of doing this by adding experimental/NIST energies to a level list produced by rlevels, which is then read and rescaled by rtabtrans1. But it's not really ready to be submitted. Maybe someone else has a more production-ready version? Or a more clever technique? I think perhaps @WenxianLi might have some code that does this? :)

/Jon

Parity issue in GRASP

The parity issue that was first observed by Kai and traced to the SETCSLL.f90 routine in LIB9290 is present also in other locations of GRASP2018. It was found to be present in RMIXACCUMULATE.f90.

GRASP2018 is a parameterized software package. The default maximum number of orbitals is 127, so
we require NCLOSED + NPEEL <= 127. Actually, when reading a CSF file, it is easy to determine
both of these variables and check this constraint. But, more importantly, NPEEL determines the maximum length of the "lines" that define the CSFs. It is not necessary to simply choose a number.

I have not analyzed all of RMIXACCUMULATE.f90, but the problem disappeared after replacing all dimensions "100" by "200". I suspect the same to be true also for RMIXEXTRACT.

"Maximum tabulation point exceeds dimensional limit (currently 590)"

Hello all,
I am running the fine-structure splitting calculation for I8+ (Z=53, 45 electrons, 4d9 ground state),
but when I add SD excitations from 4d(9,*) to the 4f, 5s, 5p, 5d, 5f, 5g states and run "rmcdhf", it shows the following error.

Lagrange multipliers:

                1s    5s                NaN
                2s    5s                NaN
                2p-   5p-               NaN
                2p    5p                NaN
                3s    5s                NaN
                3p-   5p-               NaN
                3p    5p                NaN
                3d-   5d-               NaN
                3d    5d                NaN
                4s    5s                NaN
                4p-   5p-               NaN
                4p    5p                NaN
                4d-   5d-               NaN
                4d    5d                NaN
                4f-   5f-               NaN
                4f    5f                NaN
                                         Self-            Damping
Subshell    Energy    Method   P0    consistency  Norm-1  factor  JP MTP INV NNP

 IN: maximum tabulation point exceeds
  dimensional limit (currently 590);
  radial wavefunction may indicate a
  continuum state.

Maybe the doubly excited levels have states above the ionization limit. Is GRASP not able to perform energy calculations for those states?

bug in the generation of csf's using rcsfgenerate

@gaigalas found a serious bug in rcsfgenerate:

Email from Gediminas [2019-10-18]
It seems that there is a bug in the rcsfgenerate program.

Here is the script for the rcsfgenerate program:

*
5
4f(11,10)6s(1,*)
 
6s, 6p, 4d, 4f
18,18
2
n

and the script for the old csl program:

y
1s 2s 2p 3s 3p 3d 4s 4p 4d 5s 5p
4f 6s 6p
n
y
4f(10)
4f(11)6s(1)
9
 
2

The csl program produces 43 CSFs. This is correct.
But rcsfgenerate produces 292 CSFs. This is NOT CORRECT!!!, because some of them have 9 electrons in the 4f shell (we were asking for a minimum of 10 electrons).

Please be very careful when using the rcsfgenerate program for investigations of lanthanides !!!

How to solve the problem "Angular file not available" in rtransition program

Hi everyone,

I am calculating a case very similar to the grasp2018 manual's fourth example, "3l3l' states in Fe XV using MPI". It has two even and two odd configurations. The MCDHF and RCI calculations performed well. But when I try to run the rtransition program after rbiotransform, it always tells me "Angular file not available". I have tried deleting the *.bw and *.cbm files (no files have extension T) and performing the calculation again. But it is still the same.

Could you please tell me how to solve this problem?

Segmentation fault - invalid memory reference

Good day.

I am a humble student doing computations on HFS in Pb 207. I have built up to an active set 10sp7d6f with only the valence 6sp active. Then I have started opening the core, one subshell at a time. 5spd4spd went well. When I get to opening 4f, rangular crashes while sorting coefficients in the final block. I get the error

Program received signal SIGSEGV - Segmentation fault - invalid memory reference.

I see that someone has had a similar issue with 4f on here:

#43

So I decided to skip 4f, and move on to 3d. Then I get the same issue.

I do not feel like I've added unnecessary correlation, as every step so far has caused changes in A-value of order >0.1%. Usually 1%.

The number of CSFs in block 2, which crashes, is 426k. This should not be too much(?).

I ran the sequential version on an HPC cluster with 192 GB RAM. I should have 1 TB of storage, and <200 GB is used.

Any ideas on how I ought to proceed?

Thank you,
Martin

grasp2018 branch and related README updates

With the repository renamed, we need to create the grasp2018 branch at some point and also update the README with instructions on how users can download the different versions. Opening this so that we wouldn't forget to do this.

Example1 not working

I am starting with GRASP now, so excuse me if I am saying something absurd.
After I installed GRASP, I ran the script of example1: ./script_ex1 >& output &
After that I ran the command rmcdhf, but I am getting the following error

Method 2 unable to solve for 3s orbital
Iteration number: 9, limit: 9
Present estimate of P0; 0.22117473513882D+00
Present estimate of E(J): 0.34218606921520D+00, DELEPS: 0.57599776608896D-02
Lower bound on energy: 0.91488178055281D-01, upper bound: 0.72009664241632D+00
Join point: 315, Maximum tabulation point: 372
Number of nodes counted: 2, Correct number: 2
Sign of P at first oscillation: -1.

Failure; equation for orbital 3s could not be solved using method 2

****** Error in SUBROUTINE IMPROV ******
Convergence not obtained

Readme/installation improvement

We need to add installation (especially CMake) instructions on:

  • how to use non-default compiler options - FFLAGS should be used throughout instead of FC_FLAGS, which is not recognized by CMake (see the sketch below)
  • how to build individual libs or appl
  • non-default Lapack/Blas libraries (suggested by @cffischer)
  • more?

And also some level of instructions on how to work with CMake, i.e. how to add new files/apps.
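For the compiler-options point, a sketch of what such instructions could look like, relying on CMake's standard behaviour of seeding CMAKE_Fortran_FLAGS from the FFLAGS environment variable at the first configure:

FFLAGS="-O2 -g" ./configure.sh
# or explicitly, from an existing build directory:
cmake .. -DCMAKE_Fortran_FLAGS="-O2 -g"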

Large case version

In order to solve for (optimise on) all eigenstates belonging to a group of configurations having more than "a few" f-electrons (or holes) - such as e.g. Dy I, II or Gd I, II - we have to modify many of the main routines (rmcdhf, jj2lsj, rlevels, rci, rbiotransform, rtransition, etc.).

Typically the codes (in particular the eigenvalue solvers) are not set up to deal with more than 999 eigenstates per block, limits which are set by locally hardcoded parameters in the codes. This, together with various formatting issues (e.g. numbers becoming too long, missing high J-value labels, etc.), causes the codes to crash for many large cases.

The main motivation is to be able to perform calculations on configurations of the Lanthanides (open 4f-systems).

Gediminas @gaigalas has extended jj2lsj and the codes which depend on its output (rlevels + the transition codes). I have managed to successfully optimise on blocks of about 5000 eigenstates using a modified version of rmcdhf (mainly CNUM/CLEVELS and N1000).

In addition, and from a more long-term perspective, Per @tspejo suggested digging up the FEAST diagonalisation method (to replace Davidson/Lapack), which was implemented in an earlier version of GRASP by Per Andersson. We should try to get this version running. For now I keep it in my personal repo: https://github.com/jongrumer/grasp_dev_feast

Update: Andreas Stathopoulos suggests, via @cffischer, that we switch to his more robust PRIMME routine.

Update: I've started adding Gediminas's and my own smaller mods to build a well-documented large-case version on a fork of the grasp2018 repo under my own user: https://github.com/jongrumer/grasp2018/tree/jg/large
I'll keep it like that for now. Feel free to join in if you want to!

new rwfnestimate

Hello everyone,

I installed the latest GRASP. I see that 'rwfnestimate' has a new choice.

 Read subshell radial wavefunctions. Choose one below
     1 -- GRASP92 File
     2 -- Thomas-Fermi
     3 -- Screened Hydrogenic
     4 -- Screened Hydrogenic [custom Z]

I want to ask what the fourth choice means.
Can you offer me the related formula or literature?

Thanks

Have a nice day!

Yenoch

MPI Runtime killed on signal 9 in rangular_mpi

Hi,
I'm trying to calculate a multi-reference system, which has nearly (sometimes over) one million states in one block. When I run the rangular_mpi program with MPI, it always reports as follows.

Sorting 5039795 T coefficients ... 31
Sorting 599162331 V(k=1) coefficients ... 33
Sorting 91967402 V(k=0) coefficients ... 32
Sorting 599092522 V(k=1) coefficients ... 33

mpirun noticed that process rank 2 with PID 0 on node qy-PC exited on signal 9 (Killed).

Please help me out of this problem.
Best wishes
Yenoch

Error while running rcsfexcitation

I need to make calculations for heavy atoms, so decided to go through examples first.
When I run "rcsfexcitation", at the step "Give orbital set" I enter what is in the Grasp manual (for example 1 it is 2s, 2p); however, I get the following: "Orbitals should be right justified and occupy three positions, redo!
Give orbital set".
Can I overcome this, or is that a bug?

jj2lsj's parity limitation

As highlighted in a recent email by @cffischer, jj2lsj crashes on lists with both parities present, and does not print out any warnings (I think?). Only @gaigalas knows how hard it would be to extend the routine, and how much time it would take. We all have many things to do.

I would consider this "enhancement" as having relatively low priority, but it should nevertheless be addressed at some point.

Maybe a quick solution would be to add a check in jj2lsj and stop with an informative warning text for such lists?

How to improve the usage of computer on rmcdhf_mpi calculation?

Hi everyone.

Last time I asked a question about a rangular calculation, @jongrumer suggested that I expand the memory, and I have expanded it to 256GB. But when I use the rmcdhf_mpi program, I find that the usage of the computer is very low. Both the CPU and memory are not fully used. The top command output is shown below:

top - 21:52:55 up 15:20,  2 users,  load average: 12.14, 10.82, 6.61
Tasks: 321 total,   2 running, 319 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.7 us,  0.5 sy,  0.0 ni, 49.2 id, 46.6 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 26377857+total,  6408952 free, 20250176 used, 23711945+buff/cache
KiB Swap: 16383996 total, 16273660 free,   110336 used. 24256968+avail Mem 

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND         
 49406 grasp20+  20   0 1164420 725968   5716 D   5.6  0.3   9:43.40 rmcdhf_mpi      
 49402 grasp20+  20   0 1164632 727820   7320 D   5.3  0.3   9:42.68 rmcdhf_mpi      
 49426 grasp20+  20   0 1164012 725900   6008 D   5.3  0.3   9:44.11 rmcdhf_mpi      
 49404 grasp20+  20   0 1164040 725600   5712 R   5.0  0.3   9:43.59 rmcdhf_mpi      
 49411 grasp20+  20   0 1163208 724724   5672 D   4.6  0.3   9:43.65 rmcdhf_mpi      
 49418 grasp20+  20   0 1163524 725100   5724 D   4.6  0.3   9:40.46 rmcdhf_mpi      
 49421 grasp20+  20   0 1164064 725668   5724 D   3.6  0.3   9:42.43 rmcdhf_mpi      
 49401 grasp20+  20   0 1163424 729668  10616 D   3.3  0.3   1:22.29 rmcdhf_mpi      
 49408 grasp20+  20   0 1163784 725328   5712 D   3.3  0.3   9:42.92 rmcdhf_mpi      
 49403 grasp20+  20   0 1163336 726488   7296 D   3.0  0.3   9:42.50 rmcdhf_mpi      
 49415 grasp20+  20   0 1163572 725832   6412 D   3.0  0.3   9:40.70 rmcdhf_mpi      
 49405 grasp20+  20   0 1163268 725652   6548 D   2.6  0.3   9:43.03 rmcdhf_mpi      
 11545 grasp20+  20   0  162244   2500   1564 R   1.0  0.0   3:32.81 top             

Is there anything I can do to improve the usage of the computer?

Have a nice day!

Yenoch

rcsfinteract "Different order of Peel orbitals"

Hi, I'm currently learning GRASP. Doing a project on HFS in Pb 207.

The 1st virtual layer consists of the 7sp6d5fg orbitals. I can allow for substitutions to 7sp6d, but when I open the 5f orbital I get the following error when running rcsfinteract:

Error in input !!!
Different order of Peel orbitals
ERROR STOP in files rcsfmr.inp and rcsf.inp

The peel subshells in the rcsfgenerate files look like this:

rcsfmr.inp:
3d- 3d 4s 4p- 4p 4d- 4d 4f- 4f 5s 5p- 5p 5d- 5d 6s 6p- 6p

rcsf.inp (works):
3d- 3d 4s 4p- 4p 4d- 4d 4f- 4f 5s 5p- 5p 5d- 5d 6s 6p- 6p 6d- 6d 7s 7p- 7p

rcsf.inp (fails):
3d- 3d 4s 4p- 4p 4d- 4d 4f- 4f 5s 5p- 5p 5d- 5d 5f- 5f 6s 6p- 6p 6d- 6d 7s 7p- 7p

Presumably rcsfinteract fails because 5f- 5f are listed before 6s 6p- 6p. What should I do to get around this?

Thanks,
Martin

Gfortran -- compiler optimization

In all our examples we use the compiler option -O2. A few simple tests have shown that the option -O3 is significantly better in some cases. I have not tested this extensively, but maybe someone could confirm this.

Installation of the package under csh

I want to install grasp2018, but in a csh shell instead of a bash shell. Therefore I can't follow the manual and use the commands

"source ./make_environment_gfortran", ...

and I do not know how I could install GRASP with this csh shell.

Do you know how I could install GRASP under a csh shell?

Thank you in advance.

Helena
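A possible workaround, assuming bash is installed on the system, is to run the environment setup and the build inside a single bash subshell started from csh (a sketch, untested):

bash -c 'source ./make_environment_gfortran && make'

since the exported variables only need to exist for the duration of that one command.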

rcsfinteract triple excitations

Hi.

I am trying to expand a calculation to include triple excitations.

rcsfinteract seems to remove all CSFs corresponding to such excitations.

I think it is not very feasible to run without rcsfinteract as the number of CSFs increases very rapidly with SDT excitations.

Any ideas what I am missing here?

Thank you.
Martin

rcsfgenerate silently omitting configurations for j >= 7/2 subshells

I was trying to build a jj coupling program with the same output as GRASP's rcsfgenerate90.
For subshells with j >= 7/2 the seniority number was added, and the rcsfgenerate90 output is missing some configurations. For example, when generating all jj couplings of the configuration 5g(5), I get 639 levels while rcsfgenerate90 gives 254 levels. Some of the configurations are missing, like ( 5g-( 2) 5g ( 3) with J/p = 1/2+).

I attached the output of both rcsfgenerate90 (as rcsf.txt) and my code (as csf.txt). The output of both runs appear in output.txt file.

Why does rcsfgenerate90 miss some configurations?

output.txt
rcsf.txt
csf.txt

rmcdhf_mpi performance issue

Hi.

I am experiencing some poor performance when running rmcdhf_mpi.

I run on a single node with multiple tasks per node. Performance seems worse the more tasks I reserve (not the case for rangular and rci). I have for example recorded 61 min/iteration when using 10 tasks, and then <18 min/iteration when using 4 tasks in an otherwise identical run.

I have set

export MPI_TMP="/cluster/home/username/Grasp/workdir/tmp_mpi"

and the directory "tmp_mpi" is created within the working directory with the subdirectories "000", "001".. etc.

The poorer performance with more processing power leads me to believe that the read speed is the limiting factor.

Any ideas what I can do to fix this?

Thanks,
Martin

Remove -fno-automatic in CMakeLists.txt

It conflicts with ifort flags, but there does not seem to be any way to remove it; we can only add to it with -DCMAKE_Fortran_FLAGS=....

As per @jongrumer's suggestion, adding an if statement around it should be sufficient to work around this.

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Hi,
I am doing calculations on 4f^n6s^n systems, and in the end when I run jj2lsj I get the following error.

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0 0x7f18add6832a
#1 0x7f18add67503
#2 0x7f18ad5fcf1f
#3 0x7f18adedcb8b
#4 0x7f18adeddcbb
#5 0x7f18adee10a8
#6 0x7f18adee272c
#7 0x564dc56bc3ac
#8 0x564dc56be025
#9 0x564dc56ad3ee
#10 0x7f18ad5dfb96
#11 0x564dc56ad419
#12 0xffffffffffffffff
Segmentation fault (core dumped)

Please help me out of this error.
Regards
Ahmed

convergence issue

I want to determine the wavefunctions and energy levels of Argon configurations
1s1 2s2 2p6 3s2 3p6 6p1
1s1 2s2 2p6 3s2 3p6 7p1
1s1 2s2 2p6 3s2 3p6 8p1
1s1 2s2 2p6 3s2 3p6 9p1
1s2 2s2 2p6 3s1 3p5 6p2
1s2 2s2 2p6 3s1 3p5 7p2
1s2 2s2 2p6 3s1 3p5 8p2
1s2 2s2 2p6 3s1 3p5 9p2
My target configurations are the first four configurations (with J=1 and odd parity); I added the last four to make it easier to use HF. I have used different techniques such as:
1- increasing the charge of the isodata file
2- changing the damping factor
3- mix the core with different configurations
4- using HF

but with no solution. I will be very thankful if anyone could help me get the energy levels of the first four configurations. I am attaching the GRASP file for help.
0-file.txt

Difference in allowed ICCUT values: rangular vs rangular_mpi

For Breit-Wigner SCF/CI+RMBPT calculations, there's a difference between the allowed ICCUT values (zero-order space block sizes) in rangular and rangular_mpi. The latter does not accept ICCUT blocks of size 1, while the serial version does. Why zero-order blocks of unit size wouldn't be OK seems weird; I guess this was fixed/realized at some point, but only implemented in the serial version - maybe related to the EAL modifications?

I'm not sure if I can just comment out the ICCUT <= 1 if statement in the MPI code, like it is done in the serial version, or if doing so would also affect rmcdhf. I'll do some testing.

Maybe @gaigalas @tspejo or @bieronjacek knows the answer to this directly?

The relevant code sections are in getinf.f90. The serial version looks like this:

 44 !   Determine the physical effects specifications
 45 !
 46 !     Commenting out the EAL option
 47 !     IF (NDEF .NE. 0) THEN
 48 !        WRITE  (istde,*) 'Generate MCP coefficients only for'
 49 !    & , ' diagonal matrix elements? '
 50 !        WRITE (istde,*) '(This is appropriate to (E)AL calculation):'
 51 !        DIAG = GETYN ()
 52 !     ELSE
 53 !        DIAG = .FALSE.
 54 !     ENDIF
 55       DIAG = .FALSE.
 56       IF (DIAG) THEN
 57          LFORDR = .FALSE.
 58          do i = 1,100
 59             ICCUT(i) = 0
 60          end do
 61       ELSE
 62          IF (NDEF .NE. 0) THEN
 63 !            WRITE (istde,*) 'Treat contributions of some CSFs', &
 64 !                            ' as first-order perturbations?'
 65 !            LFORDR = GETYN ()
 66             LFORDR = .TRUE.
 67          ELSE
 68             LFORDR = .FALSE.
 69          ENDIF
 70          IF (LFORDR) THEN
 71             WRITE (istde,*) 'The contribution of CSFs 1 -- ICCUT will',&
 72                             ' be treated variationally;'
 73             WRITE (istde,*) 'the remainder perturbatively; enter ICCUT:'
 74             do i = 1,nblock
 75                write(istde,*) 'Give ICCUT for block',i
 76                READ *, ICCUT(i)
 77                write(739,*) ICCUT(i), '! ICCUT FOR BLOCK',i
 78 !    1          READ *, ICCUT(i)
 79 !               IF ((ICCUT(i).LE.1).OR.(ICCUT(i).GE.ncfblk(i))) THEN
 80 !                  WRITE (istde,*) 'GETINF: ICCUT must be greater than 1', &
 81 !                                  ' and less than ',ncfblk(i)
 82 !                  WRITE (istde,*) ' please reenter ICCUT:'
 83 !                  GOTO 1
 84 !               ENDIF
 85             end do
 86          ENDIF
 87       ENDIF

and the corresponding section in the mpi version:

 42 !   Determine the physical effects specifications
 43 !
 44       DIAG = .FALSE.
 45       IF (DIAG) THEN
 46          LFORDR = .FALSE.
 47          do i = 1,100
 48             ICCUT(i) = 0
 49          end do
 50       ELSE
 51          IF (NDEF /= 0) THEN
 52             LFORDR = .TRUE.
 53          ELSE
 54             LFORDR = .FALSE.
 55          ENDIF
 56          IF (LFORDR) THEN
 57             WRITE (istde,*) 'The contribution of CSFs 1 -- ICCUT will',&
 58                             ' be treated variationally;'
 59             WRITE (istde,*) 'the remainder perturbatively; enter ICCUT:'
 60             do i = 1,nblock
 61               write(istde,*) 'Give ICCUT for block',i
 62     1         READ *, ICCUT(i)
 63               IF ((ICCUT(i) <= 1).OR.(ICCUT(i) >= ncfblk(i))) THEN
 64                 WRITE (istde,*) 'GETINF: ICCUT must be greater than 1',&
 65                                 ' and less than ',ncfblk(i)
 66                 WRITE (istde,*) ' please reenter ICCUT:'
 67                 GOTO 1
 68               ENDIF
 69             end do
 70          ENDIF
 71       ENDIF

Limitation of ASF serial no for each block in "rmcdhf"

Dear All,

While running rcsfgenerate for W8+ (66 electrons), with the reference

4d(10,c)5s(2,i)4f(14,i)5p(3,i)5d(1,i)
4d(10,c)5s(2,i)4f(13,i)5p(4,i)5d(1,i)
4d(10,c)5s(2,i)4f(12,i)5p(5,i)5d(1,i)

I got

11 blocks were created
       block  J/P            NCSF
           1    0-             40
           2    1-            110
           3    2-            156
           4    3-            173
           5    4-            160
           6    5-            127
           7    6-             88
           8    7-             52
           9    8-             25
          10    9-              9
          11   10-              2

however, while running "rmcdhf" I am getting "CONVRT: Length of CNUM inadequate.",
but if I enter ASF serial numbers of less than 100 for each block (even 99 for each block) it runs fine.
Is there any restriction on the number of levels (ASFs) that can be calculated for each block?
It seems that increasing the length of CNUM could resolve this problem.

Thank You!
   

Compiling GRASP using gfortran-10 fails

Compiling with the latest version of gfortran, i.e. 10, fails with the following error:

/tmp/grasp/src/lib/lib9290/iniest2.f90:82:23:

   79 |          CALL DCOPY (NS, VEC(NS*(J-1)+1), 1, BASIS(NCF*(J-1)+1), 1)
      |                         2
......
   82 |       CALL DCOPY (NIV, EIGVAL, 1, BASIS(NIV*NCF+1), 1)
      |                       1
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)
make[2]: *** [src/lib/lib9290/CMakeFiles/9290.dir/iniest2.f90.o] Error 1
make[1]: *** [src/lib/lib9290/CMakeFiles/9290.dir/all] Error 2
make: *** [all] Error 2

It seems it's the same problem as scipy/scipy#11611

Compiling with version 9 works, i.e. on a Mac where I have multiple versions installed, I can do

> FC=gfortran-9 ./configure.sh
> cd build && make

rmcdhf_mem meminfo on MacOS

The new rmcdhf_mem codes are really great, but they are currently limited to GNU/Linux systems due to the use of /proc/meminfo to obtain the memory usage. We should try to come up with a way to obtain this info in a more general way. If there is no obvious solution, given how many are using GRASP on their Mac laptops, we might have to remove this nice feature...

Any ideas?

Tag: @gaigalas @tspejo @cffischer @bieronjacek @mrgodef

running rangular_mpi and rmcdhf_mpi very slow

When running not very large CSF expansions, rangular_mpi is very slow: CPU% < 40, MEM% only 0.1.
With rmcdhf_mpi, 1 of 20 processes runs at about 100% CPU, 3 of 20 at about 6%, and the others at only about 1%.
So, how to improve this? Increase memory or change supercomputer?

Biorthonormal transformation (rbiotransform)

While running the rbiotransform package for the initial files (out-c2-9.c, out-c2-9.cm, out-c2-9.w) and the final files (out-c3-9.c, out-c3-9.cm, out-c3-9.w), it gave incomplete output, as in the file out-rbiotransform, although it works well in the GRASP2K version. I attached all the needed files (note: remove the .txt extensions from all files).

out-c3-9.w.txt
isodata.txt
out-c2-9.w.txt
out-c2-9.c.txt

out-rbiotransform.txt
out-c3-9.cm.txt
out-c2-9.cm.txt

jj2lsj Level Index Limitations

To whom it may concern,

I am running some CSF files from FAC through GRASP's jj2lsj module. It seems that the jj2lsj outputs "*.uni.lsj.lbl" cannot print out triple-digit level indexes, only single- and double-digit levels. Triple-digit level indexes are denoted with "**". Is there a workaround for this issue?

node counting threshold parameter

Wave functions such as c_1 |1s(2) 1S> + c_2 |1s 2s 1S> have energy expressions with off-diagonal I(a,b) integrals for which the stationary solution for 1s has an additional node. The node counting procedure will dismiss nodes associated with small amplitudes when varying both orbitals. In this case, the 1s orbital is still classified as nodeless. This amplitude is set in rmcdhf90/getscd.f90 by the parameter THRESH = 0.05D0. While testing this case, CYC and CFF found that the iterations converged when the THRESH parameter was increased, and we recommend the use of THRESH = 0.10D0. Systematic tests have not yet been performed, but it is possible that, with this change, the LBL process may be delayed, so that more orbitals can be varied simultaneously and only correlation orbitals with small energy corrections are included by the LBL process.

Configuration for the compiler

In the script make_environment_gfortran,

export FC_MPI="mpifort"

it seems it should be changed to something like

export FC_MPI="mpif90"

rci_mpi not working with openmpi 4.0.0 on Mac systems

Dear all,

I encountered some issues with the mpi programs of GRASP2018.
I have recently upgraded openmpi to 4.0.0 on Michel's Mac computer (macOS 10.14.3).
Everything was working fine before the upgrade, but now I am unable to run rci_mpi correctly.
The program starts correctly but gets stuck right after/during computing the Breit integrals.

Here is the output:

...
Block            1    ncf =         2386  id =  1/2+
1
Computing     1093303  Rk integrals
Computing    10433675  Breit integrals of type 1

I already asked Jon about this because I knew he is also using a mac computer. He confirmed that GCC 8.2.0 + OPENMPI 3.1.3 works fine, while GCC 8.3.0 + OPENMPI 4.0.0 does not.

Please let me know if you have any idea on how to solve this.
I attached the files needed to reproduce the problem.

Sacha

test_mpi.zip

clean script

In the clean script for example4, replace "cd ../tmp_mpi"
with " cd ./tmp_mpi"

Lagrange multipliers for closed shells

My recollection is that some time in the past, the GRASP code was such that peel orbitals for closed subshells resulted in Lagrange multipliers between subshells of the same symmetry that were in the list of core subshells. Now Lagrange multipliers between these orbitals have been removed. This is OK when all orbitals are varied, but is wrong when some orbitals are fixed.

When a pair of orbitals of the same symmetry are varied and both are associated with closed subshells, the variational equations do not have a unique solution, and the desired solution corresponds to one for which the Lagrange multiplier between the pair is zero (and hence can be ignored). But when one orbital is fixed, the Lagrange multiplier for a stationary solution needs to be determined. So, in 3s(2) Mg, for example, if I want to compute the 3s orbital in a fixed 1s(2)2s(2)2p(6) core, Lagrange multipliers will be needed for the 1s3s and 2s3s pairs of orbitals. The GRASP2018 code omits these multipliers and results may not have good accuracy. This error needs to be corrected.

Runtime floating point exception in RMCDHF

Hi, and thanks for the great work!

Performing example 1 from the handbook chapter 7, we encounter the following error in the last step, i.e. RMCDHF completes (it does not break) but ends with the following exception:

Floating point exceptions
IEEE underflow Flag
IEEE DENORMAL

The command rsave 2s_2p_DF will not execute thereafter. We are running grasp2018 using gfortran and mpifort version 8.3.0 and mpirun 3.1.3. For ease of use we have created a docker image, which we use and which you are of course also welcome to check:

https://hub.docker.com/r/xaratustrah/grasp

We would appreciate any help on this, in case it is related to the compiler or anything else.

Benchmark calculations and rwfnestimate

I have been running benchmark cases that test the capability and limitations of our code -- examples are the ground states of He, Li, and Be as well as Be 2s2p 1P, 3P and its fine-structure splitting. I remember in the 1990's He 1s2 would fail at n=5. Did someone else do better? I went as far as n=9. Critical in all cases was getting contracted screened hydrogenic estimates as initial estimates of the high-n correlation orbitals. So I have modified rwfnestimate to have an option 4) that allows the user to increase the effective Z so that the initial estimate is a contracted hydrogenic orbital. In the layer approach, where only the last layer needs to be hydrogenic, the same Z_eff is used for all orbitals.
I will submit a tar-file to Jon Grumer.

Fix argument mismatches in the code base

As evident from #46 and #76, we have some argument mismatches in the code base. GCC 10 is just finally strict enough to catch these. This leads to errors like:

/tmp/grasp/src/lib/lib9290/iniest2.f90:82:23:

   79 |          CALL DCOPY (NS, VEC(NS*(J-1)+1), 1, BASIS(NCF*(J-1)+1), 1)
      |                         2
......
   82 |       CALL DCOPY (NIV, EIGVAL, 1, BASIS(NIV*NCF+1), 1)
      |                       1
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)
make[2]: *** [src/lib/lib9290/CMakeFiles/9290.dir/iniest2.f90.o] Error 1
make[1]: *** [src/lib/lib9290/CMakeFiles/9290.dir/all] Error 2
make: *** [all] Error 2

As suggested in the GCC manual, we should investigate and fix them (rather than disabling the warning/error):

Depending on their nature, argument mismatches have the potential to cause the generation of invalid code and, hence, should be investigated.

As a workaround, however, it is still possible to build GRASP on GCC 10 if -fallow-argument-mismatch is passed. With CMake, this can be done by configuring the build as:

mkdir build/
cd build/
cmake .. -DCMAKE_Fortran_FLAGS="-fallow-argument-mismatch"

And with the Makefile-based build (ref: #46 (comment)) it should be sufficient to set the following before calling make:

export FC_FLAGS="$FC_FLAGS -fallow-argument-mismatch"

setmcp: nblock=0 issue when running rmcdhf_mpi on HPC cluster

Hello,

I am trying to use GRASP2018 on my university's HPC cluster. I managed to compile the package correctly and the non-mpi versions of the programs work as expected in an interactive session.

Now, say I want to submit a job file to the HPC cluster (which is controlled by Sun Grid Engine) consisting of a single run of rmcdhf_mpi on 16 cores, with the script below:

[screenshot of the SGE submission script: "Screenshot 2021-12-07 at 17 27 23"]

The executables are on the path, and the job is launched from the directory containing all needed files (mcp.*, isodata, etc as well as the 'disks' file).

What I understand from the output of SGE (text file attached) is that rmcdhf_mpi gets launched and understands where to store the temporary data according to the disks file, but doesn't seem to understand where to get the input files and therefore sets the number of blocks to 0, which aborts the calculation.

job_output.txt

Any idea how to solve that? I am probably making very stupid mistakes but I am only a beginner. 😄
Thanks,
Rodolphe

rtab*.f90 code

Hi everyone,
I find that the rtab*.f90 program files, when converting configurations' quantum labels, do not consider occupations of more than ten electrons in d, f, g, ... subshells. So I changed some code to fix it.

   !  Replace (n) with ^n

   if ((labelstring(i:i).eq.'(').and.(labelstring(i+2:i+2).eq.')')) then
         labelstring(i:i) = '^'
         labelstring(i+2:i+2) = ' '
      end if
   end do

change to

!  Replace (n) with ^n

      if ((labelstring(i:i).eq.'(').and.(labelstring(i+2:i+2).eq.')')) then
         labelstring(i:i) = '^'
         labelstring(i+2:i+2) = ' '
      else if ((labelstring(i:i).eq.'(').and.(labelstring(i+3:i+3).eq.')')) then
         dummystring = labelstring
         labelstring(1:i-1) = dummystring(1:i-1)
         labelstring(i:i) = '^'
         labelstring(i+1:i+1) = '{'
         labelstring(i+2:i+3) = dummystring(i+1:i+2)
         labelstring(i+4:i+4) = '}'
         labelstring(i+5:144) = dummystring(i+4:143)
      end if

And

!  If integer1 and S, P, D, F, G, H, I, K, L, M, N and integer2 replace with (^integer1_integer2S), (^integer1_integer2P), etc

   do l = 1,15
      ncase = 0
      do i = 1,142
         do j = 48,57
            do k = 48,57
               char1 = labelstring(i:i)
               char2 = labelstring(i+1:i+1)
               char3 = labelstring(i+2:i+2)
               if ((ichar(char1).eq.j).and.(ichar(char3).eq.k).and.((char2.ne.'~').and.(char2.ne.' ').and.(char2.ne.'_'))) then
                  dummystring = labelstring
                  labelstring(1:i-1) = dummystring(1:i-1)
                  labelstring(i:i+6) = '(^'//char1//'_'//char3//char2//')'
                  labelstring(i+7:145) = dummystring(i+3:141)
                  ncase = ncase + 1
               end if
            end do
         end do
         if (ncase.eq.1) exit
      end do

change to

!  If integer1 and S, P, D, F, G, H, I, K, L, M, N and integer2 replace with (^integer1_integer2S), (^integer1_integer2P), etc

   do l = 1,15
      ncase = 0
      do i = 1,142
         do j = 48,57
            do k = 48,57
               char1 = labelstring(i:i)
               char2 = labelstring(i+1:i+1)
               char3 = labelstring(i+2:i+2)
               if ((ichar(char1).eq.j).and.(ichar(char3).eq.k).and. ((char2.ne.'~').and.(char2.ne.' ').and.(char2.ne.'_').and.(char2.ne.'}'))) then
                     dummystring = labelstring
                     labelstring(1:i-1) = dummystring(1:i-1)
                     labelstring(i:i+6) = '('//'^'//char1//'_'//char3//char2//')'
                     labelstring(i+7:145) = dummystring(i+3:141)
                     ncase = ncase + 1
               end if
            end do
         end do
         if (ncase.eq.1) exit
      end do

Have a nice day.

Compile error at grasp/src/lib/lib9290/iniest2.f90:82:23

Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)
make[2]: *** [src/lib/lib9290/CMakeFiles/9290.dir/build.make:933: src/lib/lib9290/CMakeFiles/9290.dir/iniest2.f90.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:652: src/lib/lib9290/CMakeFiles/9290.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
