Commit 51662c9d authored by Daniel Wortmann's avatar Daniel Wortmann

Added Docu using mkdocs

# Welcome to FLEUR
This is the documentation of the [MaX release of FLEUR](
For older versions of FLEUR you can find the
[documentation here](
* [Installation of FLEUR]( including some hints for configuration.
* [Running FLEUR]( describes the standard workflow to perform a FLEUR calculation.
* [Using the input-generator]( to generate the full input out of a simple file.
* [XML based IO]( documentation of the input of FLEUR, its features and hints how to use them.
* [The AiiDA interface to FLEUR]( can be used to generate, run and store complex workflows.
If you are a more expert user or developer, you might be interested in:
* The [Fleur gitlab repository.](
* [Information for developers]( with the doxygen documentation of the source.
* [The doxygen documentation of the source code]( You will also find some hints for developing FLEUR there.
* [The coverage analysis]( of the source code showing which parts of the code are covered by the standard tests.
* Discussion of reasons why v27 gives [differences]( to v26.
* A [Guide/Manual]( for developers of FLEUR.
# Configuration and Installation of FLEUR
We are aware of the fact that installing FLEUR can be a tricky task on many machines. While we tried to make the process
as user-friendly as possible, there are still a couple of challenges you might encounter. Please check with your system administrator to
see if all requirements for using FLEUR can be fulfilled on your system. For help, register at the [MailingList]( and post your questions there.
If you manage to compile on some system that can be of general interest, please consider adjusting the '' file in the docs (or report to if you do not know how to do that).
* [QuickInstall](#quick-guide)
* [Requirements](#requirements)
* [The script & cmake](#configure)
* [How to adjust to your configuration](#how-to-adjust-the-configuration)
* [Running the automatic tests](#ci-tests)
# Quick guide
If you are extremely lucky (and/or your system is directly supported by us), installation can be very simple:
* Run the configuration script `'PATH_TO_SOURCE_CODE/`. You can do that in any directory in which the 'build' directory should be created. The script accepts some useful arguments; you can run it with ` -h` to get a list of supported arguments.
* The script creates the build directory and runs cmake. If all goes well (check the output), change to the build directory and run make: `cd build; make`
* If make does not report any error, you are done!
Please be aware that there are different executables that could be built:
* `inpgen`: The input generator used to construct the full input file for FLEUR
* `fleur`: A serial version (i.e. no MPI distributed memory parallelism, multithreading might still be used)
* `fleur_MPI`: A parallel version of FLEUR able to run on multiple nodes using MPI.
Usually only either the serial or the MPI version will be built. You can run the MPI version serially, while it is of course not possible to run the non-MPI version with MPI.
You might want to [run the automatic tests](#ci-tests).
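The quick guide above can be condensed into a short shell session; note that the configuration script's name and all paths are assumptions here and have to be adjusted to your installation:

```shell
cd ~/fleur-work                  # any directory in which 'build' should be created
~/fleur-src/configure.sh         # assumed script name; creates ./build and runs cmake
cd build
make                             # builds inpgen and fleur or fleur_MPI
```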
There are a couple of external dependencies in the build process of FLEUR.
**Required are:**
* *cmake*: The build process uses cmake to configure FLEUR. You should have at least version 3.0; some options might require newer versions. Cmake is available for free at [](
* *Compilers*: You will need a Fortran compiler and a corresponding C-compiler (i.e. the two have to be able to work together via the iso-c bindings of Fortran). Please check our [list of compilers](#compilers) to see if your compiler should work.
* *BLAS/LAPACK*: These standard linear algebra libraries are required. You should try your best not to use a reference implementation from [Netlib]( but look for an optimized version for your system. In general, compiler and/or hardware vendors provide optimized libraries such as the MKL (Intel) or ESSL (IBM). If you do not have access to those, check [openBLAS](
* *libxml2*: this is a standard XML library that is available on most systems. If it is missing on your computer, you should really complain to your admin. *Please note that you might need a development package of this library as well.* To compile this library yourself, see [](
FLEUR can benefit significantly if the following further software components are available. Please be aware that some of these can be difficult to use for FLEUR and see the [Instructions for adjusting your configuration](#configure) for details on how to provide input into the build process to use these libraries.
* *MPI*: Probably most important is the possibility to compile a version of FLEUR running on multiple nodes using MPI. If you have a proper MPI installation on your system this should be straightforward to use.
* *HDF5*: FLEUR can use the HDF5 library for input/output. This is useful in two situations. On the one hand, you might want to use HDF5 for reading/writing your charge density files to avoid a machine-dependent format that can prevent portability; the HDF5 IO also gives you some more features here. On the other hand, you have to use parallel HDF5 if you do IO of the eigenvectors in an MPI-parallel calculation. This is needed if you cannot store the data in memory or want to preprocess the eigenvectors. Please be aware that you need the Fortran-90 interface of HDF5!
* *SCALAPACK/ELPA*: If you use MPI and want to solve a single eigenvalue problem with more than a single MPI task, you need a library with a distributed-memory eigensolver. Here you can use the SCALAPACK or [ELPA]( library. Please note that the ELPA library changed its API several times; hence you might see problems in compiling with it.
* *MAGMA*: FLEUR can also use the MAGMA library to run on GPUs. If you intend to use this feature, please get in contact with us.
You should also check the output of ` -h` for further dependencies and hints.
The `` script found in the main FLEUR source directory can (and should) be used to start the configuration of FLEUR.
It is called as
` [-l LABEL ] [-d] [CONFIG]`
The most used options are:
* -l LABEL specifies a label for the build. This is used to customize the build directory to build.LABEL and can be used
to facilitate different builds from the same source.
* -d specifies a debugging build.
* CONFIG is a string to choose one of the preconfigured configurations. It can be useful if you find one which matches your setup.
More options are available. Please check the output of ` -h`
The `` script performs the following steps:
1. It creates a subdirectory called 'build' or 'build.LABEL'. If this directory is already present, the old directory will be overwritten.
2. It copies the CONFIG-dependent configuration into this directory (this is actually done in the script 'cmake/'). The special choice of "AUTO" for CONFIG will not provide any further configuration but relies completely on cmake. You can specify a config.cmake file in the working directory (from which you call the configure script) to modify this AUTO mode.
3. Finally, cmake is run in the build directory to determine your configuration.
If you specify -d as an argument, the string "debug" will be appended to LABEL and a debugging version of FLEUR will be built, i.e. the corresponding compiler switches will be set.
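For illustration, the options could be combined as follows (the script name `configure.sh` and the labels are assumptions for this sketch):

```shell
./configure.sh -l gfortran        # configure into build.gfortran
./configure.sh -d -l gfortran     # debug build; "debug" is added to the label
./configure.sh AUTO               # no preconfiguration, rely completely on cmake
```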
You might want to check our page on
[how to adjust to your configuration]( if you run into trouble.
# How to adjust the Configuration
While `cmake` and the `` script can determine the correct compilation switches automatically in some cases (mostly those known to us), in many other instances
fine-tuning is needed. In particular you might have to:
* provide hints on which compiler to use
* provide hints on how to use libraries.
## Setting of the compiler to use
By defining the environment variables FC and CC to point to the Fortran and C compiler you can make sure that cmake uses the correct compilers. E.g. you might want to say
`export FC=mpif90`.
Please be aware that the use of CONFIG specific settings might overwrite the environment variable.
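For example, to make cmake pick up MPI compiler wrappers you could set (the wrapper names are examples; use whatever your MPI installation provides):

```shell
# Select the compilers cmake should pick up (example wrapper names)
export FC=mpif90   # Fortran compiler
export CC=mpicc    # matching C compiler
```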
### Adding flags for the compiler
This should be done using the `-flag` option to ``. So for example you might want to say ` -flag "-r8 -nowarn"`.
In general for a compiler [not known](#compilers) in cmake/compilerflags.cmake you need at least an option to specify the promotion of standard real variables to double precision (like the `-r8`). But additional switches can/should be used.
### Adding include directories
For libraries with a Fortran-90 interface (ELPA, HDF5, MAGMA, ...) you will probably have to give an include path. This can
be achieved using the `-includedir` option. So you might want to say something like
` -includedir SOMEPATH`
### Adding linker flags
To add flags to the linker you can:
* add a directory in which the linker looks for libraries with `-libdir SOMEDIR`
* add the corresponding link option(s) with e.g. `-link "-lxml2;-llapack;-lblas"`. Please note that the specification is different from the compiler switches as different switches are separated by ';'.
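Combined with the include option from above, a configuration call for a self-compiled HDF5 might look like this (the script name, paths and library names are assumptions for illustration):

```shell
./configure.sh -includedir /opt/hdf5/include \
               -libdir /opt/hdf5/lib \
               -link "-lhdf5_fortran;-lhdf5;-lxml2;-llapack;-lblas"
```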
### Further options:
There are more options you might find useful. Please
check ` -h` for a list.
FLEUR is known to work with the following compilers:
The Intel Fortran compiler (ifort) is able to compile FLEUR. Depending on the version you might experience the following problems:
1. Versions <13.0 will most probably not work correctly
2. Version 19.0 has issues with the debugging version of FLEUR.
GFortran is known to work with versions newer than 6.3.
The PGI compilers can also compile FLEUR. Here you need at least version 18.4 but might still run into some problems.
After the build has finished you might want to run the automatic tests.
Just type `ctest` in the build directory for this purpose.
Please note:
* The tests run on the computer you just compiled on. Hence a cross-compiled executable will not run.
* You can use the environment variable `juDFT_MPI` to specify any additional command needed to start FLEUR_MPI, e.g. `export juDFT_MPI="mpirun -n 2 "` to run with
two MPI tasks.
* You can use the environment variable `juDFT` to give command line arguments to FLEUR. E.g. say `export juDFT='-mem'`.
* To run a specific test only (or a range of tests) use the `-I` option of ctest (check `ctest -h` for details)
* The tests are run in Testing/work. You can check this directory to see why a specific test fails.
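For example, to run the MPI tests with two tasks you might set the variable before invoking ctest (the mpirun syntax depends on your MPI installation):

```shell
# command prefix the tests use to launch fleur_MPI (note the trailing space)
export juDFT_MPI="mpirun -n 2 "
# then, inside the build directory:
#   ctest          # run all tests
#   ctest -I 3,5   # run only tests 3 to 5
```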
# Running Fleur
Here we deal with the question of how to run FLEUR "by hand". If you are interested in running FLEUR in a scripting environment you might want to check the [AiiDA plug-in](
At first you might notice that there are several executables created in the build process. You might find:
* **inpgen**: The [input generator]( used to construct the full input file for FLEUR
* **fleur** A serial version (i.e. no MPI distributed memory parallelism, multithreading might still be used)
* **fleur_MPI** A parallel version of FLEUR able to run on multiple nodes using MPI.
In most cases you will first run the [input generator]( to create an [inp.xml]( file. Afterwards you will run fleur or fleur_MPI using this inp.xml file.
Please note that fleur/fleur_MPI will always read its setting from an inp.xml file in the current directory.
## Command line options
The run-time behaviour of FLEUR can be modified using command line switches. You should understand that these switches modify the way FLEUR might operate or in some cases determine what FLEUR actually does. If you want to change the calculation setup you should modify the [inp.xml]( file.
Here we document the most relevant command line options. For a full list of available options, please run
`fleur -h`
**General options:**
* `-h`: Prints a help listing all command-line options.
* `-check`: Runs only init-part of FLEUR, useful to check if setup is correct.
* `-debugtime`: Prints out all starting/stopping of timers. Can be useful to monitor the progress of the run.
* `-toXML`: Convert an old **inp**-file into the new [inp.xml]( file.
**Options controlling the IO of eigenvectors/values:**
(not all are available if you did not compile with the required libraries)
* `-eig mem`: no IO, all eigenvectors are stored in memory. This can be a problem if you have little memory and many k-points. Default for serial version of FLEUR. *Only available in serial version of FLEUR.*
* `-eig da`: write data to disk using Fortran direct-access files. Fastest disk IO scheme. *Only available in serial version of FLEUR.*
* `-eig mpi`: no IO, all eigenvectors are stored in memory in a distributed fashion. Uses MPI one-sided communication. Default for MPI version of FLEUR. *Only available in MPI version of FLEUR.*
* `-eig hdf`: write data to disk using HDF5 library. Can be used in serial and MPI version (if HDF5 is compiled for MPI-IO).
**Options controlling the Diagonalization:**
(not all are available if you did not compile with the required libraries)
* `-diag lapack`: Use standard LAPACK routines. Default in FLEUR (if not parallel EVP)
* `-diag scalapack`: Use SCALAPACK for parallel EVP.
* `-diag elpa`: Use ELPA for parallel EVP.
* Further options might be available, check `fleur -h` for a list.
## Environment Variables
There are basically two environment variables you might want to change when using FLEUR.
As FLEUR uses OpenMP, it is generally a good idea to adjust OMP_NUM_THREADS to use
all available cores. While this might happen automatically in your queuing system, you should check that
appropriate values are used; check the output FLEUR writes to standard out.
So you might want to use `export OMP_NUM_THREADS=2` or something similar.
You can use the juDFT variable to set command line switches that do not require an additional argument. For example
`export juDFT="-diag elpa"`
would run FLEUR with these command line switches.
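Putting both variables together, a typical interactive setup could look like this (the values are examples only, not recommendations):

```shell
export OMP_NUM_THREADS=4      # number of OpenMP threads per process (example value)
export juDFT="-diag elpa"     # extra command-line switches passed to FLEUR
```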
# Hybrid MPI/OpenMP Parallelization
The efficient usage of FLEUR on modern supercomputers is ensured by hybrid MPI/OpenMP parallelization. The k-point loop and the eigenvector problem
are parallelized via MPI (Message Passing Interface). In addition to that, every MPI process can be executed on several computer cores with shared memory,
using either the OpenMP (Open Multi-Processing) interface or multi-threaded libraries.
# MPI parallelization
* The k-point parallelisation gives us increased speed when making calculations with large k-point sets.
* The eigenvector parallelisation gives us an additional speed up but also allows us to tackle larger systems by reducing the amount of memory that we use with each MPI process.
Depending on the specific architecture, one or the other or both levels of parallelization can be used.
## k-point Parallelisation
This type of parallelization is always chosen if the number of k-points (K) is a multiple of the number of MPI processes (P). If K/P is not an integer, a mixed parallelization will be attempted and M MPI processes will work on a single k-point, so that K·M/P is an integer. This type of parallelization
can be very efficient, because the three most time-consuming parts of the code (Hamiltonian matrix setup, diagonalization and generation of the new charge density) are independent for different k-points, so there is no need to communicate during the calculation. That is why this type of parallelization works well
even if the communication between the nodes/processors is slow. The drawback of this type of parallelization is that the whole matrix must fit in the memory
available to one MPI process, i.e. sufficient memory per MPI process is required to solve a single eigenvalue problem for one k-point. The scaling is good
as long as many k-points are calculated and the potential generation does not become a bottleneck. Saturation of the memory bandwidth
might cause the speedup to deviate from the ideal.
![Speedup small](img/NaCl-k-MPI.png)
Typical speedup of the k-point parallelization for a small system
(NaCl, 64 atoms, 216 k-points) on a computer cluster (Intel E5-2650V4, 2.2 GHz).
Execution time of one iteration is 3 hours 53 minutes.
## Eigenvector Parallelization
If the number of k-points is not a multiple of the number of MPI processes, every k-point will be parallelized over several MPI processes. It might be necessary
to use this type of parallelization to reduce the memory usage per MPI process, i.e. if the eigenvalue problem is too large. This type of parallelization depends
on external libraries which can solve the eigenproblem on parallel architectures. The FLEUR code contains interfaces to ScaLAPACK, ELPA and Elemental. It is also possible
to use the HDF5 library for the eigenvector IO if needed.
![Eigenvector parallel](img/CoCu-EVP.png)
Example of eigenvector parallelization of a calculation with 144 atoms on the CLAIX (Intel E5-2650V4, 2.2 GHz).
An example of FLEUR memory requirements depending on the amount of MPI ranks.
Test system: CuAg (256 atoms, 1 k-point). Memory usage was measured on
the CLAIX supercomputer (Intel E5-2650V4, 2.2 GHz, 128 GB per node)
## OpenMP parallelization
Modern HPC systems are usually cluster systems, i.e. they consist of shared-memory computer nodes connected through a communication network.
It is possible to use the distributed-memory paradigm (that means MPI parallelization) also inside the node, but in this case the memory available
for every MPI process will be considerably smaller. Imagine you use a node with 24 cores and 120 GB of memory. If you start one MPI process, it will get
all 120 GB; two will only get 60 GB each; and if you start 24 MPI processes, only 5 GB of memory will be available for each of them. The intra-node
parallelism can be utilized more efficiently using a shared-memory programming paradigm, for example the OpenMP interface. In the FLEUR code the hybrid
MPI/OpenMP parallelization is realised either by directly implementing OpenMP pragmas or by using multi-threaded libraries. If you want to profit
from this type of parallelization, you will need ELPA and the multithreaded MKL library.
Timing measurements of the GaAs system (512 atoms) on the CLAIX supercomputer
(Intel E5-2650V4, 2.2 GHz, 24 cores per node, 128 GB per node).
## Parallel execution: best practices
Since there are several levels of parallelization available in FLEUR (k-point MPI parallelization, eigenvalue MPI parallelization and multi-threaded
parallelization), it is not always easy to decide how to use the available HPC resources in the most effective way: how many nodes does one need,
how many MPI processes per node, how many threads per MPI process? First of all, if your system contains several k-points, choose the number of MPI
processes accordingly. If the number of k-points (K) is a multiple of the number of MPI processes (P), then every MPI process will work on a given
k-point alone. If K/P is not an integer, a mixed parallelization will be attempted and M MPI processes will work on a single k-point, so that K·M/P is
an integer. That means, for example, that if you have 48 k-points in your system, it is not a good idea to start 47 MPI processes.
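The divisibility rule above can be checked with a few lines of shell arithmetic (K and P are example values):

```shell
# Does the number of MPI processes P evenly divide the number of k-points K?
K=48
P=16
if [ $(( K % P )) -eq 0 ]; then
    echo "each process works on $(( K / P )) k-points alone"
else
    echo "mixed parallelization: several processes share one k-point"
fi
# prints: each process works on 3 k-points alone
```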
The next question is: how many nodes do I need? That depends strongly on the size of the unit cell you are simulating and the memory size of the nodes
you are simulating on. In the table below you can find some numbers from our experience on a commodity Intel cluster with 120 GB and 24 cores per
node; if your unit cell (and the hardware you use) is similar to what is shown there, it can be a good starting point. The two numbers in the "# nodes" column
show the range from the "minimum needed" to the "still reasonable". Note that our test systems have only one k-point. If your simulation crashes
with an out-of-memory message, try to double the requested resources (after having checked that ulimit -s is set to unlimited, of course ;)).
The recommended number of MPI processes per node can be found in the next column. As for the number of OpenMP threads, on Intel architectures
it is usually a good idea to fill the node with threads (i.e. if the node consists of 24 cores and you start 4 MPI processes, give
each of them 6 threads), but not to use hyper-threading.
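On such a 24-core node, the advice above translates into a launch line like the following (the mpirun options and paths are assumptions; batch schedulers usually provide their own pinning mechanisms):

```shell
export OMP_NUM_THREADS=6      # 4 MPI ranks x 6 threads = 24 cores, no hyper-threading
mpirun -np 4 ./fleur_MPI
```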
Best values for some test cases.
Hardware: Intel Broadwell, 24 cores per node, 120 GB memory.
| Name | # k-points | real/complex | # atoms | Matrix size | LOs | # nodes | # MPI per node |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NaCl | 1 | c | 64 | 6217 | - | 1 | 4 |
| AuAg | 1 | c | 108 | 15468 | - | 1 | 4 |
| CuAg | 1 | c | 256 | 23724 | - | 1 - 8 | 4 |
| Si | 1 | r | 512 | 55632 | - | 2 - 16 | 4 |
| GaAs | 1 | c | 512 | 60391 | + | 8 - 32 | 2 |
| TiO2 | 1 | c | 1078 | 101858 | + | 16 - 128 | 2 |
And last but not least: if you use a node exclusively, bind your processes and check your environment. If the processes are allowed to wander
across the node (which is usually the default), performance can be severely degraded.
# The FLEUR input generator
For those who think that the [Fleur inp.xml]( file is too complicated, contains too many options, or has too complex a format, and for those in need of defaults for their calculation, an input-file generator is provided.
The `inpgen` executable takes a simplified input file and generates defaults for:
* the 2D lattice type (squ,p-r,c-r,hex,hx3,obl) and the symmetry information
* (in film calculations) the vacuum distance and d-tilda
* the atom types and the equivalent atoms
* muffin-tin radii, l-cutoffs and radial grid parameters for the atom types
* plane-wave cutoffs (kmax,gmax,gmaxxc).
* many more specialized parameters ...
In general the input generator does not know:
* Is your system magnetic? If some elements (Fe,Co,Ni...) are in the unit cell, the program sets `jspins=2` and assigns magnetic moments. You might like to change jspins and specify different magnetic moments for your atom types.
* How many k-points will you need? For metallic systems it might be more than for semiconductors or insulators. In the latter cases, the mixing parameters might also be chosen larger.
* Is the specified energy range for the valence band OK? Normally it should be, but it is better to check, especially if LOs are used.
You have to modify your [inp.xml]( file accordingly. Depending on your demands, you might want to change something else, e.g. the XC-functional, the switches for relaxation, use LDA+U etc. ...
# Running inpgen
To call the input generator you typically do
`inpgen < simple_file`
**So please note that the program expects its input from standard input.**
The `inpgen` executable accepts a few command-line options. In particular you might find useful:
`-h` | list of all options
`-explicit` | some input data that is typically not directly provided in the inp.xml file is additionally generated; this includes a list of k-points and a list of symmetry operations
# Basic input
Your input should contain (in this order):
* (a) A title
* (b) Optionally: input switches (whether your atomic positions are in internal or scaled Cartesian units)
* (c) Lattice information (either a Bravais-matrix or a lattice type and the required constants (see below); in a.u.)
* (d) Atom information (an identifier (maybe the nuclear number) and the positions (in internal or scaled Cartesian coordinates) ).
* (e) Optionally: for spin-spiral calculations or in case of spin-orbit coupling we need the Q-vector or the Spin-quantization axis to determine the symmetry.
* (f) Optionally: Preset parameters (Atoms/General)
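Putting items (a)-(d) together, a minimal input file for bulk CsCl could look like the sketch below (assembled from the examples in the following sections; the lattice constant is illustrative, not a converged value):

```
CsCl example (illustrative)
&input cartesian=f /
&lattice latsys='sc' a=7.7 /
2
55 0.0 0.0 0.0
17 0.5 0.5 0.5
```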
## Title
Your title cannot begin with an & and should not contain an ! . Apart from that, you can specify any 80-character string you like.
## Input switches
The namelist input should start with an &input and end with a / . Possible switches are:
film=[t,f] | if .true., assume film calculation (not necessary if dvac is specified)
cartesian=[t,f] | if .true., input is given in scaled Cartesian units,
| if .false., it is assumed to be in internal (lattice) units
cal_symm=[t,f] | if .true., calculate space group symmetry,
| if .false., read in space group symmetry info (file 'sym')
checkinp=[t,f] | if .true., program reads input and stops
inistop=[t,f] | if .true., program stops after input file generation (not used now)
symor=[t,f] | if .true., largest symmorphic subgroup is selected
oldfleur=[t,f] | if .true., only 2D symmetry elements (+I,m_z) are accepted
## An example (including the title):
3 layer Fe film, p(2x2) unit cell, p4mg reconstruction
&input symor=t oldfleur=t /
## Lattice information
There are two possibilities to input the lattice information: either you specify the Bravais matrix (plus scaling information) or the Bravais lattice and the required information (axis lengths and angles).
**First variant:**
The first 3 lines give the 3 lattice vectors; they are in scaled Cartesian coordinates. Then an overall scaling factor (aa) is given in a single line and independent (x,y,z) scaling is specified by scale(3) in a following line. For film calculations, the vacuum distance dvac is given in one line together with a3.
Example: tetragonal lattice for a film calculation:
1.0 0.0 0.0 ! a1
0.0 1.0 0.0 ! a2
0.0 0.0 1.0 0.9 ! a3 and dvac
4.89 ! aa (lattice constant)
1.0 1.0 1.5 ! scale(1),scale(2),scale(3)
The overall scale is set by aa and scale(:) as follows: assume that we want the lattice vectors to be given by
a_i = ( a_i(1) xa , a_i(2) xb , a_i(3) xc )
then choose aa and scale such that xa = aa * scale(1), etc. To make it easy to input square roots, if scale(i)<0 then scale(i) = sqrt(|scale(i)|) is used.
Example: hexagonal lattice
a1 = ( sqrt(3)/2 a , -1/2 a , 0. )
a2 = ( sqrt(3)/2 a , 1/2 a , 0. )
a3 = ( 0. , 0. , c=1.62a )
You could specify the following:
0.5 -0.5 0.0 ! a1
0.5 0.5 0.0 ! a2
0.0 0.0 1.0 ! a3
6.2 ! lattice constant
-3.0 0.0 1.62 ! scale(2) is 1 by default
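The negative first entry in the last line is expanded to sqrt(3), giving the hexagonal axes above; the conversion rule can be checked with a one-liner (awk is used purely for illustration):

```shell
# scale(i) < 0 means scale(i) = sqrt(|scale(i)|), so -3.0 becomes sqrt(3)
awk 'BEGIN { s = -3.0; if (s < 0) s = sqrt(-s); printf "%.6f\n", s }'
# prints: 1.732051
```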
**Second variant:**
Alternatively, you may specify the lattice name and its parameters in a namelist input, e.g.
&lattice latsys='tP' a=4.89 c=6.9155 /
The following arguments are implemented: `latsys`, `a0` (default: 1.0), `a`, `b` (default: `a`), `c` (default: `a`), `alpha` (90 degree), `beta` (90 degree), `gamma` (90 degree). Hereby, `latsys` can be chosen from the following table (intended to work for all entries, up to now not all lattices work). `a0` is the overall scaling factor.
| full name | No | short | other possible names | Description | Variants |
| --- | --- | --- | --- | --- | --- |
| simple-cubic | 1 | cub | cP, sc | cubic-P | |
| face-centered-cubic | 2 | fcc | cF, fcc | cubic-F | |
| body-centered-cubic | 3 | bcc | cI, bcc | cubic-I | |
| hexagonal | 4 | hcp | hP, hcp | hexagonal-P | (15) |
| rhombohedral | 5 | rho | hr, r, R | hexagonal-R | (16) |
| simple-tetragonal | 6 | tet | tP, st | tetragonal-P | |
| body-centered-tetragonal | 7 | bct | tI, bct | tetragonal-I | |
| simple-orthorhombic | 8 | orP | oP | orthorhombic-P | |
| face-centered-orthorhombic | 9 | orF | oF | orthorhombic-F | |
| body-centered-orthorhombic | 10 | orI | oI | orthorhombic-I | |
| base-centered-orthorhombic | 11 | orC | oC, oS | orthorhombic-C, orthorhombic-S | (17,18) |
| simple-monoclinic | 12 | moP | mP | monoclinic-P | |
| centered-monoclinic | 13 | moC | mC | monoclinic-C | (19,20) |
| triclinic | 14 | tcl | aP | | |
| full name | No | short | other possible names | Description |
| --- | --- | --- | --- | --- |
| hexagonal2 | 15 | hdp | | hexagonal-2 (60 degree angle) |
| rhombohedral2 | 16 | trg | hR2, r2, R2 | hexagonal-R2 |
| base-centered-orthorhombic2 | 17 | orA | oA | orthorhombic-A (centering on A) |
| base-centered-orthorhombic3 | 18 | orB | oB | orthorhombic-B (centering on B) |
| centered-monoclinic2 | 19 | moA | mA | monoclinic-A (centering on A) |
| centered-monoclinic3 | 20 | moB | mB | monoclinic-B (centering on B) |
You should give the independent lattice parameters `a,b,c` and angles `alpha,beta,gamma` as far as required.
## Atom information
First you give the number of atoms in a single line. If this number is negative, then we assume that only the representative atoms are given; this requires that the space group symmetry be given as input (see below).
Following are, for each atom in a line, the atomic identification number and the position. The identification number is used later as default for the nuclear charge (Z) of the atom. (When all atoms are specified and the symmetry has to be found, the program will try to relate all atoms of the same identifier by symmetry. If you want to manipulate specific atoms later (e.g. change the spin-quantization axis) you have to give these atoms different identifiers. Since they can be non-integer, you can e.g. specify 26.01 and 26.02 for two inequivalent Fe atoms, only the integer part will be used as Z of the atom.)
The input of the atomic positions can be either in scaled Cartesian or lattice vector units, as determined by the logical `cartesian` (see above). For supercells, it is sometimes more natural to input the positions in scaled Cartesian coordinates.
A possible input (for CsCl ) would be:
55 0.0 0.0 0.0
17 0.5 0.5 0.5
or, for a p4g reconstructed Fe trilayer specifying the symmetry:
26 0.00 0.00 0.0
26 0.18 0.32 2.5
&gen 3
-1 0 0 0.00000
0 -1 0 0.00000
0 0 -1 0.00000
0 -1 0 0.00000
1 0 0 0.00000
0 0 1 0.00000
-1 0 0 0.50000
0 1 0 0.50000
0 0 1 0.00000 /
Here, `&gen` indicates that just the generators are listed (each 3×3 block is the rotation matrix [integers only], the floating-point numbers denote the shift); if you like to specify all symmetry elements, you should start with `&sym`. You furthermore have the possibility to specify a global shift of coordinates with e.g.
&shift 0.5 0.5 0.5 /
or, to introduce additional scaling factors
&factor 3.0 3.0 1.0 /
by which your atomic positions will be divided (the name "factor" is thus slightly counterintuitive).
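For example, with the `&factor 3.0 3.0 1.0 /` line above, an atom entered at (0.54, 0.96, 2.5) ends up at (0.18, 0.32, 2.5), since only x and y are divided by 3; the arithmetic can be checked directly (awk is used purely for illustration, the coordinates are hypothetical):

```shell
# positions are divided component-wise by the factors 3.0, 3.0 and 1.0
awk 'BEGIN { printf "%.2f %.2f %.2f\n", 0.54/3.0, 0.96/3.0, 2.5/1.0 }'
# prints: 0.18 0.32 2.50
```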
## Ending an input file
If inpgen.x should stop reading the file earlier (e.g. you have some comments below in the file) or if inpgen.x fails to recognize the end of the input file (which happens with some compilers), one can use the following line:
`&end /`
## Special cases
### Film calculations
In the case of a film calculation, the surface normal is always chosen in z-direction. A two-dimensional Bravais lattice then corresponds to the three-dimensional one according to the following table:

| 2D lattice | 3D lattice |
| --- | --- |
| square | primitive tetragonal |
| primitive rectangular | primitive orthorhombic |
| centered rectangular | base-centered orthorhombic |
| hexagonal | hexagonal |
| oblique | monoclinic |
The z-coordinates of all atoms have to be specified in Cartesian units (a.u.), since there is no lattice in the third dimension, to which these values could be referred. Since the vacuum boundaries will be chosen symmetrically around the z=0 plane (i.e. -dvac/2 and dvac/2), the atoms should also be placed symmetrically around this plane.
The initial values specified for `a3` and `dvac` (i.e. the third dimension, see above) will be adjusted automatically so that all atoms fit in the unit cell. This only works if the atoms have been placed symmetrically around the z=0 plane.
### Spin-spiral or SOC
If you intend a spin-spiral calculation, or to include spin-orbit coupling, this can affect the symmetry of your system:
* a spin spiral in the direction of some vector q is only consistent with symmetry elements that operate on a plane perpendicular to q, while
* (self-consistent) inclusion of spin-orbit coupling is incompatible with any symmetry element that changes the polar vector of orbital momentum L that is supposed to be parallel to the spin-quantization axis (SQA)
Therefore, we need to specify either q or the SQA, e.g.:
&qss 0.0 0.0 0.1 /
(the 3 numbers are the x,y,z components of q) to specify a spin-spiral in z-direction, or
&soc 1.5708 0.0 /
(the 2 numbers are theta and phi of the SQA) to indicate that the SQA is in x-direction.
Be careful: if symmetry operations that are compatible with the chosen q-vector relate two
atoms in your structure, they will also have the same SQA in the muffin-tins!
## Preset parameters
### Atoms
After you have given general information on your setup, you can specify a number of parameters for one or several atoms that the input-file generator will use while generating the inp file instead of determining the values by itself. The list of parameters for one atom must contain a leading `&atom` flag and end with a `/`. You have to specify the atom for which you set the parameters by using the parameter `element`. If there are more atoms of the same element, you can specify the atom you wish to modify by additionally setting the `id` tag. All parameters available are
id=[atomic identification number] | identifies the atom you wish to modify.
z=[charge number] | specifies the charge number of the atom.
rmt=[muffin-tin radius] | specifies a muffin-tin radius for the atom to modify.
dx=[log increment] | specifies the logarithmic increment of the radial mesh for the atom to modify.
jri=[# mesh points] | specifies the number of mesh points of the radial mesh for the atom to modify.
lmax=[spherical angular momentum] | specifies the maximal spherical angular momentum of the atom to modify.
lnonsph=[nonspherical angular momentum]| specifies the maximal angular momentum up to which non-spherical parts are included to quantities of the atom to modify.