ComIn 0.5.1
ICON Community Interface
Installation

Levante

To build ICON together with ComIn, use the following commands.

It's recommended to run heavy compilation tasks in an interactive session with:

salloc -A<account> -pinteractive

First, clone the icon repository:

git clone --recursive https://gitlab.dkrz.de/icon/icon-model.git icon

Create a build directory and configure it:

mkdir icon-build
cd icon-build
../icon/config/dkrz/levante.intel --enable-comin

If you want to build the Python adapter, add the flag --enable-bundled-python=comin and specify the PYTHON environment variable:

spack load /2dd4cn4 # loads python3.12
../icon/config/dkrz/levante.intel --enable-comin --enable-bundled-python=comin PYTHON=$(which python)

Finally, run the compilation:

make -j16

DWD NEC

Limitations

Note that support for ComIn plugins written in the Python programming language is limited to the x86 NEC Vector Hosts. Native support for tasks on the NEC Vector Engines is currently under investigation.

Build instructions

  • Create a build directory.
    mkdir build && cd build
    mkdir VH VE
  • Build ICON on vector host.
    cd VH
    ../../config/dwd/rcl.VH.gcc --enable-comin
    make -j6
  • Build plugins on vector host.
    cd externals/comin/build
    module purge
    module load apps sx/default gcc/11.2.0 mpi/3.5.0 libfyaml/0.8-VH-gnu unsupported cmake/3.26.4
    sed -i 's/-static//g' CMakeCache.txt
    cmake -DCMAKE_C_COMPILER=mpincc -DCMAKE_C_FLAGS='-vh' -DCMAKE_Fortran_COMPILER=mpinfort -DCMAKE_Fortran_FLAGS='-vh' -DCMAKE_CXX_COMPILER=mpinc++ -DCOMIN_ENABLE_EXAMPLES=ON .
    make
  • Build ICON on vector engine.
    cd ../VE
    ../../config/dwd/rcl.VE.nfort --enable-comin
    make -j6
  • Build plugins on vector engine.
    cd externals/comin/build
    module purge
    module load sx/default nfort/5.1.0 nc++/5.1.0 mpi/3.5.0 libfyaml/0.8-sx unsupported cmake/3.26.4
    cmake -DCMAKE_C_COMPILER=mpincc -DCMAKE_Fortran_COMPILER=mpinfort -DCMAKE_CXX_COMPILER=mpinc++ -DCOMIN_ENABLE_EXAMPLES=ON .
    make

Modify the run script.

This step is almost the same as explained for Levante_gcc, except that one must also add the path of the plugin's shared library (or libraries) to VE_LD_LIBRARY_PATH.

export LD_LIBRARY_PATH="${path_to_plugin_on_VH}:$LD_LIBRARY_PATH"
export VE_LD_LIBRARY_PATH="${path_to_plugin_on_VE}:$VE_LD_LIBRARY_PATH"

or use the auxiliary function add_comin_setup, which does the same for both the vector host and the vector engine automatically.

path_to_plugin="/externals/comin/build/plugins/simple_fortran"
add_comin_setup "$path_to_plugin"
  • Run the experiment.
    cd /build/VE
    ./make_runscripts --all
    cd run
    qsub name_of_your_runscript.run

About mpi4py

To use MPI functionality from Python, we recommend using mpi4py. However, there are some caveats for a correct installation.

On Levante, the module that is loaded with module load python3 is based on conda/mambaforge.

⚠️ Don't use any conda environments on an HPC system. ⚠️ Conda ships its own MPI binaries, which are probably incompatible with the MPI that ICON uses.

The recommended way to install mpi4py is to setup your own virtual environment:

$ # load the correct MPI (using module or spack)
$ python -m venv /path/to/venv
$ source /path/to/venv/bin/activate
$ pip install --no-binary mpi4py mpi4py

On Levante you can use any Python you find with spack find -lvp python. Note that you have to build ComIn (i.e. the Python adapter) with the same Python version (see Installation on Levante).
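To verify that mpi4py picked up the intended MPI rather than a conda-bundled one, you can print the MPI library version string it was linked against; this is a quick sanity check, assuming the virtual environment created above is active:

```shell
# Print the MPI implementation mpi4py was linked against.
# The output should mention the MPI you loaded via module/spack,
# not a conda-provided MPI.
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"
```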

Standalone

ComIn-standalone setup on Levante (DKRZ)

You can build ComIn without ICON as the host model and test plugins using the standalone replay_tool that comes with ComIn.

If you plan to use the Python API, it is recommended to create a virtual environment for ComIn. For Python 3.12, use these commands:

spack load /mnm # tells mpi4py which MPI to use
/sw/spack-levante/miniforge3-24.11.3-2-Linux-x86_64-rf4err/bin/python -m venv venv
source venv/bin/activate
python -m pip install numpy mpi4py xarray # add any other packages you need

Load the modules needed to build ComIn using this command:

module load gcc/11.2.0-gcc-11.2.0 netcdf-c/4.8.1-gcc-11.2.0 netcdf-fortran/4.5.3-gcc-11.2.0
export MPI_ROOT='/sw/spack-levante/openmpi-4.1.2-mnmady'

Clone the ComIn git repository with:

module load git
git clone git@gitlab.dkrz.de:icon-comin/comin.git

Alternatively, download a public release tarball from https://gitlab.dkrz.de/icon-comin/comin.

Then follow the standard CMake workflow: create a build directory, configure and build.

mkdir comin/build
cd comin/build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCOMIN_ENABLE_EXAMPLES=ON -DCOMIN_ENABLE_PYTHON_ADAPTER=ON -DCOMIN_ENABLE_REPLAY_TOOL=ON -DBUILD_TESTING=ON -DCMAKE_C_COMPILER="${MPI_ROOT}/bin/mpicc" -DCMAKE_CXX_COMPILER="${MPI_ROOT}/bin/mpic++" -DCMAKE_Fortran_COMPILER="${MPI_ROOT}/bin/mpif90" ..
make -j6

The CMake command enables:

  • Python adapter by setting -DCOMIN_ENABLE_PYTHON_ADAPTER=ON,
  • Example plugins by setting -DCOMIN_ENABLE_EXAMPLES=ON,
  • CI/CD tests (ctest command) by setting -DBUILD_TESTING=ON.

For debugging the build, you can run make with the VERBOSE=1 option (i.e. make VERBOSE=1) to print the full compiler command lines.

If you need to pass additional flags to the compiler, you can pass -DCMAKE_Fortran_FLAGS=-additional-flag to CMake. The variables for the C and C++ compilers are CMAKE_C_FLAGS and CMAKE_CXX_FLAGS.
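As a sketch, re-running the configure step with extra flags might look like this; the flag values here are arbitrary placeholders, not recommendations:

```shell
# Reconfigure in the build directory with additional compiler flags
# (the "-g" values below are placeholders).
cmake -DCMAKE_Fortran_FLAGS="-g" -DCMAKE_C_FLAGS="-g" -DCMAKE_CXX_FLAGS="-g" ..
make -j6
```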

The Intel Fortran compiler inserts linebreaks into the program output at column 80 by default, making parsing of the logs difficult. This behavior can be switched at compile time by passing the flag -no-wrap-margin, or at runtime by setting the environment variable FORT_FMT_NO_WRAP_MARGIN to YES.
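For example, either of the following avoids the 80-column wrapping — the first at compile time, the second at runtime:

```shell
# At compile time: add the flag when configuring with the Intel compiler.
cmake -DCMAKE_Fortran_FLAGS="-no-wrap-margin" ..

# At runtime: set the environment variable in the runscript instead.
export FORT_FMT_NO_WRAP_MARGIN=YES
```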

Run tests in the build directory with:

ctest
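If a test fails, standard CTest options help to narrow things down; the pattern in the second command assumes the parallel tests carry "parallel" in their names:

```shell
# Show the output of failing tests directly.
ctest --output-on-failure

# Run only tests whose names match a pattern.
ctest -R parallel --output-on-failure
```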

If parallel tests fail, add these environment variables to the runscript comin/build/test/parallel/run.sh in the build directory:

export OMPI_MCA_osc="ucx"
export OMPI_MCA_pml="ucx"
export OMPI_MCA_btl="self"
export UCX_HANDLE_ERRORS="bt"
export OMPI_MCA_pml_ucx_opal_mem_hooks=1


For YAXT functionality, e.g. the replay_tool's halo synchronization feature, you must first install YAXT at a specific path and then reconfigure by adding the CMake option -DCMAKE_PREFIX_PATH=path_to_install_yaxt. When using YAXT, set the LD_LIBRARY_PATH environment variable to include the YAXT installation path before running cmake.

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:<YAXT_INSTALL_PATH>
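Putting both steps together, a reconfigure against a YAXT installation might look like this, where <YAXT_INSTALL_PATH> stands for the prefix you installed YAXT to:

```shell
# Make the YAXT libraries findable before configuring.
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:<YAXT_INSTALL_PATH>

# Point CMake at the YAXT installation and rebuild in comin/build.
cmake -DCMAKE_PREFIX_PATH=<YAXT_INSTALL_PATH> ..
make -j6
```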

ComIn-standalone setup on NEC (DWD)

Building ComIn stand-alone on NEC requires separate builds for the vector engine (VE) and the vector host (VH).

mkdir build_VH build_VE

First step: stand-alone build for VE.

cd build_VE
module purge
module load sx/default nfort/5.1.0 nc++/5.1.0 mpi/3.5.0 libfyaml/0.8-sx unsupported cmake/3.26.4
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_C_COMPILER=mpincc -DCMAKE_Fortran_COMPILER=mpinfort -DCMAKE_CXX_COMPILER=nc++ -DCMAKE_CXX_FLAGS="-stdlib=libc++" -DCOMIN_ENABLE_EXAMPLES=ON ..
make

Second step: stand-alone build for VH.

cd build_VH
module purge
module load apps sx/default gcc/11.2.0 mpi/3.5.0 libfyaml/0.8-VH-gnu unsupported cmake/3.26.4
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_C_COMPILER=mpincc -DCMAKE_C_FLAGS='-vh' -DCMAKE_Fortran_COMPILER=mpinfort -DCMAKE_Fortran_FLAGS='-vh' -DCMAKE_CXX_COMPILER=mpinc++ -DCMAKE_CXX_FLAGS='-vh' -DCOMIN_ENABLE_EXAMPLES=ON ..
make