Introduction

Spack (https://spack.io/) is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for administrators of large supercomputing centres, where many users and application teams share common software installations on clusters. Individual users can also use it to install software environments tailored exactly to their needs.

We provide Docker images with preinstalled Spack, its configuration for the hardware available at CSCS, and helper scripts that simplify using Spack in a Dockerfile.

Install helper script

In the image, we provide a spack-install-helper script that helps build a list of packages for a desired architecture. The script can be used as follows:

spack-install-helper --target <target arch> [--add-repo <repo>] [--only-dependencies <spec>] <list of specs>

Where

--target is a mandatory argument specifying the target architecture. Possible values are alps-zen2, alps-a100, alps-gh200, alps-mi200, and alps-mi300a.

--add-repo adds a Spack package repository (optional).

--only-dependencies installs only the dependencies of the given spec. This is useful if you want to install a package manually, e.g. for debugging purposes, or together with --add-repo when developing a package's package.py (optional).

list of specs is any list of specs that can be passed to the spack install command.
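As an illustration, the options can be combined as follows. This is a sketch only: the repository path and the spec mypackage@1.0 are placeholders, not real values.

```shell
# Illustrative: install only the dependencies of a package you want to
# build by hand, pulling its recipe from a custom Spack repository.
# /path/to/my-spack-repo and mypackage@1.0 are placeholders.
spack-install-helper --target alps-gh200 \
    --add-repo /path/to/my-spack-repo \
    --only-dependencies "mypackage@1.0" \
    "git" "cmake"
```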

Building Docker images

It is good practice to keep Docker images as small as possible, with only the software needed and nothing more. To support this philosophy, we create our image in a multistage build (https://docs.docker.com/build/building/multi-stage/). In the first stage, Spack is used to install the software stack we need. In the second stage, the software installed in the first stage is copied over, without the build dependencies and without Spack. After that, we can add anything else we need to the image: for example, build our own software or prepare tests.

Docker images naming scheme

We provide two different images that can be used in the first and second stages of the multistage build. The image spack-build is used in the first stage and has Spack installed. The image spack-runtime is the same, but without Spack; it contains only the scripts that make the software installed with Spack available.

Both images are available with different versions of the installed software; this is encoded in the Docker tag. The tag of the spack-build image has the scheme spack<version>-<os><version>-<arch><version>, e.g. spack-build:spack0.21.0-ubuntu22.04-cuda12.4.1 is an image based on Ubuntu 22.04 with CUDA 12.4.1 and Spack 0.21.0. The tag of the spack-runtime image follows the same scheme, but without the leading spack<version>-, since Spack is not installed in this image, e.g. spack-runtime:ubuntu22.04-cuda12.4.1. It is strongly recommended to always use images with the same OS and architecture for the first and second stages.
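The scheme can be illustrated by assembling the example tags from their components; the version values below are simply the ones used in the examples on this page.

```shell
# Components of the example tags used on this page
SPACK_VERSION=0.21.0
OS=ubuntu22.04
ARCH=cuda12.4.1

# spack-build carries the Spack version prefix; spack-runtime does not
echo "spack-build:spack${SPACK_VERSION}-${OS}-${ARCH}"
echo "spack-runtime:${OS}-${ARCH}"
```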

We provide images based on Ubuntu 22.04 LTS for three different architectures:

  •  CPU (x86_64)
  •  CUDA (x86_64 + A100 and arm64 + GH200)
  •  ROCm (x86_64 + MI250)

Docker registry

We provide these images in two Docker registries:

  • JFrog hosted at CSCS (jfrog.svc.cscs.ch)
    • FROM $CSCS_REGISTRY/docker-ci-ext/base-containers/public/spack-build:<tag>
    • FROM $CSCS_REGISTRY/docker-ci-ext/base-containers/public/spack-runtime:<tag>
    • Available only on the CSCS network
    • Recommended for CI/CD workflows at CSCS
  • GitHub Container Registry (github.com/orgs/eth-cscs/packages)
    • FROM ghcr.io/eth-cscs/docker-ci-ext/base-containers/spack-build:<tag>
    • FROM ghcr.io/eth-cscs/docker-ci-ext/base-containers/spack-runtime:<tag>
    • Available from everywhere
    • Recommended for manual workflows
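For a manual workflow, the GHCR images can be pulled directly. This is a sketch: the tag shown is just the CUDA example from above; pick the tag matching your target.

```shell
# Pull the build and runtime images from GitHub Container Registry
# (tag is the CUDA example used elsewhere on this page)
docker pull ghcr.io/eth-cscs/docker-ci-ext/base-containers/spack-build:spack0.21.0-ubuntu22.04-cuda12.4.1
docker pull ghcr.io/eth-cscs/docker-ci-ext/base-containers/spack-runtime:ubuntu22.04-cuda12.4.1
```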

Example Dockerfile

Use this Dockerfile template. Adjust the spack-install-helper command as needed, in particular the target architecture and the list of packages. Add any further commands after fix_spack_install, or drop them entirely if you only need the software installed by Spack.

# use spack to install the software stack
# CSCS_REGISTRY must be declared before the first FROM so that it can be
# expanded in the FROM lines; pass it with --build-arg when building
ARG CSCS_REGISTRY
FROM $CSCS_REGISTRY/docker-ci-ext/base-containers/public/spack-build:spack0.21.0-ubuntu22.04-cuda12.4.1 AS builder

# number of processes used for building the spack software stack
ARG NUM_PROCS

RUN spack-install-helper --target alps-gh200 \
    "git" "cmake" "valgrind" "python@3.11" "vim +python +perl +lua"

# end of builder container, now we are ready to copy necessary files

# copy only relevant parts to the final container
FROM $CSCS_REGISTRY/docker-ci-ext/base-containers/public/spack-runtime:ubuntu22.04-cuda12.4.1

# it is important to keep the paths, otherwise your installation is broken
# all these paths are created with the above `spack-install-helper` invocation
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/._view /opt/._view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh

# Some boilerplate to get all paths correctly - fix_spack_install is part of the base image
# and makes sure that all important things are being correctly setup
RUN fix_spack_install

# Finally install software that is needed, e.g. compilers
# It is also possible to build compilers via spack and let all dependencies be handled by spack
RUN apt-get -yqq update && apt-get -yqq upgrade \
 && apt-get -yqq install build-essential gfortran \
 && rm -rf /var/lib/apt/lists/*
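Assuming the template above is saved as Dockerfile, the image can be built as follows. This is a sketch: the image name my-sw-stack is just an example, and the CSCS_REGISTRY value shown is the CSCS JFrog host listed above; adjust both for your setup.

```shell
# Build the image; NUM_PROCS controls the parallelism of the Spack stage
docker build \
    --build-arg CSCS_REGISTRY=jfrog.svc.cscs.ch \
    --build-arg NUM_PROCS=$(nproc) \
    -t my-sw-stack .
```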




