Cluster Deployment Guide
Step-by-step: take what you learned here to a real HPC cluster.
This guide assumes you have SSH access to a cluster running Linux (RHEL/Rocky/CentOS/Ubuntu),
and that you have write access to either your home directory or a shared project directory
(typically /projects/ or /software/).
You do not need root access — Spack installs entirely in user space.
Get Spack on the cluster
Clone Spack into your home or shared project directory. Use a tagged release for stability.
cd $HOME                       # or /projects/yourgroup/software
git clone -c feature.manyFiles=true \
    https://github.com/spack/spack.git
cd spack
git checkout v1.1.0            # use a stable release tag
. share/spack/setup-env.sh
Add the setup line to your ~/.bashrc so Spack loads on every login:
echo '. $HOME/spack/share/spack/setup-env.sh' >> ~/.bashrc
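To confirm the shell integration works, open a fresh login shell and run a quick sanity check (the exact version printed depends on the tag you checked out):

type spack           # should report a shell function once setup-env.sh is sourced
spack --version      # should print the release you checked out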
For a shared, team-wide install, clone Spack into /software/spack instead and add setup-env.sh to the system-wide /etc/profile.d/.
Detect system compilers and externals
Always detect system-installed packages before building anything: registering the cluster's CMake, Python, OpenMPI, and similar tools as externals saves hours of compilation.
spack compiler find
spack compiler list

# Find common system packages:
spack external find cmake python openssl zlib \
    git curl tar gzip perl autoconf automake libtool \
    openssh slurm pmix hwloc libfabric
Review and edit ~/.spack/packages.yaml to ensure the right versions are picked:
spack config edit packages
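For reference, a hand-edited external entry looks like the sketch below; the version and prefix are placeholders, so keep whatever spack external find actually detected on your system:

packages:
  cmake:
    externals:
    - spec: cmake@3.26.5     # placeholder version
      prefix: /usr           # placeholder prefix
    buildable: false         # never rebuild, always use the system CMake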
Configure for the cluster architecture
Set the correct target architecture and compiler requirements. The wrong target means slow binaries at best and illegal-instruction crashes at worst.
# ~/.spack/packages.yaml
packages:
  all:
    target: [x86_64_v3]      # Haswell and newer (most clusters)
    # or: skylake, icelake, zen3, zen4, neoverse_v1 (ARM)
    require: "%gcc@11.3.1"
  mpi:
    require: openmpi         # or cray-mpich, mpich, intel-mpi
  blas:
    require: openblas        # or intel-mkl, cray-libsci
  # Mark MPI as external if the cluster provides it:
  openmpi:
    externals:
    - spec: openmpi@4.1.6 +pmi
      prefix: /usr/mpi/gcc/openmpi-4.1.6/
    buildable: false         # never rebuild the cluster MPI
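Spack can report the microarchitecture it detects on the current node, which is a quick way to cross-check the target value above; login nodes sometimes differ from compute nodes, so prefer running this on a compute node:

spack arch --target           # microarchitecture detected on this node
spack arch --known-targets    # all targets Spack knows about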
To pick the right target value, check the CPU with lscpu | grep "Model name" or uname -m. For Intel Ice Lake use target: [icelake]; for third-generation AMD EPYC use target: [zen3].
Add a binary cache (mirror)
The E4S binary cache provides pre-built packages for common HPC configurations. Using it cuts installation time from hours to minutes.
spack mirror add E4S https://cache.e4s.io

# Trust the E4S signing key:
spack buildcache keys --install --trust

# Verify it works:
spack buildcache list openmpi
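As an end-to-end test, you can attempt a cache-only install of a small package; whether a binary matching your compiler and target is actually cached varies, so treat this as a sketch rather than a guaranteed result:

spack install --cache-only bzip2    # fails fast if no matching binary exists in the cache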
For air-gapped clusters, create a local mirror first on a connected machine:
# On a machine with internet access:
spack mirror create -d /path/to/mirror \
    openmpi openblas fftw hdf5 petsc

# Copy /path/to/mirror to the cluster, then:
spack mirror add local file:///shared/spack-mirror
Create and use a Spack environment
Always use environments in production. They make installs reproducible and shareable.
mkdir -p ~/envs/cfd-project && cd ~/envs/cfd-project
# Create spack.yaml (copy from Templates page)
cat > spack.yaml << 'EOF'
spack:
  specs:
  - openmpi fabrics=ucx
  - openblas threads=openmp
  - fftw +mpi +openmp
  - hdf5 +mpi
  - petsc +mpi +hypre +hdf5
  packages:
    mpi:
      require: openmpi
    blas:
      require: openblas
  concretizer:
    unify: true
    reuse: false    # set true to reuse existing installs
EOF
spack env activate .
spack concretize --fresh
spack install -j8 # adjust -j to your node's core count
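When the install finishes, it is worth confirming that the environment contains what you expect before handing it to users:

spack find              # all specs installed in the active environment
spack find -lv petsc    # hashes and variants for one root spec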
Commit spack.yaml and spack.lock to git for full reproducibility. Anyone can recreate the exact environment from these two files.
Set up Lmod modules
Generate Lmod modulefiles so cluster users can load packages with module load.
# Generate Lmod modules:
spack module lmod refresh --delete-tree

# Add to ~/.bashrc (or /etc/profile.d/):
export MODULEPATH=$HOME/spack/share/spack/lmod/linux-rocky9-x86_64_v3/Core:$MODULEPATH

# Now users can do:
module avail
module load openmpi/4.1.6-gcc-11.3.1
module load hdf5/1.14.3
module load petsc/3.20.1
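The Core directory referenced above is only populated once the Lmod generator knows which compilers count as core; a minimal ~/.spack/modules.yaml sketch, assuming the gcc@11.3.1 compiler used earlier, looks like this:

modules:
  default:
    lmod:
      core_compilers:
      - gcc@11.3.1     # modules built with this compiler land in Core/
      hierarchy:
      - mpi            # MPI-dependent modules appear after an MPI module is loaded

After editing modules.yaml, rerun spack module lmod refresh --delete-tree to regenerate the tree.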
Or use the environment directly:
spack env activate -p ~/envs/cfd-project # All packages are now in PATH
For a team-wide setup, generate the module tree under /software/modules and add it to the system MODULEPATH.
Submit a SLURM job
Load your Spack environment inside the job script to make packages available to all nodes.
#!/bin/bash
#SBATCH --job-name=openfoam-run
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH --time=02:00:00
#SBATCH --partition=compute
#SBATCH --output=job_%j.log

# Load Spack environment
. $HOME/spack/share/spack/setup-env.sh
spack env activate -p ~/envs/cfd-project

# Or use modules:
# module load openmpi/4.1.6
# module load openfoam/2512

# Run OpenFOAM:
decomposePar -force
mpirun -np $SLURM_NTASKS icoFoam -parallel > log.icoFoam 2>&1
reconstructPar
sbatch job.sh
squeue -u $USER
Share with your team and rebuild on new clusters
The spack.yaml + spack.lock pair fully describes your environment. Share it, version-control it, and rebuild on any cluster.
# On the original cluster — create buildcache:
spack buildcache create -m local --only package -u \
$(spack find --format '/{hash}')
# On a new cluster — install from buildcache:
git clone https://github.com/yourgroup/hpc-env.git
cd hpc-env
spack env activate .
spack mirror add local file:///shared/buildcache
spack install --cache-only # no recompilation!
Cray / HPE clusters (special notes)
Cray/HPE systems (Perlmutter, ARCHER2, LUMI) need special treatment because of the vendor MPI and programming environment (PE) modules.
# On Cray: never build MPI — use cray-mpich
spack external find cray-mpich

# Use the Cray compiler wrappers if available:
spack compiler find    # finds cc, CC, ftn wrappers

# Cray PE modules conflict with Spack — unload first:
module unload PrgEnv-cray
module load PrgEnv-gnu
spack compiler find

# ~/.spack/packages.yaml:
# mpi:
#   require: cray-mpich
Troubleshooting checklist
When something goes wrong on a real cluster, work through this list:
1. Check the build log:
   spack install --log-format=junit --log-file=build.log <pkg>
   cat ~/.spack/logs/<pkg>-build.log | grep -i 'error\|failed'
2. Check what was actually installed:
   spack find -lv <pkg>
3. Verify the spec is what you expected:
   spack spec -I <pkg> +variant1 +variant2
4. Check for stale locks:
   rm ~/.spack/opt/spack/.spack-lock
5. Clean and retry:
   spack clean -a
   spack gc
   spack install <pkg>
6. Check the error catalog: https://learnspack.com/errors
7. Ask the Spack community:
   https://github.com/spack/spack/discussions
   https://spack.slack.com
If your cluster already has a site-wide Spack installation or a shared binary cache, add them to your spack.yaml:
spack:
  upstreams:
    system-spack:
      install_tree: /path/to/site/spack/opt/spack
  mirrors:
    local: file:///path/to/binary/cache
Run spack buildcache list to see available cached binaries.
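To confirm the upstream and mirror are picked up, you can dump the merged configuration and check where a package resolves from; hdf5 here is just an example package:

spack config get upstreams    # merged upstreams configuration Spack will use
spack find -p hdf5            # install paths under the site tree indicate an upstream hit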