⚠ Unofficial · Not affiliated with LLNL or Spack · spack.io →
Step 1 of 4
What software are you building?
Search or browse by category.
Step 2 of 4
Add-ons & companion tools
Recommended extras for your software. Pre-selected ones are commonly used together.
Step 3 of 4
MPI & Compiler
Recommendations are shown based on your software choice.
MPI Implementation
OpenMPI 5.0.3
Best compatibility, UCX transport, widest software support
Most common
MPICH 4.2
Argonne (ANL) reference implementation; basis for Cray MPICH on HPE Cray EX (Shasta)
MVAPICH2 3.0
Best InfiniBand HDR/NDR performance (Ohio State)
Intel MPI 2021.13
Best on Intel clusters, Omni-Path and Ethernet
Needs oneAPI
No MPI
Serial or shared-memory only (OpenMP)
Compiler
GCC 11.3.1
Rocky Linux 9 / RHEL 9 default — safest choice
Most compatible
GCC 12.3
Better AVX-512, C++20, recommended for GROMACS
GCC 13.3
Latest stable, best auto-vectorization, C++23
GCC 14.2
Cutting edge — check cluster availability first
Intel oneAPI 2024 (icx/ifx)
icx/ifx — fastest on Intel Xeon, best for WRF/LAMMPS
Often 15-30% faster on Intel
LLVM/Clang 18
Modern C++, good sanitizers, cross-platform
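As a sketch, the MPI and compiler picked in this step map onto a Spack spec. The package and version numbers below are illustrative assumptions, not output of this wizard — substitute your own software:

```yaml
# Hypothetical spack.yaml fragment: build GROMACS with GCC 12.3 and
# OpenMPI 5.0.3. `%compiler@version` pins the compiler and
# `^dependency@version` pins a dependency in standard Spack spec syntax.
spack:
  specs:
    - gromacs %gcc@12.3 ^openmpi@5.0.3
```

Which compiler versions are actually available depends on your cluster; check with `spack compilers`.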
Step 4 of 4
Hardware
Target architecture and GPU support.
GPU / Accelerator
CPU only
Standard compute nodes — no GPU
Most clusters
NVIDIA CUDA 12.4
H100 / H200 (Hopper), RTX 4090 (Ada Lovelace)
H100 / H200
NVIDIA CUDA 12.1
A100, RTX 3090/4080 — Ampere/Ada
NVIDIA CUDA 11.8
V100, A30 — older cluster nodes
AMD ROCm 6.1
MI300 series, MI250X — El Capitan, Frontier, LUMI
AMD ROCm 5.7
MI200 series — older AMD GPU clusters
CPU Architecture
x86_64 v3
Haswell+ (2014+) — broadest compatibility
Safest choice
x86_64 v4
Skylake-X+ (2017+) — full AVX-512
Intel Ice Lake / Sapphire Rapids
Ice Lake (2021): AVX-512 VNNI; Sapphire Rapids (2023): adds AMX
AMD EPYC Milan (Zen3)
AMD Milan — common in 2021-2023 clusters
AMD EPYC Genoa (Zen4)
AMD Genoa/Bergamo — AVX-512, new clusters
ARM Neoverse V1
AWS Graviton3 — SVE-capable ARM HPC
ARM Neoverse V2
AWS Graviton4, NVIDIA Grace — latest ARM
IBM Power9 / Power10
Summit, Sierra — IBM Power architecture
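The hardware choices in this step become `target=` and GPU variant settings in the spec. A minimal sketch, assuming an AMD Zen3 node with an NVIDIA A100 (the package name is an illustrative example):

```yaml
# Hypothetical fragment: target a Zen3 CPU and an A100 GPU.
# `target=` sets the CPU microarchitecture Spack optimizes for;
# `+cuda cuda_arch=80` enables the CUDA variant where a package
# supports it (A100 is compute capability 8.0).
spack:
  specs:
    - lammps +cuda cuda_arch=80 target=zen3
  packages:
    all:
      target: [zen3]
```

Verify that a package actually exposes `+cuda` (or `+rocm`) with `spack info <pkg>` before adding the variant.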
Your spack.yaml is ready
Custom HPC Environment

What's included and why

    📁 How to use this file
    Disclaimer: This YAML is a starting point — verify variants with spack info <pkg> before running on production systems. spack.yaml specs may need tuning for your specific cluster configuration.
    1. Download spack.yaml (or copy) — do not paste directly into the terminal
    2. spack env create myenv ./spack.yaml
    3. spack env activate -p myenv
4. spack concretize --fresh — Spack resolves all dependencies automatically
    5. spack install
spack.yaml only lists your top-level packages — Spack handles all libraries, compilers, and dependencies automatically.
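Putting the steps together, a generated file might look like the following minimal sketch. The package, compiler, and GPU choices here are assumed examples, not what this wizard produced for you — verify variants with `spack info <pkg>`:

```yaml
# Hypothetical spack.yaml: GROMACS with GCC 12.3, OpenMPI 5.0.3,
# and CUDA for an A100, on a Zen3 host. Only top-level specs are
# listed; `spack concretize` resolves the full dependency tree.
spack:
  specs:
    - gromacs +mpi +cuda cuda_arch=80 %gcc@12.3 ^openmpi@5.0.3
  packages:
    all:
      target: [zen3]
  concretizer:
    unify: true
  view: true
```

`unify: true` asks the concretizer for one consistent dependency graph across all specs, and `view: true` links the installed packages into a single directory tree for convenient PATH setup.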