Kessel is a tool to create and drive continuous integration (CI) and developer
workflows through a unified interface across multiple code projects and
environments.
It serves as a driver and integration layer for build systems and package
managers, providing a flexible library of reusable components to build and
execute complex workflows consistently.
Modern developer and CI pipelines often need to run
multi-step processes, such as setting up environments, generating and
configuring build systems, compiling and testing software, or deploying
dependencies. Kessel streamlines these workflows by defining them in a
consistent, composable way that works for both interactive development and
automated pipelines.
A key goal of Kessel is to bridge the gap between CI pipeline definitions and
developer command-line workflows. By offering a common abstraction for running
sequences of steps, it reduces redundancy, simplifies maintenance, and ensures
alignment between what developers do locally and what CI executes remotely.
As part of its adoption, Kessel-based deployment workflows have established a
baseline dependency configuration for commonly used HPC systems, enabling a
shared foundation for software deployment and development across multiple code
teams.
We present HARD, a performance-portable code for simulating multiphysics and
multiscale systems built on the Flexible Computational Science Infrastructure
(FleCSI). FleCSI provides a clean, extensible programming model that allows
developers to focus on numerical methods rather than low-level system details,
while offering lightweight, expressive wrappers around Kokkos that closely
resemble native C++ semantics. Guided by this philosophy, we have extended
HARD to support a broad range of Kokkos execution policies and memory
spaces.
HARD is designed to incorporate multiple physics modules, including
hydrodynamics, radiation transport, multi-material, and high-explosive
modeling, while maintaining performance portability across emerging
heterogeneous architectures such as El Capitan, Venado, and Crossroads. HARD
also inherits FleCSI’s support for multiple distributed-memory and task-based
backends, including MPI, Legion, and HPX.
In this talk, we outline core design principles, present representative
implementation examples, and report benchmark results spanning diverse
multiphysics scenarios, highlighting HARD’s performance and portability on
next-generation computing systems.
2025
High-Performance Software Foundation (HPSF) Conference
Spack makes it easy to install dependencies for our software on multiple HPC
platforms. However, there is little guidance on how to structure Spack
environments for larger projects, share common Spack installations across code
teams, and use them effectively for continuous integration and
development.
This presentation will share lessons learned from deploying chained Spack
installations for multiple code teams at LANL on various HPC platforms, both
on site and on other Tri-Lab systems: how to structure such deployments for
reusability and upgradability, and how to make them deployable even on
air-gapped systems. It will also show how we use Spack's build facilities to
drive CMake-based projects on GitLab for continuous integration, without
replicating build-configuration logic in GitLab files, while giving developers
an easy-to-follow workflow for recreating CI runs in various configurations.
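As a rough illustration of what a chained deployment can look like (the
environment name, specs, and path below are hypothetical, not the actual LANL
configuration), a team environment can reuse a shared, site-maintained Spack
installation through Spack's `upstreams` mechanism in its `spack.yaml`:

```yaml
# Hypothetical team environment that chains to a shared, site-maintained
# Spack installation: packages already present upstream are reused rather
# than rebuilt, keeping the team's own install tree small.
spack:
  specs:
    - cmake
    - mpi
  upstreams:
    site-deployment:
      install_tree: /projects/shared/spack/opt/spack
  concretizer:
    unify: true
```

With such an environment activated, `spack install` resolves against the
upstream installation first, which is what makes the shared baseline reusable
across teams and upgradable in one place.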
Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy (Online via Zoom)
Dates:
Feb 2021
Description:
Due to COVID, we created an online-only version of our course. It introduces
students to the basics of building, configuring, and operating
High-Performance Computing Clusters while students remotely connect to a
preassembled and cabled but unconfigured training cluster. The goal of this course is to give students a
solid foundation for understanding the components of a cluster, how they are
typically configured, and what challenges operating such a machine
entails.
Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy
Dates:
Jan 2020 – Feb 2020
Description:
A two-week course introducing the basics of building, configuring and
operating High-Performance Computing Clusters. The goal of this course is to
give students a solid foundation for understanding the components of a
cluster, how they are typically configured, and what challenges operating
such a machine entails.
Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy
Dates:
March 2019
Description:
A two-week course introducing the basics of
building, configuring and operating High-Performance Computing Clusters. The
goal of this course is to give students a solid foundation for understanding
the components of a cluster, how they are typically configured, and what
challenges operating such a machine entails.