This repository contains scripts and configurations for running the SPPARKS software and generating data in a High-Performance Computing (HPC) environment. The scripts have been tested on the Snellius supercomputer.
- About SPPARKS
- Prerequisites and Environment Setup
- Generate your Configuration File
- Execute SPPARKS
- Final Notes
- Acknowledgement
## About SPPARKS

SPPARKS is a parallel Monte Carlo code for on-lattice and off-lattice models that includes algorithms for kinetic Monte Carlo (KMC), rejection kinetic Monte Carlo (rKMC), and Metropolis Monte Carlo (MMC).
It is developed by Sandia National Laboratories and is used here for modelling additive manufacturing processes via Potts model simulations, which evolve the microstructure in the presence of a moving laser spot that heats the material.
Project page and official documentation: https://spparks.github.io/
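For intuition, here is a minimal sketch of the Metropolis flavour of these algorithms. This is not SPPARKS code; the ten spin states, the 32x32 lattice, and the zero-temperature acceptance rule are assumptions made purely for illustration:

```python
# Illustrative only -- not SPPARKS code. One zero-temperature Metropolis
# sweep of a 2D Potts model; SPPARKS performs updates like this in parallel.
import random

Q, N = 10, 32                       # assumed number of spin states and lattice size
spins = [[random.randrange(Q) for _ in range(N)] for _ in range(N)]

def unlike_neighbours(i, j, s):
    """Energy proxy: number of unlike nearest neighbours (periodic boundaries)."""
    nbrs = [((i + 1) % N, j), ((i - 1) % N, j), (i, (j + 1) % N), (i, (j - 1) % N)]
    return sum(1 for a, b in nbrs if spins[a][b] != s)

for i in range(N):
    for j in range(N):
        candidate = random.randrange(Q)
        # Metropolis at T=0: accept a new spin only if it does not raise the energy.
        if unlike_neighbours(i, j, candidate) <= unlike_neighbours(i, j, spins[i][j]):
            spins[i][j] = candidate
```

SPPARKS applies this kind of spin update across large lattices in parallel.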
## Prerequisites and Environment Setup

- Install SPPARKS locally:
SPPARKS (and its dependency Stitch) is currently being integrated into the EasyBuild community GitHub repository (https://github.com/easybuilders). This integration aims to make SPPARKS installations easier to access and manage within the scientific and engineering communities.
To install SPPARKS from these open pull requests, run the following commands:

    module load 2022 eb/4.9.1
    eblocalinstall --from-pr 18049 --include-easyblocks-from-pr 2948 -r --rebuild
    eblocalinstall --from-pr 18050 --include-easyblocks-from-pr 2948 -r --rebuild

Now you can load it as a module:

    module purge
    module load 2022
    module load spparks/16Jan23-foss-2022a

- Activate the Virtual Environment and Install Dependencies:
Load the Python module and create a virtual environment to manage your Python packages.

    module load Python/3.10.4-GCCcore-11.3.0
    python -m venv venv

After activating the virtual environment, install the required libraries:

    source venv/bin/activate
    pip install numpy
    pip install PyYAML

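To quickly verify the setup (a minimal check, assuming the virtual environment is active):

```python
# Minimal sanity check that the dependencies installed above are importable.
import numpy
import yaml  # PyYAML is imported under the name "yaml"

print("numpy", numpy.__version__)
print("PyYAML", yaml.__version__)
```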
- Clone the Repo:

Finally, get the scripts from this repo:

    git clone git@github.com:sara-nl/spparks_hpc.git
    cd spparks_hpc

## Generate your Configuration File

This step creates candidate parameter configurations from a predefined parameter space, which is specified in a YAML file.
You can run the script by submitting a job to the cluster using sbatch:

    sbatch run_config_gen.sh

This script will:
- Read the parameters from a YAML file.
- Create all possible permutations (a sketch of this step is shown below).
- Write the parameter configurations to one or more config files.
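The permutation step boils down to a Cartesian product over the parameter space. Here is a minimal sketch of that idea; the YAML keys, values, and output format are hypothetical, not the repository's actual schema:

```python
# Hypothetical sketch of the permutation step; the parameter names, YAML
# layout, and output file are illustrative assumptions.
import itertools
import yaml

# Stand-in for the repo's YAML parameter-space file.
yaml_text = """
speed: [5, 10]      # e.g. laser scan speed values
hatch: [25, 50]     # e.g. hatch spacing values
"""
space = yaml.safe_load(yaml_text)

keys = list(space)
with open("configs.txt", "w") as f:
    for combo in itertools.product(*(space[k] for k in keys)):
        # One line per configuration, e.g. "speed=5 hatch=25".
        f.write(" ".join(f"{k}={v}" for k, v in zip(keys, combo)) + "\n")
```

Each resulting configuration then corresponds to one SPPARKS run.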
## Execute SPPARKS

Once the parameter space has been written out as configuration files, these configurations are executed on SPPARKS to create the dataset.
You can run SPPARKS by submitting a job:

    sbatch run_spparks.sh

The script will:
- Copy the relevant files to scratch storage.
- Go over the config file to fetch the correct parameter configuration.
- Amend the SPPARKS input script with the correct parameter values (a sketch of this step follows below).
- Run SPPARKS with MPI.
Important: remember to copy your own SPPARKS input scripts into the working folder. Input scripts are named in.*; to see how they are structured and which commands they contain, see SPPARKS Commands.
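A minimal sketch of the amend-and-run step follows; the placeholder syntax, file names, rank count, and the executable name spk_mpi are assumptions for illustration, not the repository's actual job logic:

```python
# Hypothetical sketch: placeholder syntax, file names, and the executable
# name (spk_mpi) are illustrative assumptions.
import subprocess

params = {"SPEED": "10", "HATCH": "25"}  # one configuration from the config file

# Tiny stand-in for an in.* template; real SPPARKS input scripts are longer.
template = "variable SPEED equal {SPEED}\nvariable HATCH equal {HATCH}\n"
with open("in.run", "w") as f:
    f.write(template.format(**params))   # amend the input script

# Launch SPPARKS under MPI (adjust the executable and rank count to your build).
subprocess.run(["mpirun", "-np", "4", "spk_mpi", "-in", "in.run"], check=True)
```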
## Final Notes

- The scripts have been tested on Snellius; make sure you have enough storage space to generate the data listed in the config file.
- For more information about getting access to Snellius, refer to the Access to compute services page.
- For more detailed information about SPPARKS parameters and options, refer to the Official SPPARKS Documentation.
- For information about SPPARKS input scripts, refer to SPPARKS Commands.
- Parts of the created data (2D slices) have been uploaded to this Zenodo repository.
## Acknowledgement

This project is a collaborative effort between SURF and ESA (TEC Directorate), as part of the digitalization program focused on advanced manufacturing for space applications.
It has been made possible with the support of the EuroCC project implementation in the Netherlands (NCC Netherlands), funded by the European High-Performance Computing Joint Undertaking (Grant Agreement 101101903).
If you have any questions about the code or methods used in this repository, you can reach out to monica.rotulo@surf.nl and michael.mallon@esa.int.