# lustre-hpc-test-setup

The purpose of this repository is to test and implement a small and simple Lustre filesystem setup, in order to:

- become familiar with a verified Lustre setup
- experiment with Lustre features such as file striping
- test common administrative procedures
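As a sketch of the file-striping experiments mentioned above, the standard `lfs` utility can set and inspect a directory's striping layout. The mount point `/mnt/lustre` and the directory name are assumptions for illustration:

```shell
# Stripe new files in a directory across 2 OSTs with a 1 MiB stripe size
# (/mnt/lustre is an assumed client mount point).
lfs setstripe -c 2 -S 1M /mnt/lustre/striped_dir

# Files created under the directory inherit this layout; verify it:
lfs getstripe /mnt/lustre/striped_dir
```

With only two OSSes in this setup, a stripe count of 2 already spreads a file across all object storage targets.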

Small and simple means that we do not (yet) implement MDS/OSS failover or a high-speed interconnect. So, this test setup will implement the following:

- one MDS server
- two OSS servers
- one or more Lustre clients

as the absolute minimum setup, all connected by a 1 Gbps Ethernet switch.
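A minimal sketch of bringing up such a setup with the standard Lustre tools. The device name `/dev/sdb`, the MDS address `10.0.0.1`, and the filesystem name `lustre0` are assumptions for illustration:

```shell
# On the MDS node: format and mount the combined MGS/MDT
# (/dev/sdb and the fsname "lustre0" are assumed values).
mkfs.lustre --fsname=lustre0 --mgs --mdt --index=0 /dev/sdb
mkdir -p /mnt/mdt && mount -t lustre /dev/sdb /mnt/mdt

# On each OSS node: format and mount an OST, pointing at the MGS
# (10.0.0.1 is the assumed MDS/MGS address; increment --index per OST).
mkfs.lustre --fsname=lustre0 --ost --index=0 --mgsnode=10.0.0.1@tcp /dev/sdb
mkdir -p /mnt/ost0 && mount -t lustre /dev/sdb /mnt/ost0

# On a client: mount the whole filesystem over the Ethernet (tcp) LNet.
mkdir -p /mnt/lustre && mount -t lustre 10.0.0.1@tcp:/lustre0 /mnt/lustre
```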

Most of the procedures and files in this repository should also work on the Rocky Linux and AlmaLinux distributions. RHEL is used here to validate the procedures for an enterprise work setting.

It is recommended to use the following bill of materials to implement this setup:

- One dedicated server with 8 compute cores, 16/32 GB of RAM, a hardware RAID1 SSD volume for the MDS, and at least one 1 Gbit Ethernet NIC (example: a refurbished Dell EMC R220 server)
- Two dedicated servers with 8 compute cores, 16/32 GB of RAM, a hardware RAID1 SSD volume for serving OSTs, and at least one 1 Gbit Ethernet NIC (example: refurbished Dell EMC R220 servers)
- One server/workstation/laptop with 16 cores, 32 GB of RAM, 512 GB of disk space, and two 1 Gbit Ethernet connections to provide virtual machines for:
  - Lustre clients
  - iPXE, OS deployment, and other management functionality

Although all of these hosts could be implemented as VMs, it is highly recommended to invest in separate physical machines for the MDS and OSS roles, so that network and disk performance are realistic. The clients and other management hosts can, of course, run in VMs.
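Once the nodes are on the Ethernet switch, LNet connectivity can be checked with the standard `lnetctl` tool. The interface name `eth0` and the peer address `10.0.0.1` are assumptions for illustration:

```shell
# Configure LNet over the Ethernet interface (eth0 is an assumed NIC name).
lnetctl lnet configure
lnetctl net add --net tcp --if eth0

# Show the local NIDs and ping a peer (10.0.0.1 is the assumed MDS address).
lnetctl net show
lnetctl ping 10.0.0.1@tcp
```

A successful `lnetctl ping` between clients and servers is a useful sanity check before mounting the filesystem.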