Lustre / LU-11022

FLR1.5: "lfs mirror" usability for Burst Buffer



    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: Lustre 2.13.0


      I've been going through a simple exercise for how to use FLR to mirror/unmirror files for a burst-buffer application. The workflow would be something like:

      1. job startup script has directives that specify the list of existing files that are used for input/output, and the directory used for output
      2. job scheduler processes job startup script at appropriate time before job launch to mirror input/output files into burst buffer (flash-based OSTs in a special OST pool[*]) and mark them "prefer" so that the clients will use those replicas
      3. job scheduler specifies default directory layout for output directory, using DoM and/or FLR to keep new files on flash storage
      4. job runs and does IO entirely to flash storage
      5. optionally a background task does file resync/migration to copy modified files to HDD-based OSTs
      6. job scheduler runs post-job script to resync/migrate all of the new and modified files to HDD-based OSTs, drops flash mirror copy on input/output files if they are not used by another job in the queue. This avoids the need to scan the OSTs and manage OST space on demand.

      [*] We don't have any way to prevent users from using a pool if they want to. We need some kind of OST/pool quota to limit the amount of space a user can consume on a given OST/pool. It might be desirable to allow privileged users (e.g. job scheduler) to still create files on an OST/pool, even if it exceeds the user's quota, so that they can stage files there.
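A hedged sketch of step 2, assuming a flash OST pool named "flash" and example paths (not from the ticket); the component ID must be read from the verbose layout output:

```shell
# Add one replica on the flash pool for each input/output file.
lfs mirror extend -N -p flash /scratch/job/input.dat

# Look up the lcme_id of the new flash component, then mark it
# "prefer" so clients use that replica for reads and writes.
lfs getstripe -v /scratch/job/input.dat
lfs setstripe --comp-set -I <comp_id> --comp-flags=prefer /scratch/job/input.dat
```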

      The #1 item is not immediately in my control.

      I was trying out what commands would be used for #2. The obvious choice is lfs mirror extend -N<copies> /path/to/file, but one problem I see is that "-N<copies>" means add <copies> mirrors to the file, rather than make the total number of mirrors equal to <copies>. This is problematic, since lfs mirror extend will keep adding mirrors to the file even if it already has <copies> or more, but it is not an insurmountable problem (in most cases the caller can use "lfs getstripe -N" to get the current number of mirror copies, then call "lfs mirror extend -N$((copies - current))").

      • There should be an option in lfs mirror extend to specify something like -N=2 to indicate that only 2 mirrors should be created with the given parameters (though "=2" won't work because getopt_long() will just treat that the same as -N 2). I don't think it would be acceptable to change the meaning of "-N" in 2.12 after it has been available in 2.11.
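The workaround described above could be sketched as follows (the file path, pool name, and desired count are examples):

```shell
# Hedged sketch: make -N behave like an absolute mirror count by
# computing the number of mirrors still needed before extending.
copies=3                                      # desired total mirrors
current=$(lfs getstripe -N /path/to/file)     # current mirror count
if [ "$((copies - current))" -gt 0 ]; then
    lfs mirror extend -N$((copies - current)) -p flash /path/to/file
fi
```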

      For #3 and #4 we would set a default layout on the output directory to create files with DoM + PFL layouts to keep the output files entirely on flash.
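For example, a default layout along these lines could be set on the output directory (sizes, stripe count, and the "flash" pool name are illustrative, not from the ticket):

```shell
# Hedged sketch: keep the first 1MiB of each new file on the MDT (DoM)
# and stripe the remainder across the "flash" OST pool.
lfs setstripe -E 1M -L mdt -E -1 -p flash -c 4 /scratch/job/output
```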

      For #5 we could use a ChangeLog user to follow files from each JobID and resync them (to HDD-based OSTs) in the background as they are closed, but it would make sense to apply a policy for this (e.g. migrate only 1/4 of incremental checkpoints out of the burst buffer).
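A rough sketch of such a ChangeLog consumer, assuming JobID logging is enabled; the MDT name, JobID, and registered user ID ("cl1") are examples:

```shell
# Register a ChangeLog user on the MDT (prints an ID such as "cl1").
lctl --device lustre-MDT0000 changelog_register

# Watch for CLOSE records tagged with the job's JobID.
lfs changelog lustre-MDT0000 | awk '/CLOSE/ && /j=myjob/ {print}'

# Acknowledge processed records so the MDT can purge them.
lfs changelog_clear lustre-MDT0000 cl1 0
```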

      For #6 the job scheduler would use lfs mirror extend or lfs mirror resync to migrate the files specified in the job submission script from the flash OSTs to HDD OSTs. What is difficult is removing the flash OST replicas afterward. The lfs mirror split command requires specifying an explicit mirror ID, but lfs getstripe has no option to extract the mirror ID for a component. This raises the need for several new options:

      • lfs mirror split should allow specifying lfs getstripe selection options like --comp-flags=prefer or --pool=flash to choose which replicas to remove.
      • lfs mirror split should accept --component-id=<comp_ID> (as reported by lfs getstripe) and remove the entire mirror containing that component.
      • lfs getstripe should allow printing the mirror ID for matching components.
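With the options that exist today, step 6 could be sketched as below; the mirror ID must be read manually from the verbose layout, which is exactly the gap the proposals above would close (paths and the mirror ID are examples):

```shell
# Push modified data from the flash replica to the HDD replica.
lfs mirror resync /scratch/job/output/ckpt.0

# Note the lcme_mirror_id of the flash mirror in the verbose layout.
lfs getstripe -v /scratch/job/output/ckpt.0

# Split off the flash mirror and destroy it (-d) to free flash space.
lfs mirror split --mirror-id 2 -d /scratch/job/output/ckpt.0
```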


              Assignee: adilger Andreas Dilger
              Reporter: adilger Andreas Dilger