Lustre / LU-6585

Virtual block device (lloop)



    • Type: New Feature
    • Status: Closed
    • Priority: Minor
    • Resolution: Won't Do
    • Affects Version/s: None
    • Fix Version/s: Lustre 2.10.0


      Tracking bug for fixing the Lustre lloop driver. There are a number of improvements to be made internally to better integrate with the loop driver in the upstream kernel, which will allow removal of a lot of code that is just copied directly from the existing loop.c file.

      While most applications deal with files, in a number of cases it is desirable to export a block device interface on a client in an efficient manner. Use cases include loopback images for VM hosting, containers for very small files, swap, etc. A prototype block device driver was created for Lustre, based on the Linux loop.c driver, but it was never completed and has become outdated as kernel APIs have evolved. The goal of this project is to update or rewrite the Lustre lloop driver so that it can be used for high-performance block device access in a reliable manner.

      A further goal is to investigate and resolve deadlocks in the lloop IO path by using preallocation or memory pools to avoid allocating memory under memory pressure. This would allow swapping on the client, which is useful on HPC systems where the clients have no local disks. When running on an RDMA network (typical for Lustre), the space for replies is reserved in advance, so no memory allocation is needed to receive replies from the server, unlike on TCP-based networks.
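      As a rough illustration of the preallocation idea, the sketch below shows a minimal userspace analogue of the kernel mempool pattern: a fixed set of IO buffers is reserved at setup time, so the hot path (e.g. writing out a swap page) never calls the allocator and cannot deadlock waiting for memory. In an actual kernel driver this role would be played by mempool_create()/mempool_alloc(); the `io_pool` type and function names here are hypothetical, for illustration only.

```c
#include <assert.h>
#include <stdlib.h>

#define POOL_SIZE 8       /* buffers reserved up front (illustrative) */
#define BUF_BYTES 4096    /* one page per buffer */

struct io_pool {
    void *free_list[POOL_SIZE];
    int   nfree;
};

/* All allocation happens here, at setup time, before any IO is issued.
 * Returns 0 on success, -1 if the reservation could not be made. */
int io_pool_init(struct io_pool *p)
{
    p->nfree = 0;
    for (int i = 0; i < POOL_SIZE; i++) {
        void *buf = malloc(BUF_BYTES);
        if (!buf)
            return -1;
        p->free_list[p->nfree++] = buf;
    }
    return 0;
}

/* Grab a preallocated buffer; never touches the allocator, so it is
 * safe to call under memory pressure.  Returns NULL if the pool is
 * temporarily exhausted (the caller would then throttle, not allocate). */
void *io_pool_get(struct io_pool *p)
{
    return p->nfree ? p->free_list[--p->nfree] : NULL;
}

/* Return a buffer to the pool once its IO has completed. */
void io_pool_put(struct io_pool *p, void *buf)
{
    p->free_list[p->nfree++] = buf;
}
```

      Because a completed IO always returns its buffer to the pool, forward progress is guaranteed as long as at least one in-flight IO can complete, which is the same argument the kernel's mempool machinery relies on.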

      • Salvage/replace existing prototype block device driver
      • High performance loop driver for Lustre files
      • Avoid memory allocation deadlocks under load
      • Bypass kernel VFS for efficient network IO
      • Stretch Goal: swap on Lustre on RDMA network


      People: James A Simmons (simmonsja), Andreas Dilger (adilger)