LU-14520: shrink ldlm_lock to fit in 512 bytes


Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.17.0

    Description

      Patch https://review.whamcloud.com/39811 removes l_lock from struct ldlm_lock, which also eliminates a 4-byte hole following l_lock, for an 8-byte size reduction down to 544 bytes. It would be nice if we could find another 32 bytes of savings so that each lock could fit into a 512-byte allocation.
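
      As a generic illustration of how removing a 4-byte member can also eliminate an adjacent alignment hole (the struct and member names here are made up, not the real ldlm_lock layout):

{code:c}
#include <stdio.h>
#include <stdint.h>

/* Before: a 4-byte member followed by a 4-byte compiler-inserted hole,
 * because the next member requires 8-byte alignment. */
struct with_lock {
	uint32_t lock;		/* 4 bytes */
				/* 4-byte hole */
	uint64_t resource;	/* needs 8-byte alignment */
};

/* After: dropping the 4-byte member also drops the hole, saving 8 bytes. */
struct without_lock {
	uint64_t resource;
};

int main(void)
{
	printf("with: %zu without: %zu saved: %zu\n",
	       sizeof(struct with_lock), sizeof(struct without_lock),
	       sizeof(struct with_lock) - sizeof(struct without_lock));
	return 0;
}
{code}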

      While there isn't yet a clear path to the full reduction, there are a number of fields that are too large and could be shrunk:

      • l_bl_ast_run looks like it could be a single bit (4 bytes if packed somewhere else)
      • l_lvb_len looks like it could fit into a __u16, since it is limited by the maximum layout size, itself capped by XATTR_SIZE_MAX less a small amount for the xattr header (2 bytes)
      • l_lvb_type only needs 4 bits (4 bytes if packed somewhere else)
      • l_req_mode and l_granted_mode basically never change on a lock, and could fit into 8 bits by declaring the enum with a field width :8 (6 bytes)
      • l_readers and l_writers are mostly accessed as booleans, and while they still need to be counters, I don't think we'll ever have 4 billion threads in the kernel accessing the same lock at one time. They could easily be shrunk to 24 bits (16M concurrent threads), or even a bit smaller to make room for the few other bitfields (I wouldn't be comfortable with only 64K threads, since that might be reached on a big NUMA machine), with l_req_mode and l_granted_mode put into the high bits. (2 bytes)

      That is 18 bytes, so if we could find a couple of the many {{list_head}}s that could be shared, it would be enough. That would save many MB of RAM on the servers.
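
      As a rough sketch (hypothetical, not from an actual patch), the narrowed fields discussed above could be declared with C bit-fields along these lines; only the fields under discussion are shown, the stand-in enum values are illustrative, and the real struct ldlm_lock has many more members:

{code:c}
/* __u16/__u32 come from <linux/types.h> in kernel code */
#include <linux/types.h>

/* minimal stand-in for the existing enum ldlm_mode (full list elided) */
enum ldlm_mode {
	LCK_EX = 1,
	LCK_PW = 2,
	/* ... */
};

struct ldlm_lock_packed_sketch {
	/* request/granted modes almost never change; 8 bits each is plenty */
	enum ldlm_mode	l_req_mode:8;
	enum ldlm_mode	l_granted_mode:8;

	/* single-bit flag and 4-bit LVB type packed into spare bits */
	__u32		l_bl_ast_run:1,
			l_lvb_type:4;

	/* 24-bit counters still allow 16M concurrent threads per lock */
	__u32		l_readers:24;
	__u32		l_writers:24;

	/* bounded by XATTR_SIZE_MAX less the xattr header, so 16 bits fit */
	__u16		l_lvb_len;
};
{code}

      Alternatively, as suggested above, the two mode fields could sit in the freed-up high bits of the 24-bit reader/writer counters rather than as separate bit-fields.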

      Attachments

        Activity

          People

            Assignee: adilger (Andreas Dilger)
            Reporter: adilger (Andreas Dilger)
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved: