LU-181: Lustre memory usage


Details

    • Type: Improvement
    • Resolution: Incomplete
    • Priority: Major

    Description

      This is a quote from Andreas:

      "Originally my thought was 4kB per inode (about 2kB for the inode itself, and 2kB for the ldlm_lock+ldlm_resource), but then I realized that we have an ldlm_lock (over 400 bytes today) for every client that is caching this resource.

      That means on a filesystem with 100k clients it consumes 800kB for every pointer in struct ldlm_lock for every inode cached by all the clients. It consumes 50MB for ldlm_lock for all of the clients to cache a single inode. That equates to only 20 locks per GB of RAM, which is pretty sad.

      Taking a quick look at struct ldlm_lock, there is a ton of memory wastage that could be avoided quite quickly simply by aligning the fields better for 64-bit CPUs. There are a number of other fields, like l_bl_ast that can be made smaller (it is a boolean flag that could at least be shrunk to a single byte, and stuck with the other "byte flags"), and l_readers/l_writers are only > 1 on a client, and it is limited to the number of threads concurrently accessing the lock so 16 bits is already overkill.

      There are also fields like l_blocking_ast, l_completion_ast, l_glimpse_ast, and l_weigh_ast that are almost always identical on a client or server, and are determined at compile time, so it would be trivial to replace them with a pointer to a pre-registered or even static struct ldlm_callback_suite, saving 2.4MB per widely-cached inode alone.

      There are also fields that are only ever used on the client or the server, and grouping those into a union would not only save memory, I think it would clarify the code somewhat to better understand how the fields in a lock are used."
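      To spell out the arithmetic behind the figures above (assuming 64-bit
      pointers and roughly 512 bytes per ldlm_lock, in line with the "over
      400 bytes" quoted):

          100,000 clients x 8 bytes/pointer       =    800,000 bytes ~  800kB per pointer field, per widely-cached inode
          100,000 clients x ~512 bytes/ldlm_lock ~= 51,200,000 bytes ~  50MB of ldlm_lock per widely-cached inode
          1GB of RAM / ~50MB per inode           ~=  20 widely-cached inodes per GB
          4 AST pointers -> 1 suite pointer       =  24 bytes/lock x 100,000 clients ~ 2.4MB saved per widely-cached inode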
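      As a rough illustration of the field-packing point, a repacked struct
      could group the 8-byte members first and shrink the boolean and small
      count fields to single bytes or 16 bits. This is only a sketch under
      those assumptions; the omitted members and exact types are not the
      real struct ldlm_lock definition.

          #include <stdint.h>

          /* Sketch only: order members from widest to narrowest so the
           * compiler leaves no padding holes on 64-bit CPUs. */
          struct ldlm_lock_sketch {
                  /* 8-byte-aligned members first */
                  struct ldlm_resource       *l_resource;
                  struct ldlm_callback_suite *l_callbacks;   /* see the suite sketch below */

                  /* bounded by the number of threads using the lock
                   * concurrently, so 16 bits is ample */
                  uint16_t                    l_readers;
                  uint16_t                    l_writers;

                  /* boolean flags shrunk to one byte each and grouped
                   * together rather than each taking a padded word */
                  uint8_t                     l_bl_ast;
                  /* ... other byte flags and remaining fields ... */
          };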
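      The shared callback-suite idea could look roughly like the following:
      the four per-lock AST pointers are replaced by one pointer to a suite
      that is defined once for the client and once for the server. The
      typedef and field names here are assumptions for illustration, not the
      existing Lustre API.

          #include <stddef.h>
          #include <stdint.h>

          /* Sketch: one compile-time-defined suite shared by every lock,
           * turning four 8-byte pointers per lock into a single pointer. */
          typedef int           (*ldlm_blocking_ast_t)(void *lock, void *desc, void *data, int flag);
          typedef int           (*ldlm_completion_ast_t)(void *lock, uint64_t flags, void *data);
          typedef int           (*ldlm_glimpse_ast_t)(void *lock, void *reqp);
          typedef unsigned long (*ldlm_weigh_ast_t)(void *lock);

          struct ldlm_callback_suite {
                  ldlm_blocking_ast_t   lcs_blocking;
                  ldlm_completion_ast_t lcs_completion;
                  ldlm_glimpse_ast_t    lcs_glimpse;
                  ldlm_weigh_ast_t      lcs_weigh;
          };

          /* e.g. a single static suite for all client-side locks; each
           * struct ldlm_lock then stores only &client_lock_callbacks. */
          static const struct ldlm_callback_suite client_lock_callbacks = {
                  .lcs_blocking   = NULL,   /* the client blocking AST goes here */
                  .lcs_completion = NULL,   /* the client completion AST */
                  .lcs_glimpse    = NULL,   /* the client glimpse AST */
                  .lcs_weigh      = NULL,   /* the client weigh AST */
          };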
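      Likewise, the client-only/server-only split could be expressed as a
      union, roughly as below; which fields belong to which side is an
      assumption for illustration, not taken from the actual code.

          /* Sketch: fields used on only one side share storage, so a lock
           * never pays for both sets, and the grouping documents which
           * side actually uses each field. */
          struct ldlm_lock_side_state {
                  union {
                          struct {
                                  /* client-only state, e.g. local cache
                                   * bookkeeping for the lock */
                                  void *cache_data;
                          } cli;
                          struct {
                                  /* server-only state, e.g. a back-pointer
                                   * to the client export holding the lock */
                                  struct obd_export *lock_export;
                          } srv;
                  } u;
          };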


            People

              Assignee: Liang Zhen (Inactive)
              Reporter: Liang Zhen (Inactive)
              Votes: 0
              Watchers: 5
