LU-7085: Toward smaller memory allocations on wide-stripe file systems

Details

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.8.0
    • Affects Version/s: Lustre 2.8.0
    • Labels: None
    • Environment: Test nodes with 2.7.57-gea38322
    • Severity: 3

    Description

      I'm testing on a fairly recent master build that includes the patch from LU-6587 (refactor OBD_ALLOC_LARGE to always do kmalloc first). That band-aid has been great at improving performance on our wide-stripe file systems, but under memory pressure or fragmentation it will still fall back to vmalloc to satisfy requests. Since users' applications tend to consume most of the available RAM, memory pressure is the common case, so I'd like to see if there are any opportunities to reduce allocation sizes.

      Anywhere we need to allocate sizeof(something) * num_stripes, we should check to see if there's any way to avoid per-stripe information or at least reduce sizeof(something).
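      For illustration only, here is a minimal sketch of the two ingredients above: alloc_large() mimics the kmalloc-first/vmalloc-fallback behaviour that OBD_ALLOC_LARGE has after LU-6587 (it is not the actual Lustre macro, and the GFP flags are illustrative), and struct per_stripe_state is a made-up stand-in for anything allocated once per stripe.

      /* Sketch only -- not the real OBD_ALLOC_LARGE. */
      #include <linux/slab.h>
      #include <linux/vmalloc.h>
      #include <linux/types.h>

      static void *alloc_large(size_t size)
      {
              /* Try the slab allocator first; fail fast (no retries, no
               * warning) so a fragmented system falls back to vmalloc. */
              void *ptr = kmalloc(size, GFP_NOFS | __GFP_NOWARN | __GFP_NORETRY);

              if (ptr == NULL)
                      ptr = vmalloc(size); /* virtually contiguous fallback */
              return ptr;
      }

      /* Hypothetical per-stripe record: the ticket's point is that the
       * total request is sizeof(record) * stripe_count, so a 2000-stripe
       * file turns even a small record into a multi-page allocation. */
      struct per_stripe_state {
              u64 pss_offset;
              u32 pss_flags;
      };

      static void *alloc_per_stripe(unsigned int stripe_count)
      {
              return alloc_large(sizeof(struct per_stripe_state) * stripe_count);
      }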


          Activity


            jgmitter Joseph Gmitter (Inactive) added a comment -

            Landed for 2.8.0

            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/17476/
            Subject: LU-7085 lov: trying smaller memory allocations
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: e1e56300cac30fe8d9db296107905f5936648c3c

            gerrit Gerrit Updater added a comment -

            Yang Sheng (yang.sheng@intel.com) uploaded a new patch: http://review.whamcloud.com/17476
            Subject: LU-7085 lov: trying smaller memory allocations
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 8c7f009755ef03080c599347cf0452a9bd7cf5f9
            ys Yang Sheng added a comment -

            I'll try the second way first, and then fall back to the first way if the effect is still not satisfactory.

            Thanks,
            YangSheng

            yujian Jian Yu added a comment -

            Thank you, Yang Sheng. Which way would you prefer to implement?

            ys Yang Sheng added a comment -

            After a few tests and a discussion with the clio expert, it looks like the lis_subs buffer allocation cannot be avoided entirely. One way is to allocate each osc separately, so that we can skip the oscs that don't need to sync data, but that really adds some complexity. The other way is simply to try to reduce the struct size.

            Thanks,
            YangSheng

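            To make the second option concrete, here is a hypothetical before/after of a per-stripe sub-IO element; the field names are invented and are not the real lov_io_sub layout, but they show how packing the small state fields shrinks every lis_subs-style array by a constant factor.

            /* Invented layouts, for sizing illustration only. */
            #include <linux/types.h>

            struct cl_io;                           /* opaque here */

            struct sub_io_before {                  /* 24 bytes on x86_64 */
                    struct cl_io *sub_io;
                    int           sub_refcheck;
                    int           sub_reenter;
                    int           sub_borrowed;
                    int           sub_stripe;
            };

            struct sub_io_after {                   /* 16 bytes on x86_64 */
                    struct cl_io *sub_io;
                    u16           sub_stripe;
                    u8            sub_flags;        /* reenter/borrowed packed into bits */
            };

            /* For a 2000-stripe file the per-stripe array drops from ~47 KiB
             * to ~31 KiB, making a plain kmalloc far more likely to succeed. */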
            yujian Jian Yu added a comment -

            I think we can skip invoking cl_sync_file_range when unlinking a file.

            Hi Yang Sheng, are you going to create a patch for this?

            ys Yang Sheng added a comment - edited

            The large 'lio->lis_subs' buffer is allocated from cl_io_init, which is invoked by cl_sync_file_range. The code path is as below:
            iput => iput_final => drop_inode => ll_delete_inode => cl_sync_file_range

            I think we can skip invoking cl_sync_file_range when unlinking a file.

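            A minimal sketch of that idea, assuming the unlink case can be recognised from i_nlink inside ll_delete_inode; the guard and the cl_sync_file_range arguments are reproduced from memory to illustrate the shape of the change, not the patch that actually landed.

            /* Illustrative only; assumes llite internal headers (struct inode
             * from <linux/fs.h>, cl_sync_file_range from the clio layer). */
            static void ll_delete_inode_sketch(struct inode *inode)
            {
                    /* i_nlink == 0 means an unlink-driven delete rather than a
                     * plain cache eviction, so flushing dirty pages to the OSTs
                     * buys nothing and the wide-stripe per-stripe cl_io buffers
                     * (lis_subs) never need to be allocated. */
                    if (S_ISREG(inode->i_mode) && inode->i_nlink > 0)
                            cl_sync_file_range(inode, 0, OBD_OBJECT_EOF,
                                               CL_FSYNC_LOCAL, 1);

                    /* ... truncate page cache and clear the inode as before ... */
            }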
            yujian Jian Yu added a comment -

            There should not be a large buffer allocation for unlinks; that would be a bug, I think.

            Hi Yang Sheng,

            Could you please look into the above issue? Thank you.


            adilger Andreas Dilger added a comment -

            There should not be a large buffer allocation for unlinks; that would be a bug, I think.

            As for the max reply size decay, that was something I proposed during patch review but was never implemented. I agree that if access to wide-striped files is rare, it may make sense to reduce the allocations again, but the hard part is figuring out what "rarely" means so that there are not continual resends.
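            Since that decay idea was never implemented, the following is purely a toy sketch of one possible meaning of "rarely": grow the default reply size when a reply overflows (forcing a resend), and shrink it back only after a long quiet period, so resends stay occasional instead of continual. All names and the 5-minute window are invented.

            #include <linux/jiffies.h>
            #include <linux/types.h>

            #define SMALL_EA_SIZE   1024            /* hypothetical small default */
            #define DECAY_INTERVAL  (300 * HZ)      /* "rarely" == no big reply for 5 min */

            struct ea_size_state {
                    unsigned int  cur_ea_size;      /* current default reply buffer size */
                    unsigned long last_big_reply;   /* jiffies when a big layout was last seen */
            };

            /* Called when sizing the next request's reply buffer. */
            static unsigned int ea_size_for_request(struct ea_size_state *st)
            {
                    if (st->cur_ea_size > SMALL_EA_SIZE &&
                        time_after(jiffies, st->last_big_reply + DECAY_INTERVAL))
                            st->cur_ea_size = SMALL_EA_SIZE;        /* decay back down */
                    return st->cur_ea_size;
            }

            /* Called after a reply overflowed or returned a wide-stripe layout:
             * remember the size we needed and when we needed it. */
            static void ea_size_note_big_reply(struct ea_size_state *st, unsigned int size)
            {
                    if (size > st->cur_ea_size)
                            st->cur_ea_size = size;
                    st->last_big_reply = jiffies;
            }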

            People

              Assignee: Yang Sheng
              Reporter: Matt Ezell
              Votes: 0
              Watchers: 12
