LU-6529

Server side lock limits to avoid unnecessary memory exhaustion

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version/s: Lustre 2.8.0
    • Labels: None
    • Severity: 3
    • 9223372036854775807

    Description

      As seen in tickets like LU-5727, we currently rely almost entirely on the good aggregate behavior of the Lustre clients to avoid memory exhaustion on the MDS (and no doubt other servers as well).

      We need the servers, the MDS in particular, to instead limit ldlm lock usage to something reasonable on their own, to avoid OOM conditions. It is not good design to leave the MDS's memory usage entirely up to very careful administrative tuning of ldlm lock limits across all of the client nodes.

      Consider that some sites have many thousands of clients across many clusters, where such careful balancing and coordinated client limits may be difficult to achieve. Consider also WAN usage, where some clients might not even reside at the same organization as the servers. Consider also bugs in the client, again like LU-5727.

      See also the attached graph showing MDS memory usage. Clearly the ldlm lock usage grows without bound, and other parts of the kernel memory usage are put under undue pressure. 70+ GiB of ldlm lock usage is not terribly reasonable for our setup.

      Some might argue that the SLV code needs to be fixed, and I have no argument against pursuing that work. That could certainly be worked in some other ticket.

      But even if SLV is fixed, we still require enforcement of good memory usage on the server side. There will always be client bugs or misconfiguration on clients and the server OOMing is not a reasonable response to those issues.

      I would propose a configurable hard limit on the number of locks (or space used by locks) on the server side.

      I am open to other solutions, of course.


          Activity


            Another bug: The proc files accept negative values. Negative values should be rejected.
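
            For illustration, a minimal sketch of the kind of validation being asked for, assuming a classic proc write handler; the function and variable names here are hypothetical, not the actual Lustre ones. Parsing into a signed type lets the handler reject negatives before anything is stored:

                #include <linux/fs.h>
                #include <linux/kernel.h>
                #include <linux/uaccess.h>

                static unsigned long reclaim_threshold_mb;      /* hypothetical tunable */

                static ssize_t reclaim_threshold_write(struct file *file,
                                                       const char __user *buffer,
                                                       size_t count, loff_t *off)
                {
                        char kbuf[24];
                        long val;

                        if (count >= sizeof(kbuf))
                                return -EINVAL;
                        if (copy_from_user(kbuf, buffer, count))
                                return -EFAULT;
                        kbuf[count] = '\0';

                        if (kstrtol(kbuf, 10, &val))
                                return -EINVAL;
                        if (val < 0)            /* reject negative values outright */
                                return -ERANGE;

                        reclaim_threshold_mb = val;
                        return count;
                }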

            morrone Christopher Morrone (Inactive) added a comment -

            Niu Yawei (yawei.niu@intel.com) uploaded a new patch: http://review.whamcloud.com/16123
            Subject: LU-6529 ldlm: improve proc interface of lock reclaim
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: cd15ce613f958e14fd8c8a01a97cdd67cb17e249

            gerrit Gerrit Updater added a comment -

            I agree that the watermark numbers are confusing (the numbers written are not equal to the numbers read back); I'll change it, and I'm fine with changing the names as you suggested.

            > For the case that we are talking about here in this ticket, I disagree rather strongly with the claim that letting the clients decide on the lock reclaim is "ideal". It would only be reasonable to let the clients decide when the server is not under memory pressure. This entire ticket is explicitly about dealing with the situation where the server is under memory pressure.

            Maybe the wording of the comment isn't accurate. What I wanted to express is that choosing which locks to reclaim should be done by the client (because the client knows better which locks are not used); the server, of course, should decide when to start lock reclaim (because the server knows the memory pressure). It's not ideal, but I think it's the best we can do so far. Anyway, given that it's misleading, I'm going to remove it.

            > Ideal would be to integrate with the kernel shrinker instead of having these thresholds. When kernel pressure is high enough, using additional memory to send messages to the clients and then waiting a potentially very long time for a response is simply not reasonable. The server will always need to be able to summarily discard locks.

            I think that's exactly why we should always have an upper threshold: it's the safeguard that keeps the server out of the situation you mentioned (under severe memory pressure, without enough memory left to revoke locks). It may not be enough, and we can add other restrictions later if necessary. The lower threshold may be removed if we integrate with the kernel shrinker in the future (letting the shrinker take over in non-urgent situations).

            Discarding locks means some clients have to be evicted, and I think that's the case we must try to avoid (by other means, setting the upper limit for instance).

            niu Niu Yawei (Inactive) added a comment -

            I am reopening this ticket, because I think there are a few things about the patch that really should be addressed before Lustre 2.8 is released.

            First of all, the number that one writes to the watermark files is not necessarily the number that one reads back. This is going to be very confusing for users. The problem seems to be some kind of rounding error, where the code winds up rounding down to the closest multiple of 1 MiB. Especially confusing is to write "1M" to the file and then read back "0". "0" means that the threshold is disabled, so that is really not acceptable.

            Instead of doing multiple steps of math (including do_div) on the number supplied by the user, the value needs to be recorded unmolested and supplied back when the user reads from the file.
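
            A minimal sketch of what "recorded unmolested" could look like (names hypothetical): store the MB value exactly as written, and convert to bytes only at the point of use, so the read-back always matches the write.

                #include <linux/seq_file.h>
                #include <linux/types.h>

                static unsigned long lock_limit_mb;     /* kept exactly as the user wrote it */

                /* convert at the point of use; the stored value is never rounded */
                static inline u64 lock_limit_bytes(void)
                {
                        return (u64)lock_limit_mb << 20;
                }

                static int lock_limit_show(struct seq_file *m, void *v)
                {
                        seq_printf(m, "%lu\n", lock_limit_mb); /* reads back what was written */
                        return 0;
                }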

            Next, it was pointed out to me that "watermark_mb_low" is not a terribly good name. That implies that the number can't go any lower. Perhaps better names for these files would be something like:

            • lock_limit_mb
            • reclaim_threshold_mb

            I think that might be clearer to users that "reclaim_threshold_mb" is the point where reclaim kicks in, and "lock_limit_mb" is the limit beyond which no further locks will be permitted.
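
            To make those semantics concrete, here is a hedged sketch of how the two tunables could interact on the grant path, reusing the hypothetical lock_limit_mb and reclaim_threshold_mb from the sketches above; ldlm_reclaim_start() is a hypothetical stand-in for whatever kicks off reclaim. A value of 0 disables the corresponding check, matching the "0 means disabled" behavior described above.

                extern void ldlm_reclaim_start(void);   /* hypothetical helper */

                static int ldlm_check_lock_memory(u64 granted_bytes)
                {
                        u64 limit = (u64)lock_limit_mb << 20;
                        u64 threshold = (u64)reclaim_threshold_mb << 20;

                        /* hard limit: no further locks are permitted beyond this point */
                        if (lock_limit_mb != 0 && granted_bytes >= limit)
                                return -ENOMEM;

                        /* soft threshold: reclaim kicks in, but grants continue */
                        if (reclaim_threshold_mb != 0 && granted_bytes >= threshold)
                                ldlm_reclaim_start();

                        return 0;
                }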

            Next there is this comment, which I think is misleading:

            /*
             * FIXME:
             *
             * In current implementation, server identifies which locks should be
             * revoked by choosing locks from namespace/resource in a roundrobin
             * manner, which isn't optimal. The ideal way should be server notifies
             * clients to cancel locks voluntarily, because only client knows exactly
             * when the lock is last used.
             */

            For the case that we are talking about here in this ticket, I disagree rather strongly with the claim that letting the clients decide on the lock reclaim is "ideal". It would only be reasonable to let the clients decide when the server is not under memory pressure. This entire ticket is explicitly about dealing with the situation where the server is under memory pressure.

            Ideal would be to integrate with the kernel shrinker instead of having these thresholds. When kernel pressure is high enough, using additional memory to send messages to the clients and then waiting a potentially very long time for a response is simply not reasonable. The server will always need to be able to summarily discard locks.
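
            For reference, a shrinker-based version might look roughly like the sketch below, using the kernel's standard struct shrinker interface; the two ldlm_server_* helpers are hypothetical placeholders for counting and summarily discarding server-side locks.

                #include <linux/shrinker.h>

                extern unsigned long ldlm_server_lock_count(void);              /* hypothetical */
                extern unsigned long ldlm_server_discard_locks(unsigned long);  /* hypothetical */

                static unsigned long ldlm_srv_count(struct shrinker *s,
                                                    struct shrink_control *sc)
                {
                        return ldlm_server_lock_count();
                }

                static unsigned long ldlm_srv_scan(struct shrinker *s,
                                                   struct shrink_control *sc)
                {
                        /* free memory directly; no client round-trip under pressure */
                        return ldlm_server_discard_locks(sc->nr_to_scan);
                }

                static struct shrinker ldlm_srv_shrinker = {
                        .count_objects  = ldlm_srv_count,
                        .scan_objects   = ldlm_srv_scan,
                        .seeks          = DEFAULT_SEEKS,
                };

                /* at server startup: register_shrinker(&ldlm_srv_shrinker); */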

            morrone Christopher Morrone (Inactive) added a comment -
            pjones Peter Jones added a comment -

            Landed for 2.8


            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/14931/
            Subject: LU-6529 ldlm: reclaim granted locks defensively
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: fe60e0135ee2334440247cde167b707b223cf11d

            gerrit Gerrit Updater added a comment -

            Niu Yawei (yawei.niu@intel.com) uploaded a new patch: http://review.whamcloud.com/14931
            Subject: LU-6529 ldlm: reclaim granted locks defensively
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 3e12dd88fd850ab2afe4b0e6215cae7bea136729

            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/14856/
            Subject: LU-6529 ldlm: cancel aged locks for LRUR
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 0356990876308f811cc6c61c22946a1cd73e5c23

            gerrit Gerrit Updater added a comment -

            There will be two patches for the short-term solution; the first one has been posted. I'm working on the second one: most of the coding is done, and I will push it soon.

            niu Niu Yawei (Inactive) added a comment -

            Niu Yawei (yawei.niu@intel.com) uploaded a new patch: http://review.whamcloud.com/14856
            Subject: LU-6529 ldlm: cancel aged locks for LRUR
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: b97f9efbb700f04b9a355e6ac4cbfc751d8ab411

            gerrit Gerrit Updater added a comment -
            niu Niu Yawei (Inactive) added a comment - edited

            > Niu, I fully agree that having a client-side max lock age is very useful, and there already is such a thing - lru_max_age. One problem is that the default age is 10h, which is much too long. It should be at most 65min - just long enough to keep files around for checkpoints every hour. If that age is not being used in the LRUR policy, then it should be.

            I agree, 10h is too long, and currently it's only used for the AGED policy (when lru_resize is not enabled). I'll add it for LRUR.
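
            A hedged sketch of what that LRUR age check might look like (all names here are hypothetical, not the actual Lustre ones): a lock older than lru_max_age is cancelled regardless of what the lru_resize calculation would otherwise decide.

                #include <linux/jiffies.h>

                extern int lrur_lv_says_cancel(void);   /* hypothetical: normal LRUR decision */

                static int lrur_should_cancel(unsigned long last_used_jiffies,
                                              unsigned long lru_max_age)
                {
                        /* time_after_eq() handles jiffies wraparound correctly */
                        if (time_after_eq(jiffies, last_used_jiffies + lru_max_age))
                                return 1;       /* aged out: cancel unconditionally */

                        return lrur_lv_says_cancel();
                }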

            > Niu, as for the LDLM pool notification, I think a DLM blocking callback on some "NULL" lock (e.g. resource 0,0,0,0) with a count and/or LV might be easily implemented as a mechanism to indicate to a client to "cancel some lock(s)", but this would need a protocol change.

            Yes, I think we may need this in the long-term solution.
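
            Purely as an illustration of the idea quoted above (nothing here exists in the Lustre wire protocol; every name is hypothetical), the payload of such a "NULL lock" blocking callback might carry a count and a lock volume hint:

                #include <linux/types.h>

                struct ldlm_reclaim_hint {
                        __u64   lrh_resource[4];        /* the all-zero resource {0, 0, 0, 0} */
                        __u32   lrh_cancel_count;       /* how many locks the client should drop */
                        __u32   lrh_padding;            /* keep 64-bit alignment on the wire */
                        __u64   lrh_slv;                /* optional server lock volume hint */
                };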

            > I am not sure that the SLV concept can ever really work if it only adjusts SLV as a result of calls from the shrinker. By the time that the kernel knows that it needs to reclaim memory it is too late to impose soft, voluntary lock reductions on the clients. When the kernel needs memory we need to take the most direct actions that we can to free the memory as soon as possible.

            Totally agreed; that's why I want to decouple the SLV mechanism from the shrinker.

            > For SLV to work reasonably at all, it would likely need to be actively monitoring system memory usage and adjusting the value to constrain aggregate lock usage well in advance of the shrinker demanding memory.

            Yes, I think that was the design goal of SLV, but it looks like it didn't work as expected.

            Now it looks like the conclusions are:

            Short-term solution:

            • Add a tunable limit on the server side; identify unused locks and revoke them when the lock count reaches the limit.
            • Add a maximum lock age on the client side for the LRUR policy.

            Long-term solution:

            • Add proc entries to monitor LV changes and statistics of locks cancelled by LRUR.
            • Identify problems in the LV mechanism and fix them.
            • Change the server lock shrinker to notify clients instantly (no SLV involved).

            Let's start with the short-term solution.


            People

              Assignee: niu Niu Yawei (Inactive)
              Reporter: morrone Christopher Morrone (Inactive)
              Votes: 0
              Watchers: 15
