Details

    • New Feature
    • Resolution: Fixed
    • Minor
    • Lustre 2.6.0
    • None
    • 8963

    Description

      NRS (Network Request Scheduler) enables services to schedule RPCs in different ways, and a number of policies have already been implemented on top of the main framework. Most of them aim at improving throughput or similar goals. We are trying to implement policies for a different kind of purpose: QoS.

      The TBF (Token Bucket Filter) is one of the policies that we implemented for traffic control. It enforces an RPC rate limit on every client according to its NID. The handling of an RPC is delayed until there are enough tokens for the client. Different clients are scheduled according to their deadlines, so that none of them starves even if the service cannot satisfy the RPC rate requirements of all clients. RPCs from the same client are queued in a FIFO manner.

      Early tests show that the policy enforces the RPC rate limit as intended, but more tests, benchmarks, and analyses are needed to confirm its correctness and efficiency.
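      As an illustration of how the policy is driven, the commands below sketch how an NID-based TBF rule might be set up on the OSS ost_io service via lctl set_param. The nrs_policies/nrs_tbf_rule parameter paths follow the NRS framework, but the exact rule syntax, the rule name "loginnode", and the NID shown are placeholders for this sketch and should be checked against the landed patch and the manual for the release in use.

      # Switch the ost_io service to the TBF policy with NID-based classification
      # (assumed syntax; verify against the landed patch or the manual).
      lctl set_param ost.OSS.ost_io.nrs_policies="tbf nid"

      # Start a rule limiting the given client NID to 50 RPCs per second,
      # then raise the limit, and finally remove the rule.
      lctl set_param ost.OSS.ost_io.nrs_tbf_rule="start loginnode {192.168.1.1@tcp} 50"
      lctl set_param ost.OSS.ost_io.nrs_tbf_rule="change loginnode 200"
      lctl set_param ost.OSS.ost_io.nrs_tbf_rule="stop loginnode"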


          Activity

            [LU-3558] NRS TBF policy for QoS purposes

            Can this ticket now be closed since we have LUDOC-221 to track the manual updates?

            jlevi Jodi Levi (Inactive) added a comment

            I filed LUDOC-221 to track the documentation update for the TBF feature.

            adilger Andreas Dilger added a comment

            No problem! We will submit a manual update soon. Thanks!

            lixi Li Xi (Inactive) added a comment

            The patch http://review.whamcloud.com/6901 was landed to master for 2.6.

            This functionality also needs an update to the manual to explain what this feature does, and how to use it. Please see https://wiki.hpdd.intel.com/display/PUB/Making+changes+to+the+Lustre+Manual. Please submit an LUDOC jira ticket to track the manual update, and link it here.

            adilger Andreas Dilger added a comment - edited

            OK, thanks! We hope people can get the QoS functionality in Lustre sooner, and we want it as well!

            ihara Shuichi Ihara (Inactive) added a comment

            No disagreement from me! But I'm not a technical person - I just like the capability that TBF provides.

            So we'll have to get technical people to review this.

            Thanks!

            laytonjb Jeff Layton (Inactive) added a comment

            Hmm.. our question is: why hasn't this been included in 2.6, or even 2.5.1, yet? In the original discussion with Peter, this was not considered a core component of Lustre, so it could have landed in 2.5 or even 2.4.x, but the review didn't finish before the 2.5 release.
            After that we got multiple inspection passes from multiple people, but the patch had to be rebased again and again, and then it needed review again.

            I would request that this be reviewed quickly again; we want to land it in 2.6 and 2.5.1...

            ihara Shuichi Ihara (Inactive) added a comment

            It's been a few months since the last entry. I wanted to ask whether this idea/patch is worth further work for inclusion in 2.7. Thanks!

            laytonjb Jeff Layton (Inactive) added a comment

            I believe your analysis of case #6 is correct - the client only has a limited number of RPCs in flight for each OST (see also the LNET "peer credits" tunable). If the u500 IOs are blocked behind the u0 IOs, they will be limited by the slower process. This may not be strictly related to the RPCs themselves, but rather to the higher-level RPC engine that is trying to balance IO submission between objects and doesn't know about the NRS ordering on the server.

            The first question is whether this is a use case that is important for real users. I'm not sure if there is an easy solution for handling this on the server side.

            adilger Andreas Dilger added a comment
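            To check whether these client-side limits are the bottleneck in case #6, something along the following lines can be used. The osc.*.max_rpcs_in_flight parameter and the LND peer_credits module option are real tunables, but the values shown here are only illustrative assumptions, not recommendations.

            # Inspect and (tentatively) raise the per-OSC limit on concurrent RPCs.
            lctl get_param osc.*.max_rpcs_in_flight
            lctl set_param osc.*.max_rpcs_in_flight=32

            # LNet peer credits are a module parameter of the LND (e.g. o2iblnd),
            # set in /etc/modprobe.d/lustre.conf and applied at module load time.
            options ko2iblnd peer_credits=8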

            The NID-based TBF policy works well, but we found a problem with the JobID-based TBF policy and have to ask for help.

            The JobID-based TBF policy classifies RPCs according to the job stats information carried by each RPC. The simplest JobID setting is 'procname_uid', which can be enabled with 'lctl conf_param server1.sys.jobid_var=procname_uid'. With the TBF policy we can set rate limits for different kinds of RPCs. We set the RPC rate of 'dd.0' to 1 RPC/s and the RPC rate of 'dd.500' to 1000 RPC/s. If the TBF policy works correctly, then when the root user runs 'dd' an OSS service partition should never handle more than 1 RPC/s from it, and when user 500 runs 'dd' an OSS service partition should never handle more than 1000 RPC/s from it. This works well except under the following condition.

            When we run 'dd' as user root and user 500 at the same time, on the same client, writing to the same OST, the performance of user 500 declines dramatically; i.e. the performance of user 500 is heavily affected by user root.

            Here are the results we got running the following command:
            dd if=/dev/zero of=/mnt/lustre/fileX bs=1048576 count=XXXX

            1. When user 500 ran 'dd' alone, the performance was about 80 MB/s. This is normal, because the OSS's performance has an upper limit of about 80 MB/s.

            2. When user root ran 'dd' alone, the performance was about 2 MB/s. This is normal too, because the OSS has two service partitions and each has a limit of 1 RPC/s: 1 MB/RPC * 1 RPC/s * 2 = 2 MB/s.

            3. When user root ran 'dd' and user 500 ran 'dd' on another client, user 500 got about 80 MB/s and user root got about 2 MB/s. Please note that the different processes write to different files; no matter what the stripes of the files are, we get similar results. These are the expected, normal results.

            4. When user root ran 'dd' and user 500 ran 'dd' on another client, user 500 got about 80 MB/s and user root got 2 MB/s. That's normal too.

            5. When user root ran 'dd' and user 500 ran 'dd' on the same client, but they wrote to different OSTs (i.e. the stripe indexes of the files were different), user 500 got about 80 MB/s and user root got 2 MB/s. That's normal too.

            6. When user root ran 'dd' and user 500 ran 'dd' on the same client, and they wrote to the same OST (i.e. the stripe indexes of the files were the same), the performance of user 500 declined to about 2 MB/s while user root was writing. The performance of user 500 went back up to 80 MB/s immediately after user root finished writing.

            Result 6 is really strange. We think it is unlikely that server-side code causes the problem, since result 4 is normal, and result 5 implies that it is the OSC rather than the OSS that is throttling the RPC rate wrongly. Maybe when some RPCs from an OSC are still pending, the OSC does not send any more RPCs? I guess some OSC mechanism makes it work like this, e.g. the max-RPCs-in-flight limit? I've tried enlarging the max_rpcs_in_flight parameter of the OSCs, but with no luck.

            Any suggestions you could provide to us would be greatly appreciated! Thank you in advance!

            lixi Li Xi (Inactive) added a comment
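            For reference, here is a sketch of the sort of configuration the experiment above describes, using JobID-based TBF with procname_uid. The rule names dd_root and dd_user500 are placeholders, and the exact nrs_tbf_rule syntax is an assumption to be checked against the landed patch; only the conf_param line is taken directly from the comment above.

            # Use procname_uid as the JobID, as in the comment above.
            lctl conf_param server1.sys.jobid_var=procname_uid

            # Enable JobID-based TBF on the OSS ost_io service (assumed syntax).
            lctl set_param ost.OSS.ost_io.nrs_policies="tbf jobid"

            # Limit root's dd to 1 RPC/s and user 500's dd to 1000 RPC/s.
            lctl set_param ost.OSS.ost_io.nrs_tbf_rule="start dd_root {dd.0} 1"
            lctl set_param ost.OSS.ost_io.nrs_tbf_rule="start dd_user500 {dd.500} 1000"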

            Would it be possible to get the fix-version set to 2.5? We want to make sure it doesn't slip off the radar at the last minute or anything. I think that's the procedure we talked about on the CDWG call last week.

            Thanks.

            kitwestneat Kit Westneat (Inactive) added a comment

            People

              laisiyao Lai Siyao
              lixi Li Xi (Inactive)
              Votes: 0
              Watchers: 13
