[LU-3558] NRS TBF policy for QoS purposes Created: 05/Jul/13  Updated: 26/Sep/18  Resolved: 07/Mar/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: Lustre 2.6.0

Type: New Feature Priority: Minor
Reporter: Li Xi (Inactive) Assignee: Lai Siyao
Resolution: Fixed Votes: 0
Labels: patch, ptr

Attachments: Microsoft Word NRS-initial-test-result.xlsx     PDF File TBF-design-1.0.pdf    
Issue Links:
Blocker
is blocking LU-11431 Global QoS management based on TBF Closed
Related
is related to LUDOC-328 documentation updates for complex TBF... Open
is related to LU-8008 Can't enable or add rules to TBF Resolved
is related to LU-5620 nrs tbf policy based on opcode Resolved
is related to LU-7470 Extend TBF policy with NID/JobID expr... Resolved
is related to LU-9228 Hard TBF Token Compensation under con... Resolved
is related to LU-3266 Regression tests for NRS policies Resolved
is related to LU-8006 Specify ordering of TBF policy rules Resolved
is related to LU-8236 Wild-card in jobid TBF rule Resolved
is related to LU-5717 Dead lock of nrs_tbf_timer_cb Resolved
is related to LU-6668 Add tests for TBF Resolved
is related to LU-5379 Get error when has many rules in nrs ... Resolved
is related to LU-9227 Changing rate of a TBF rule loses con... Resolved
is related to LU-5580 Switch between 'JOBID' and 'NID' dire... Resolved
is related to LUDOC-221 Document Token Bucket Filter (TBF) NR... Closed
is related to LU-4586 build failure in nrs_tbf_ctl() Resolved
Rank (Obsolete): 8963

 Description   

NRS (Network Request Scheduler) enables the services to schedule RPCs in different manners, and a number of policies have already been implemented on top of the main framework. Most of them aim at improving throughput or similar goals. We are trying to implement policies for a different kind of purpose: QoS.

The TBF (Token Bucket Filter) is one of the policies that we implemented for traffic control. It enforces an RPC rate limit on every client according to its NID. The handling of an RPC is delayed until there are enough tokens for the client. Different clients are scheduled according to their deadlines, so that none of them starves even when the service cannot satisfy the RPC rate requirements of all clients. RPCs from the same client are queued in a FIFO manner.
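A rough sketch of the per-client token bucket check is given below (illustrative C only; the struct and function names are made up and the real code in the patch is organized differently):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

/* Hypothetical, simplified per-NID state; the actual patch keeps
 * comparable fields per client inside the ptlrpc NRS framework. */
struct tbf_client {
	uint64_t rate;    /* allowed RPCs per second */
	uint64_t depth;   /* bucket depth: maximum tokens that may accumulate */
	uint64_t tokens;  /* tokens currently available */
	uint64_t last_ns; /* time of the last refill, in nanoseconds */
};

/* Return true if an RPC from this client may be handled now; otherwise the
 * RPC stays queued (FIFO per client) until a new token is earned. */
static bool tbf_client_may_dispatch(struct tbf_client *cli, uint64_t now_ns)
{
	uint64_t new_tokens = (now_ns - cli->last_ns) * cli->rate / NSEC_PER_SEC;

	if (new_tokens > 0) {
		cli->tokens += new_tokens;
		if (cli->tokens > cli->depth)
			cli->tokens = cli->depth;
		cli->last_ns = now_ns;
	}
	if (cli->tokens > 0) {
		cli->tokens--;
		return true;
	}
	return false;
}

int main(void)
{
	struct tbf_client cli = { .rate = 10, .depth = 4, .tokens = 4, .last_ns = 0 };
	int i;

	/* The bucket starts full, so the first 4 RPCs at t=0 pass at once;
	 * the remainder must wait until new tokens are earned. */
	for (i = 0; i < 6; i++)
		printf("RPC %d at t=0: %s\n", i,
		       tbf_client_may_dispatch(&cli, 0) ? "dispatch" : "delay");
	return 0;
}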

Early tests show that the policy works to enforce the RPC rate limit, but more tests, benchmarks and analyses are needed to confirm its correctness and efficiency.



 Comments   
Comment by Li Xi (Inactive) [ 05/Jul/13 ]

Here is the patch.
http://review.whamcloud.com/#/c/6901/

Comment by Andreas Dilger [ 10/Jul/13 ]

Description of Token Bucket Filter - http://en.wikipedia.org/wiki/Token_bucket_filter

It would also be useful to test TBF in conjunction with the ORR NRS policy, so that RPCs from clients are sorted before IO and have a better chance of reaching the backing storage in a more optimal order.

Before this can be landed, there will need to be a much better description of how this policy is used, and the performance results. As well, an update is needed for the Lustre Manual with details of how to use the policy and set limits for the NIDs.

Comment by Peter Jones [ 10/Jul/13 ]

Lai

Could you please review the supplied patch and offer advice as appropriate

Thanks

Peter

Comment by Nathan Rutman [ 10/Jul/13 ]

Cool!
How does the deadline scheduling interact with the tokens in the case of a conflict? I.e. what happens if there are not enough tokens yet the deadline is imminent?

Andreas, I just realized a possible side effect of any NRS policy is that it may oddly affect adaptive timeouts by skewing the measured RPC processing time to the maximum delay induced by the policy. I suppose the worst fallout from this would be slower recovery, so maybe not so horrible.

Comment by Li Xi (Inactive) [ 12/Jul/13 ]

Hi Andreas,

The policy throttles RPCs based on the TBF algorithm, but it schedules the handling of RPCs from different NIDs according to their deadlines, so it looks more like the CRR-N policy than the ORR policy. Yes, we are trying to change the policy in order to limit the RPC rate of different users/groups/jobs, but I haven't got any idea about how to combine it with the ORR NRS policy. Any advice?

Sure, we are running benchmarks and writing a description of it. The test results and documents will come along with the code improvements soon.

Comment by Li Xi (Inactive) [ 12/Jul/13 ]

Hi Nathan,

The current code does not consider the deadline of an RPC yet. But yeah, I think more cases should be tested to make sure it is not a big problem.

Comment by Nathan Rutman [ 12/Jul/13 ]

it schedules the handling of RPCs from different NIDs according to their deadlines

the current code does not consider the deadline of an RPC

I am having trouble reconciling those two statements. Does the first refer to a different kind of deadline?

Comment by Andreas Dilger [ 13/Jul/13 ]

My preference for the long term is that we have a single "super" NRS policy that does many things at one time, or we use layers of NRS policies at the same time to achieve the optimum result.

In the case of TBF it would be possible to have it provide QoS guarantees by giving out credits to exports in an unfair manner, so clients with guaranteed bandwidth will get it, and clients with bandwidth limits will be throttled as needed.

Combining TBF and ORR seems possible, for example by using TBF as a first-layer filter to avoid massive client unfairness, passing the "unthrottled" requests through to the ORR filter to sort them within and between objects.

Comment by Li Xi (Inactive) [ 13/Jul/13 ]

Hi Nathan,

Yes, you are right. The deadlines in the two statements mean different things.

The former refers to the time at which one of the NID's RPCs should be handled in order to achieve that NID's assigned RPC rate. If the deadline is missed, nothing bad happens, except that the RPC rate is slower than the NID expects.

The latter refers to the traditional deadline of an RPC.

Sorry for the confusing terminology.
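To make the first kind of deadline concrete, here is a rough sketch (hypothetical helper name, not code from the patch): a client last served at time t and limited to rate RPC/s earns its next token at t + 1/rate, and the policy dispatches whichever queued client has the earliest such deadline.

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

/* Earliest time at which a client limited to `rate` RPC/s, last served at
 * `last_served_ns`, earns its next token.  Clients are then served in order
 * of this deadline so that no NID starves; missing the deadline only means
 * the NID gets a lower RPC rate than configured. */
static uint64_t tbf_next_deadline_ns(uint64_t last_served_ns, uint64_t rate)
{
	return last_served_ns + NSEC_PER_SEC / rate;
}

int main(void)
{
	/* A NID limited to 10 RPC/s and last served at t = 0 may be served
	 * again at t = 100 ms. */
	printf("next deadline: %llu ns\n",
	       (unsigned long long)tbf_next_deadline_ns(0, 10));
	return 0;
}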

Comment by Li Xi (Inactive) [ 13/Jul/13 ]

Hi Andreas,

That sounds really interesting. But in order to implement multi-layered policies, I guess the NRS framework would need a lot of changes, right?

Comment by Shuichi Ihara (Inactive) [ 14/Jul/13 ]

These are the initial benchmark results we gathered a couple of weeks ago, tested on 16 clients with various token policies applied to a few clients. We will test again on the latest code and post the results here later.

Comment by Shuichi Ihara (Inactive) [ 29/Jul/13 ]

This is the initial version of the Lustre NRS TBF design document.

Comment by Li Xi (Inactive) [ 16/Aug/13 ]

The current Lustre NRS TBF code has changed a lot compared with the last version. First, we reworked the main framework of the TBF policy, which makes it easier to add new types of support, e.g. Job ID support or UID/GID support. Based on that, we added Job ID support, so we can now use the job stats mechanism to set limits on different jobs. And of course, we fixed some defects in the TBF code too.

Comment by Kit Westneat (Inactive) [ 21/Aug/13 ]

Would it be possible to get the fix-version set to 2.5? We want to make sure it doesn't slip off the radar at the last minute or anything. I think that's the procedure we talked about on the CDWG call last week.

Thanks.

Comment by Li Xi (Inactive) [ 04/Sep/13 ]

The NID-based TBF policy works well, but we found a problem with the JobID-based TBF policy and have to ask for help.

The JobID-based TBF policy classifies RPCs according to the job stats information of each RPC. The simplest job stats setting is 'procname_uid', which can be enabled with 'lctl conf_param server1.sys.jobid_var=procname_uid'. With the TBF policy, we are able to set rate limits for different kinds of RPCs. We set the RPC rate of 'dd.0' to 1 RPC/s and the RPC rate of 'dd.500' to 1000 RPC/s. If the TBF policy works well, then when the root user runs 'dd', an OSS service partition will never handle more than 1 RPC/s from it, and when user 500 runs 'dd', an OSS service partition will never handle more than 1000 RPC/s from it. This works well except under the following condition.
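To make the classification concrete, here is an illustrative sketch (the names and the lookup table are hypothetical, not the patch's actual data structures): with procname_uid, the job identifier is simply "<command>.<uid>", and a JobID TBF rule maps that string to an RPC rate.

#include <stdio.h>
#include <string.h>

/* Hypothetical rule table: the two rules described in this comment. */
struct tbf_jobid_rule {
	const char *jobid; /* e.g. "dd.0" */
	unsigned    rate;  /* allowed RPCs per second */
};

static const struct tbf_jobid_rule rules[] = {
	{ "dd.0",   1    }, /* throttle root's dd to 1 RPC/s      */
	{ "dd.500", 1000 }, /* allow uid 500's dd up to 1000 RPC/s */
};

/* Build the procname_uid key for an RPC and look up its rate; the default
 * rate applies when no rule matches. */
static unsigned tbf_jobid_rate(const char *procname, unsigned uid,
			       unsigned default_rate)
{
	char jobid[64];
	size_t i;

	snprintf(jobid, sizeof(jobid), "%s.%u", procname, uid);
	for (i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
		if (strcmp(rules[i].jobid, jobid) == 0)
			return rules[i].rate;
	return default_rate;
}

int main(void)
{
	printf("dd.0   -> %u RPC/s\n", tbf_jobid_rate("dd", 0, 10000));
	printf("dd.500 -> %u RPC/s\n", tbf_jobid_rate("dd", 500, 10000));
	return 0;
}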

When we run 'dd' as user root and as user 500 at the same time, on the same client, writing to the same OST, the performance of user 500 declines dramatically, i.e. user 500's performance is heavily affected by user root.

Here are the results we got running the following command.
dd if=/dev/zero of=/mnt/lustre/fileX bs=1048576 count=XXXX

1. When user 500 ran 'dd' alone, the performance was about 80 MB/s. This is normal because the OSS has an upper performance limit of about 80 MB/s.

2. When user root ran 'dd' alone, the performance was about 2 MB/s. This is normal too, because the OSS has two service partitions and each has a limit of 1 RPC/s: 1 MB/RPC * 1 RPC/s * 2 = 2 MB/s.

3. When user root ran 'dd', and user 500 ran 'dd' on another client, user 500 got about 80 MB/s and user root got about 2 MB/s. Please note that the different processes write to different files; no matter how the files are striped, we get similar results. These are the expected, normal results.

4. When user root ran 'dd', and user 500 ran 'dd' on another client, user 500 got about 80 MB/s and user root got 2 MB/s. That's normal too.

5. When user root ran 'dd', and user 500 ran 'dd' on the same client, but they wrote to different OSTs (i.e. the stripe indexes of the files are different), user 500 got about 80 MB/s and user root got 2 MB/s. That's normal too.

6. When user root ran 'dd', and user 500 ran 'dd' on the same client, and they wrote to the same OST (i.e. the stripe indexes of the files are the same), the performance of user 500 declined to about 2 MB/s while user root was writing. The performance of user 500 went back up to 80 MB/s immediately after user root completed its writing.

Result 6 is really strange. We think it is unlikely that the server-side code causes the problem, since result 4 is normal. And result 5 implies that it is the OSC rather than the OSS that throttles the RPC rate wrongly. Maybe when some RPCs from an OSC are pending, the OSC does not send any more RPCs? I guess some mechanism of the OSC makes it work like this, e.g. the max-RPCs-in-flight limit? I've tried to enlarge the max_rpcs_in_flight parameter of the OSCs but had no luck.

Any suggestions you could provide to us would be greatly appreciated! Thank you in advance!

Comment by Andreas Dilger [ 04/Sep/13 ]

I believe your analysis of case #6 is correct - the client only has a limited number of RPCs in flight for each target (see also the LNET "peer credits" tunable). If the u500 IOs are blocked behind the u0 IOs, they will be limited by the slower process. This may not be strictly related to the RPCs themselves, but rather to the higher-level RPC engine that is trying to balance IO submission between objects and doesn't know about the NRS ordering on the server.
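A back-of-the-envelope model of case #6 under that explanation (illustrative only; the RPC size and the idea of a shared, slot-limited window per OST are assumptions taken from the test description above): once the shared window to the OST is occupied by throttled u0 writes, u500 can only send as fast as the server releases slots, which is roughly the throttled rate.

#include <stdio.h>

int main(void)
{
	/* Assumed values from the test scenario above, not measured here. */
	double throttled_rate = 1.0; /* u0 limit: 1 RPC/s per OSS partition */
	double partitions     = 2.0; /* the OSS has two service partitions  */
	double mb_per_rpc     = 1.0; /* 1 MB bulk write per RPC             */

	/* If the shared RPC window to the OST is full of throttled u0 RPCs,
	 * slots free at the throttled rate, so u500 is bound to roughly: */
	printf("u500 while sharing the window with u0: ~%.0f MB/s\n",
	       throttled_rate * partitions * mb_per_rpc);
	return 0;
}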

The first question is whether this use case is important for real users. I'm not sure there is an easy solution for handling this on the server side.

Comment by Jeff Layton (Inactive) [ 08/Jan/14 ]

It's been a few months since the last entry. I wanted to ask if this idea/patch is worthy of further work for inclusion in 2.7? Thanks!

Comment by Shuichi Ihara (Inactive) [ 08/Jan/14 ]

Hmm.. the question from us is: why is this not included in 2.6, or even 2.5.1, yet? In the original discussion with Peter, since this is not a core component of Lustre, it could have landed in 2.5 or even 2.4.x, but the review didn't finish before the 2.5 release. After that, we got multiple inspection passes from multiple people, but the patch had to be rebased again and again, and then it needed review again.

I would request that this be reviewed quickly again; we want to land it in 2.6 and 2.5.1...

Comment by Jeff Layton (Inactive) [ 08/Jan/14 ]

No disagreement from me! But I'm not a technical person - I just like the capability that TBF provides.

So we'll have to get technical people to review this.

Thanks!

Comment by Shuichi Ihara (Inactive) [ 08/Jan/14 ]

OK, thanks! We hope people can get QoS functionality with Lustre soon, and we want it as well!

Comment by Andreas Dilger [ 13/Jan/14 ]

The patch http://review.whamcloud.com/6901 was landed to master for 2.6.

This functionality also needs an update to the manual to explain what this feature does, and how to use it. Please see https://wiki.hpdd.intel.com/display/PUB/Making+changes+to+the+Lustre+Manual. Please submit an LUDOC jira ticket to track the manual update, and link it here.

Comment by Li Xi (Inactive) [ 14/Jan/14 ]

No problem! We will submit a manual update soon. Thanks!

Comment by Andreas Dilger [ 04/Feb/14 ]

I filed LUDOC-221 to track the documentation update for the TBF feature.

Comment by Jodi Levi (Inactive) [ 06/Mar/14 ]

Can this ticket now be closed since we have LUDOC-221 to track the manual updates?

Comment by Andreas Dilger [ 07/Mar/14 ]

Patch has landed for 2.6.0, using LUDOC-221 for tracking the remaining work on the manual.

Generated at Sat Feb 10 01:34:56 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.