[LU-4215] Some expected improvements for OUT Created: 06/Nov/13 Updated: 24/Jan/22 |
|
| Status: | Open |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.6.0 |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Minor |
| Reporter: | nasf (Inactive) | Assignee: | Alex Zhuravlev |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | LMR |
| Issue Links: |
|
| Severity: | 3 |
| Rank (Obsolete): | 11467 |
| Description |
|
1. The OUT RPC service threads on the MDT and on the OST use different reply portals, which confuses OUT RPC users. On the MDT side it is: .psc_buf = {
.bc_nbufs = MDS_NBUFS,
.bc_buf_size = OUT_BUFSIZE,
.bc_req_max_size = OUT_MAXREQSIZE,
.bc_rep_max_size = OUT_MAXREPSIZE,
.bc_req_portal = OUT_PORTAL,
.bc_rep_portal = MDC_REPLY_PORTAL,
},
On the OST side it is: .psc_buf = {
.bc_nbufs = OST_NBUFS,
.bc_buf_size = OUT_BUFSIZE,
.bc_req_max_size = OUT_MAXREQSIZE,
.bc_rep_max_size = OUT_MAXREPSIZE,
.bc_req_portal = OUT_PORTAL,
.bc_rep_portal = OSC_REPLY_PORTAL,
},
For the case where both the MDT and the OST run on the same physical server node (especially in VM test environments), when an OSP wants to talk to the OST via OUT_PORTAL, the OUT RPC may unexpectedly be handled by an MDT-side OUT RPC service thread and replied to via MDC_REPLY_PORTAL instead of OSC_REPLY_PORTAL, on which the OSP is waiting for the reply. The OSP-side OUT RPC then times out and is resent again and again. The same bad case can happen when an OSP wants to talk to an MDT via OUT_PORTAL. Because DNE I already uses OUT RPCs for communication among MDTs, and to stay compatible with old versions, we cannot change the MDT-side OUT RPC reply portal. So we have to change the OST-side OUT RPC reply portal to "MDC_REPLY_PORTAL", but it is strange for the OST side to use the MDT-side reply portal. 2. The OUT RPC version is fixed at "LUSTRE_MDS_VERSION" regardless of whether the RPC goes to an MDT or to an OST, which also confuses others. We could re-define "tgt_out_handlers", but that may break the policy of the Unified Target. 3. Pack multiple idempotent sub-requests into a single OUT RPC. In general, an OUT RPC should not assume that its sub-requests are related to each other, so even if one sub-request fails to execute, the others should not be ignored. The current implementation does not behave this way; if the other sub-requests are unrelated to the failed one, that behavior is unexpected. Unfortunately, it is not easy to judge whether one sub-request is related to the others within the current OUT request format, especially while remaining compatible with DNE I. 4. Iteration via OUT. I found some client-side iteration framework in osp_md_object.c, but there seems to be no server-side handler. Do we have any plan to support that? |
| Comments |
| Comment by Andreas Dilger [ 06/Nov/13 ] |
|
For #3 there is the idea of "batchid" in the OUT request structure: struct update {
__u32 u_type;
__u32 u_batchid;
struct lu_fid u_fid;
__u32 u_lens[UPDATE_BUF_COUNT];
__u32 u_bufs[0];
};
This allows a batched request to combine multiple updates into a single transaction by using the same "u_batchid", while updates with different "u_batchid" values may be put into separate transactions. I think Di has a patch to change the OUT protocol a bit, though it doesn't really change the above semantic of using u_batchid to decide which updates belong in the same transaction. I'm not sure which patch of his this is, but it is intended to allow passing the master transno as the u_batchid. This would make the OUT protocol incompatible with older servers, but since it is currently only used between the MDTs this shouldn't be a big problem (they would need to be updated at the same time anyway). |
| Comment by Alex Zhuravlev [ 06/Nov/13 ] |
|
I also would like to make minor changes to the protocol:
|
| Comment by Andreas Dilger [ 06/Mar/14 ] |
|
Di, Alex, can you please comment on whether this bug can be closed? I think many of the improvements discussed here for the update RPC format were landed to master via http://review.whamcloud.com/7128. Are there more changes still needed (which would be best to do in 2.6 while the protocol can still be easily changed), or can it be closed? |
| Comment by Di Wang [ 06/Mar/14 ] |
|
Andreas, nasf, only 1 and 4 are resolved right now. 1 has been landed to master, and 4 will be resolved; 2 can be fixed in 2.6 definitely, IMHO. |
| Comment by nasf (Inactive) [ 07/Mar/14 ] |
|
For requirement 3, if we want to support batched attr_get/xattr_get on multiple OST-objects via a single OUT RPC, then we need to continue the OUT RPC handling even if some sub-requests fail. For example, the MDT (via OSP) wants to attr_get/xattr_get on both OST-object1 and OST-object2 via a single OUT RPC, and it does not know whether the two target objects exist or not; so on the OST side, it should not skip OST-object2 even if OST-object1 does not exist. Currently, because we do not support that well yet, the MDT (via OSP) only packs sub-requests that belong to the same target object into the OUT RPC. Once we improve that, we can consider making the OUT RPC more efficient. |
| Comment by Alex Zhuravlev [ 07/Mar/14 ] |
|
yes, the ability to continue processing in case of error is important for batched DESTROYs, for example. As for efficiency, I think we should not use obdo - it's huge; instead we should probably be able to get/set just a subset of attributes, as dt_attr_set() allows. |
| Comment by Andreas Dilger [ 25/Apr/14 ] |
|
Alex, is there a chance for you to work on patches for 2.6 for the #2 and #3 items? Di already has far too many 2.6 blocker bugs to work on this, so if we want these changes then you are the best candidate to do the work. |
| Comment by Alex Zhuravlev [ 28/Apr/14 ] |
|
Andreas, yes. |
| Comment by Alex Zhuravlev [ 29/Apr/14 ] |
|
Di, could you clarify on #2 a bit please? |
| Comment by Di Wang [ 06/May/14 ] |
|
Hmm, I think #2 means we also pack OUT RPCs with LUSTRE_MDS_VERSION (see out_prep_update_req), no matter whether the OUT RPC will be sent to an MDS or an OST. Right now, DNE only sends OUT RPCs to other MDSes, but for LFSCK I assume some OUT RPCs need to be sent to OSTs. So I think this is the one that needs to be fixed. Though I guess the request is from the LFSCK project, probably Fan Yong can confirm. |
| Comment by nasf (Inactive) [ 07/May/14 ] |
|
Currently, LFSCK uses OUT RPCs to talk with the OST via the OSP, and it shares the interface out_prep_update_req() with the RPCs to/from the MDT. Inside that function, it always uses LUSTRE_MDS_VERSION regardless of whether the RPC is for an OST or an MDT, which is confusing. |
| Comment by Andreas Dilger [ 30/May/14 ] |
|
It seems #3 is the only item still outstanding. Is the code to handle batched requests working? |
| Comment by nasf (Inactive) [ 30/May/14 ] |
|
The code for batched requests has worked since DNE 1. The trouble is that the handling of the batched requests within a single OUT RPC stops when it hits a failure in one of the sub-requests, and the remaining sub-requests are ignored even though they are not related to the failed one (that is #3). |
| Comment by Andreas Dilger [ 07/Oct/14 ] |
|
Di, Nasf, what is the status on fixing this last issue? What is the proposed solution? Should the server mark all later batchids as failed, or should it try to execute them? What if they are dependent on each other? Is there a flag that could be set on the batch that indicates if it should be executed even if the previous batch failed? |
| Comment by Di Wang [ 07/Oct/14 ] |
|
I just checked the current master code; this seems not resolved yet, and I am not sure about Nasf's patches. For DNE, it always fails immediately, which is good enough even for DNE2. For LFSCK, is this only for read-only updates like getattr? Hmm, there is padding in the OSP update request: /* Hold object_updates sent to the remote OUT in a single RPC */
struct object_update_request {
	__u32 ourq_magic;
	__u16 ourq_count;	/* number of ourq_updates[] */
	__u16 ourq_padding;
	struct object_update ourq_updates[0];
};
We can add the flag there. |
| Comment by Alex Zhuravlev [ 07/Oct/14 ] |
|
the ability to proceed is important for batched destroys. |
| Comment by nasf (Inactive) [ 08/Oct/14 ] |
|
Because the original master did not support executing other batchids after a former one failed, the OSP (for LFSCK) only aggregates sub-requests that operate on the same object into the same OUT RPC. So even without resolving the batchid issues, LFSCK still works, although it may be inefficient. |
| Comment by Andreas Dilger [ 12/Jan/15 ] |
|
This bug has been dropped from 2.7.0 because there hasn't been any progress on it in several months. Is this going to cause major protocol incompatibility if this is fixed in 2.8.0? If yes, is anyone able to fix the problems in the current code in the next week or so? |
| Comment by nasf (Inactive) [ 13/Jan/15 ] |
|
The remaining issue is #3, which is a performance improvement. It is essential for neither LFSCK nor DNE. I am not sure whether Alex or Di has made any patches for that (I have NOT yet, because of other LFSCK tickets). From the LFSCK view, it changes nothing about the OUT protocol. Even if someone changes the OUT protocol for #3 in the future, there will be no LFSCK-specific trouble. |
| Comment by Alex Zhuravlev [ 30/Sep/15 ] |
|
this improvement is needed to shrink the records going to ZIL. The patch mentioned in the bug shrinks the average record on the MDT from 1541 to 407 bytes. |
| Comment by Alex Zhuravlev [ 30/Sep/15 ] |