[LU-4345] failed to update accounting ZAP for user Created: 04/Dec/13 Updated: 06/Jul/16 Resolved: 01/Jun/14 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | Lustre 2.6.0, Lustre 2.5.3 |
| Type: | Bug | Priority: | Critical |
| Reporter: | Christopher Morrone | Assignee: | Niu Yawei (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | llnl, mn4 |
| Environment: | Lustre 2.4.0-19chaos |
| Attachments: | |
| Issue Links: | |
| Severity: | 3 |
| Rank (Obsolete): | 11907 |
| Description |
|
We are using lustre 2.4.0-19chaos on our servers running with the ZFS OSD. On some of the OSS nodes we are seeing messages like this:

Nov 6 00:06:29 stout8 kernel: LustreError: 14909:0:(osd_object.c:973:osd_attr_set()) fsrzb-OST0007: failed to update accounting ZAP for user 132245 (-2)
Nov 6 00:06:29 stout8 kernel: LustreError: 14909:0:(osd_object.c:973:osd_attr_set()) Skipped 5 previous similar messages
Nov 6 00:06:38 stout16 kernel: LustreError: 15266:0:(osd_object.c:973:osd_attr_set()) fsrzb-OST000f: failed to update accounting ZAP for user 122392 (-2)
Nov 6 00:06:38 stout16 kernel: LustreError: 15266:0:(osd_object.c:973:osd_attr_set()) Skipped 3 previous similar messages
Nov 6 00:06:40 stout12 kernel: LustreError: 15801:0:(osd_object.c:973:osd_attr_set()) fsrzb-OST000b: failed to update accounting ZAP for user 122708 (-2)
Nov 6 00:06:40 stout12 kernel: LustreError: 15801:0:(osd_object.c:973:osd_attr_set()) Skipped 4 previous similar messages
Nov 7 00:31:36 porter31 kernel: LustreError: 7704:0:(osd_object.c:973:osd_attr_set()) lse-OST001f: failed to update accounting ZAP for user 54916 (-2)
Nov 7 02:53:05 porter19 kernel: LustreError: 9380:0:(osd_object.c:973:osd_attr_set()) lse-OST0013: failed to update accounting ZAP for user 7230 (-2)
Dec 3 12:01:21 stout7 kernel: Lustre: Skipped 3 previous similar messages
Dec 3 13:52:30 stout4 kernel: LustreError: 15806:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0003: failed to update accounting ZAP for user 1752876224 (-2)
Dec 3 13:52:30 stout4 kernel: LustreError: 15806:0:(osd_object.c:967:osd_attr_set()) Skipped 3 previous similar messages
Dec 3 13:52:30 stout1 kernel: LustreError: 15324:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0000: failed to update accounting ZAP for user 1752876224 (-2)
Dec 3 13:52:30 stout1 kernel: LustreError: 15784:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0000: failed to update accounting ZAP for user 1752876224 (-2)
Dec 3 13:52:30 stout14 kernel: LustreError: 16345:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST000d: failed to update accounting ZAP for user 1752876224 (-2)
Dec 3 13:52:30 stout12 kernel: LustreError: 32355:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST000b: failed to update accounting ZAP for user 1752876224 (-2)
Dec 3 13:52:30 stout2 kernel: LustreError: 15145:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0001: failed to update accounting ZAP for user 1752876224 (-2)
Dec 3 13:52:30 stout10 kernel: LustreError: 14570:0:(osd_object.c:967:osd_attr_set()) fsrzb-OST0009: failed to update accounting ZAP for user 1752876224 (-2)

First of all, these messages are terrible. If you look at osd_attr_set() there are four identical messages that can be printed. OK, granted, we can look them up by line number, but it would be even better to make them unique. Looking them up by line numbers 967 and 973, it appears that we have hit at least the first two of the "failed to update accounting ZAP for user" messages.

Note that the UID numbers do not look correct to me. Many of them are clearly not in the valid UID range. But then I don't completely understand what is going on here yet. |
| Comments |
| Comment by Christopher Morrone [ 04/Dec/13 ] |
|
Can someone please explain the difference between "la->la_uid" and "obj->oo_attr.la_uid" in this function? Both appear to be used as keys in the ZAP, and we increment the value associated with the former and decrement the value associated with the latter. Is either of these values a fake uid? In other words, should these always be correct UID values as one would see in user space? I am just wondering whether I should assume that the printed uid values really are incorrect. |
| Comment by Peter Jones [ 04/Dec/13 ] |
|
Alex, could you please comment? Thanks, Peter |
| Comment by Alex Zhuravlev [ 10/Dec/13 ] |
|
Christopher, one is the "existing" uid (which is to be decremented) and the other is the "new" uid (which is to be incremented); this is how we move an object from one user to another. To clarify: 122392 doesn't look like a correct UID to you? I'd say so, but I don't manage systems with lots of users. |
| Comment by Christopher Morrone [ 10/Dec/13 ] |
|
I don't understand. What kind of accounting is this? And yes, we do not have UIDs outside of the unsigned 16-bit range. 122392 is definitely not a valid UID. In fact, none of the numbers given in the messages are valid UIDs (note that not all UIDs are in use in the passwd file). |
| Comment by Alex Zhuravlev [ 10/Dec/13 ] |
|
This is quota accounting. Given that it's not easy to enable it on an existing system (with lots of objects to scan in flight, etc.), we decided to have accounting enforced all the time (same for ldiskfs). IOW, for every backend we do track objects/space. |
| Comment by Niu Yawei (Inactive) [ 07/Jan/14 ] |
|
The accounting ZAP update failing with -ENOENT is probably caused by the invalid UID not being found in the ZAP. Is there a reproducer? That would help us debug and see where the invalid UID comes from. |
| Comment by Christopher Morrone [ 21/Jan/14 ] |
|
There is no reproducer known. It just happens randomly in production. |
| Comment by Niu Yawei (Inactive) [ 22/Jan/14 ] |
|
Are there any files in the production system which have such a fake uid as owner? Could you apply the debug patch and see if we can get some clue about where the fake uid comes from? Thanks. |
| Comment by Christopher Morrone [ 22/Jan/14 ] |
|
If any files exist with a UID of 1752876224, it is nearly certain that a Lustre bug is responsible. I'll work the debug patch into the next update cycle. Sadly, we just missed the last one. |
| Comment by Christopher Morrone [ 07/Feb/14 ] |
|
FYI, this patch went onto an ldiskfs-based filesystem and is tripping the ofs_write_attr_set() message constantly. The debug patch is almost certainly going to have to be backed out. Can you work on something that is a bit more targeted? Does only that one message need to come out or are the others going to be as bad as this? |
| Comment by Christopher Morrone [ 07/Feb/14 ] |
|
By the way, what was the goal of the > 7000 part in the file? Did you think there were no valid UIDs greater than 7000? That is certainly not the case. 7230 was not valid in the above example, but that is a hole in the UID space. We have UIDs into the 50000 range at least. |
| Comment by Niu Yawei (Inactive) [ 07/Feb/14 ] |
|
I updated the debug patch; this time the error message will only be printed when the uid is greater than 1700000000. |
| Comment by Christopher Morrone [ 31/Mar/14 ] |
|
It looks like we are hitting this on OSS nodes pretty frequently. Here are some examples:

2014-03-31 10:23:46 LustreError: 6113:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 632227776
2014-03-31 10:23:46 LustreError: 6119:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 632227776
2014-03-31 10:23:46 LustreError: 6113:0:(ofd_obd.c:888:ofd_setattr()) Skipped 1 previous similar message
2014-03-31 10:23:46 LustreError: 6113:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 632227776
2014-03-31 10:48:29 LustreError: 6229:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 857467328
2014-03-31 10:48:29 LustreError: 6229:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 857467328
2014-03-31 10:48:35 LustreError: 6020:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 857467328
2014-03-31 10:48:35 LustreError: 6020:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 857467328
2014-03-31 10:48:44 LustreError: 6224:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 857467328
2014-03-31 10:48:44 LustreError: 6159:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 857467328
2014-03-31 10:48:44 LustreError: 6224:0:(ofd_obd.c:888:ofd_setattr()) Skipped 2 previous similar messages
2014-03-31 10:48:48 LustreError: 6159:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 857467328
2014-03-31 10:48:48 LustreError: 6159:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 857467328
2014-03-31 10:48:48 LustreError: 6159:0:(ofd_objects.c:418:ofd_attr_set()) Skipped 2 previous similar messages
2014-03-31 10:48:52 LustreError: 6307:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 857467328
2014-03-31 10:48:52 LustreError: 6002:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 857467328
2014-03-31 10:48:52 LustreError: 6307:0:(ofd_obd.c:888:ofd_setattr()) Skipped 1 previous similar message
2014-03-31 10:48:56 LustreError: 6002:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 857467328
2014-03-31 10:48:56 LustreError: 6002:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 857467328
2014-03-31 10:48:56 LustreError: 6002:0:(ofd_objects.c:418:ofd_attr_set()) Skipped 1 previous similar message
2014-03-31 10:49:05 LustreError: 6159:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 857467328
2014-03-31 10:49:05 LustreError: 6159:0:(ofd_obd.c:888:ofd_setattr()) Skipped 3 previous similar messages
2014-03-31 10:49:05 LustreError: 6119:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 857467328
2014-03-31 10:49:05 LustreError: 6119:0:(ofd_objects.c:418:ofd_attr_set()) Skipped 3 previous similar messages
2014-03-29 18:55:33 LustreError: 5973:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 938789056
2014-03-29 18:55:33 LustreError: 5968:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 938789056
2014-03-29 18:55:33 LustreError: 5973:0:(ofd_obd.c:888:ofd_setattr()) Skipped 1 previous similar message
2014-03-29 18:55:33 LustreError: 5987:0:(ofd_obd.c:888:ofd_setattr()) LU-4345 set uid: 938789056
2014-03-29 18:55:33 LustreError: 5987:0:(ofd_obd.c:888:ofd_setattr()) Skipped 1 previous similar message
2014-03-29 18:55:33 LustreError: 5987:0:(ofd_objects.c:418:ofd_attr_set()) LU-4345 set uid: 938789056
2014-03-29 18:55:33 LustreError: 5968:0:(osd_object.c:974:osd_attr_set()) lse-OST0013: failed to update accounting ZAP for original user 938788544 (-2)

Note that I had set the "> 7000" to "> 70000", and also added " |
| Comment by Niu Yawei (Inactive) [ 01/Apr/14 ] |
|
The messages show that all these incorrect ids come from owner/group changes (not from punch or from the internal owner/group setting on first write), so I think it could be some problematic application changing owner/group unexpectedly. Could you check whether any files on the MDT were changed to such an invalid owner/group? (I hope they are not temporary files.) Another question is why the ZAP update failed at all; the ZAP should be updated successfully no matter whether the uid/gid is invalid. I think we need a zfs expert to enable debugging on zfs and see what's going on with the ZAP update. |
| Comment by Alex Zhuravlev [ 02/Apr/14 ] |
|
I wonder if this is a race in zap_increment(): there is no locking between zap_lookup() and zap_remove()/zap_update(), meaning in some cases we can try to remove a record that was already removed concurrently. I'd like to suggest http://review.whamcloud.com/#/c/7157/ but unfortunately that one depends on a specific DMU API being exported. |
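For context, a minimal sketch (not the actual ZFS source) of the read-modify-write pattern Alex is describing, with the unprotected window marked. The function name acct_zap_increment() is a placeholder; the ZAP calls shown are the standard ones, but exact signatures can vary between ZFS versions.

	#include <sys/dmu.h>
	#include <sys/zap.h>

	/* Sketch: two threads doing this concurrently against the same key,
	 * with no lock spanning the lookup and the update, can both read the
	 * same old value; if one of them then removes the entry (value hit 0),
	 * the other's zap_remove()/zap_update() can fail with ENOENT (-2). */
	static int
	acct_zap_increment(objset_t *os, uint64_t obj, const char *name,
			   int64_t delta, dmu_tx_t *tx)
	{
		uint64_t value = 0;
		int err;

		err = zap_lookup(os, obj, name, 8, 1, &value);	/* read ...   */
		if (err != 0 && err != ENOENT)
			return (err);
		value += delta;					/* ... modify */
		/* <-- race window: a concurrent caller may update or remove
		 *     the same entry here */
		if (value == 0)
			err = zap_remove(os, obj, name, tx);	/* ... write  */
		else
			err = zap_update(os, obj, name, 8, 1, &value, tx);
		return (err);
	}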
| Comment by Niu Yawei (Inactive) [ 05/May/14 ] |
|
I found that osp can sometimes set a random uid/gid on an OST object (when the user sets only the uid or only the gid).

In osp_sync_add_rec():

	case MDS_SETATTR64_REC:
		rc = fid_to_ostid(fid, &osi->osi_oi);
		LASSERT(rc == 0);
		osi->osi_hdr.lrh_len = sizeof(osi->osi_setattr);
		osi->osi_hdr.lrh_type = MDS_SETATTR64_REC;
		osi->osi_setattr.lsr_oi = osi->osi_oi;
		LASSERT(attr);
		osi->osi_setattr.lsr_uid = attr->la_uid;
		osi->osi_setattr.lsr_gid = attr->la_gid;
		break;

Both the uid and gid from attr are saved into the llog without checking whether they are actually valid (i.e. whether LA_UID and LA_GID are both present in attr->la_valid).

In osp_sync_new_setattr_job():

	body->oa.o_oi = rec->lsr_oi;
	body->oa.o_uid = rec->lsr_uid;
	body->oa.o_gid = rec->lsr_gid;
	body->oa.o_valid = OBD_MD_FLGROUP | OBD_MD_FLID |
			   OBD_MD_FLUID | OBD_MD_FLGID;

We send both the uid and gid from the llog to the OST, and tell the OST that both are valid (OBD_MD_FLUID and OBD_MD_FLGID). This is probably the cause of the random id on the OST object. I think we should store a flag in llog_setattr64_rec to specify which id is valid. Alex, what do you think? |
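A minimal sketch of the kind of change Niu is proposing, assuming a spare field (here called lsr_valid, a hypothetical name) were available in llog_setattr64_rec to carry the validity flags. This illustrates the idea only; it is not the actual patch.

	/* Hypothetical sketch: 'lsr_valid' is an assumed spare field in
	 * llog_setattr64_rec remembering which of uid/gid the client set. */

	/* MDT side, when the llog record is written (cf. osp_sync_add_rec()): */
	osi->osi_setattr.lsr_uid = attr->la_uid;
	osi->osi_setattr.lsr_gid = attr->la_gid;
	osi->osi_setattr.lsr_valid =
		((attr->la_valid & LA_UID) ? OBD_MD_FLUID : 0) |
		((attr->la_valid & LA_GID) ? OBD_MD_FLGID : 0);

	/* OST-sync side, when the setattr RPC is built
	 * (cf. osp_sync_new_setattr_job()): only advertise the ids that were
	 * actually set, so the OST ignores the stale one. */
	body->oa.o_uid = rec->lsr_uid;
	body->oa.o_gid = rec->lsr_gid;
	body->oa.o_valid = OBD_MD_FLGROUP | OBD_MD_FLID | rec->lsr_valid;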
| Comment by Alex Zhuravlev [ 05/May/14 ] |
|
We don't store "validity" in the llog, so I guess the right fix would be to fill the missing uid/gid in the llog record with the current value? |
| Comment by Niu Yawei (Inactive) [ 06/May/14 ] |
You mean get the current ids in the lod layer and pass them to osp via 'attr'? (The 'attr' is 'const'.) |
| Comment by Niu Yawei (Inactive) [ 06/May/14 ] |
| Comment by Swapnil Pimpale (Inactive) [ 12/May/14 ] |
|
Is there a b2_4 backport of this patch? |
| Comment by Peter Jones [ 12/May/14 ] |
|
Swapnil, not yet. The usual practice is to finalize the form of the patch on master before back-porting to earlier branches. Peter |
| Comment by Swapnil Pimpale (Inactive) [ 24/May/14 ] |
|
Peter, could you please provide a b2_4 backport of this patch? We need it at one of our customer sites. Thanks! |
| Comment by Peter Jones [ 24/May/14 ] |
|
Swapnil, sorry if I was not clear previously. Yes, I understand that you would like a b2_4 version of this fix, and as soon as we have finalized the form of the fix we will create one. Regards, Peter |
| Comment by Niu Yawei (Inactive) [ 28/May/14 ] |
| Comment by Swapnil Pimpale (Inactive) [ 28/May/14 ] |
|
Thanks Niu |
| Comment by Andreas Dilger [ 30/May/14 ] |
|
The patch http://review.whamcloud.com/7157 was landed to master and then reverted due to problems. That patch needs to be refreshed. |
| Comment by Jodi Levi (Inactive) [ 01/Jun/14 ] |
|
Follow-on work is being tracked in |
| Comment by Christopher Morrone [ 02/Jun/14 ] |
|
So what is the actual state of the reported bug? Is it now fixed, but because http://review.whamcloud.com/7157 was reverted we now have a potential performance regression? Or is the bug not yet fixed? |
| Comment by Andreas Dilger [ 02/Jun/14 ] |
|
Chris, The 7157 patch is to be tracked under |
| Comment by Alex Zhuravlev [ 03/Jun/14 ] |
|
2600 was doing OK on my local system; unfortunately it seems to fail on maloo sometimes. I asked Brian B. to help with understanding the root cause - somehow dnodes are still referenced when I use dsl_sync_task_nowait(). Once this is sorted out (Brian, please help), the important thing is that w/o 2600 accounting is still racy.. |
| Comment by Brian Behlendorf [ 11/Jun/14 ] |
|
Exactly what kind of failure are you seeing? I don't understand what you mean by 'somehow dnodes are still referenced'. Can you point me at a maloo failure which shows the problem or better describe exactly what the issue is. What you're trying to do in the patch looks reasonable to me on the surface, although the whole thing feels racy. |
| Comment by Alex Zhuravlev [ 11/Jun/14 ] |
|
At umount, the dnodes storing object accounting are still referenced, so dnode_special_close() gets stuck because the meta dnode is referenced by those. Why do you think the whole thing is racy? |
| Comment by Brian Behlendorf [ 11/Jun/14 ] |
|
Ahh OK. I remember now. It looks to me like you're failing to call dmu_tx_commit() after dsl_sync_task_nowait(). The commit is responsible for dropping all the dnode holds and notifying any waiters. Without it the holds are just going to accumulate and I'd expect to see exactly what you're describing. My suggestion would be to take a good look at the spa_history_log() function in zfs. It's a fairly nice example of how to go about this. In this case they create a tx per history update since there aren't that many of them. In the Lustre case I agree it's probably a good idea to batch them as you're doing. However, for the commit callback I'd still strongly suggest allowing a dedicated tx for this purpose. I think it would make the code more readable and easy to verify that you've constructed the tx properly. There's a nice comment in include/sys/dmu.h describing what you can and cannot do when constructing a tx. If you break any of those rules you're likely to see some strange problems.

	/*
	 * You must create a transaction, then hold the objects which you will
	 * (or might) modify as part of this transaction.  Then you must assign
	 * the transaction to a transaction group.  Once the transaction has
	 * been assigned, you can modify buffers which belong to held objects as
	 * part of this transaction.  You can't modify buffers before the
	 * transaction has been assigned; you can't modify buffers which don't
	 * belong to objects which this transaction holds; you can't hold
	 * objects once the transaction has been assigned.  You may hold an
	 * object which you are going to free (with dmu_object_free()), but you
	 * don't have to.
	 *
	 * You can abort the transaction before it has been assigned.
	 *
	 * Note that you may hold buffers (with dmu_buf_hold) at any time,
	 * regardless of transaction state.
	 */

The racy comment was just a subjective feeling I had about the code. There's so much non-private state you're depending on that it's hard to easily look at the code and convince yourself it's safe. A good example of this is the reuse of the ot_tx. It's hard to know exactly what state the tx is in when you enter the function, and depending on what state that is there are certain things you must not do. If you were to create a new tx here it would be much clearer. |
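A rough sketch of the lifecycle Brian describes (create, hold, assign, schedule the sync task, commit). This is illustrative only: schedule_accounting_update(), my_sync_cb(), acct_zap_obj and arg are placeholder names, and the dsl_sync_task_nowait() argument list has changed across ZFS releases (the four-argument form is shown here).

	#include <sys/dmu.h>
	#include <sys/dsl_pool.h>
	#include <sys/dsl_synctask.h>

	/* Placeholder sync-context callback (dsl_syncfunc_t). */
	static void
	my_sync_cb(void *arg, dmu_tx_t *tx)
	{
		/* the batched ZAP updates for this txg would go here */
	}

	static int
	schedule_accounting_update(objset_t *os, dsl_pool_t *dp,
				   uint64_t acct_zap_obj, void *arg)
	{
		dmu_tx_t *tx = dmu_tx_create(os);		 /* 1. create       */
		int err;

		dmu_tx_hold_zap(tx, acct_zap_obj, B_TRUE, NULL); /* 2. hold objects */
		err = dmu_tx_assign(tx, TXG_WAIT);		 /* 3. assign a txg */
		if (err != 0) {
			dmu_tx_abort(tx);	/* abort is only legal before assign */
			return (err);
		}
		/* 4. run my_sync_cb() in sync context for this same txg */
		dsl_sync_task_nowait(dp, my_sync_cb, arg, tx);
		/* 5. the step Brian highlights: commit drops the dnode holds and
		 *    wakes any waiters; without it the holds accumulate and
		 *    dnode_special_close() can hang at umount. */
		dmu_tx_commit(tx);
		return (0);
	}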
| Comment by Alex Zhuravlev [ 11/Jun/14 ] |
|
Hmm, what I'm trying to do is to hit the same transaction (otherwise accounting is not atomic), but at the point when all the normal changes within this txg are done - that is the purpose of dsl_sync_task_nowait(), right? And basically that whole state is per txg and it's supposed to be totally non-racy. Yet another point is that this is exactly (or very nearly) how space accounting works now? |
| Comment by Alex Zhuravlev [ 11/Jun/14 ] |
|
I had a quick look at spa_history_log() and I'm not sure what the difference is. Yes, it does create its "own" tx, but how is it really different, given that all the logic is done in spa_history_log_sync(), which is scheduled with dsl_sync_task_nowait()? In my patch I do have a normal tx created the normal way; I check that this is the first time I see this specific txg, then allocate a structure (which is essentially per-txg) and schedule a function with dsl_sync_task_nowait() which updates a couple of ZAPs.. seems to be exactly the same? |
| Comment by Brian Behlendorf [ 11/Jun/14 ] |
|
Exactly right. You just need this tx to end up in the same txg. As long as that's true you'll have the atomic guarantee: on failure either both will be on disk or neither will. But you don't need to leverage an existing tx to achieve that. If you know the txg you can use dmu_tx_create_assigned() to ensure your tx is part of that txg. |
| Comment by Brian Behlendorf [ 11/Jun/14 ] |
|
The critical bit is the call to dmu_tx_commit() after the dsl_sync_task_nowait() call. Where does this happen in the Lustre code for this tx? |
| Comment by Alex Zhuravlev [ 11/Jun/14 ] |
|
Hmm, osd_trans_stop() does this, otherwise we would block in the very first txg? The sort-of-interesting thing is that the patch gets stuck quite rarely.. it actually passed many maloo runs, and it was hard to hit the issue locally as well. |
| Comment by Brian Behlendorf [ 11/Jun/14 ] |
|
Perhaps it's just me, but I find it hard to walk the code and verify that the tx is being constructed correctly. Are we perhaps taking extra holds now? One thing you could try is to build ZFS with the --enable-debug-dmu-tx option. This will enable a variety of checks to ensure that txs are constructed and managed properly; if they're not you'll ASSERT. They're somewhat expensive so they're disabled by default. As for doing things like the internal ZFS code: quota is managed more like I described above. If you look at dsl_pool_sync(), which is called in the sync'ing context like a synctask, you'll see that it creates a new tx for the correct txg with dmu_tx_create_assigned(). It then goes through the list of dirty dnodes per dataset and updates the ZAPs accordingly. I don't see why Lustre couldn't do something similar. |
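A minimal sketch of the dsl_pool_sync()-style pattern Brian is pointing at, assuming a per-txg list of pending accounting deltas maintained by the OSD; acct_delta_t, apply_accounting_deltas() and acct_zap_obj are hypothetical names, not existing Lustre or ZFS symbols.

	#include <sys/dmu.h>
	#include <sys/dsl_pool.h>
	#include <sys/zap.h>

	/* Hypothetical per-txg accounting delta, assumed to be collected
	 * elsewhere while the txg is open. */
	typedef struct acct_delta {
		char	ad_key[32];	/* e.g. the uid/gid rendered as a key */
		int64_t	ad_delta;	/* object count or space to add       */
	} acct_delta_t;

	/* Called from sync context for 'txg': bind a tx to that exact txg with
	 * dmu_tx_create_assigned() and apply all deltas with it, so the ZAP
	 * updates land in the same txg as the changes they account for. */
	static void
	apply_accounting_deltas(dsl_pool_t *dp, objset_t *os,
				uint64_t acct_zap_obj, uint64_t txg,
				acct_delta_t *deltas, int ndeltas)
	{
		dmu_tx_t *tx = dmu_tx_create_assigned(dp, txg);

		for (int i = 0; i < ndeltas; i++)
			VERIFY0(zap_increment(os, acct_zap_obj,
			    deltas[i].ad_key, deltas[i].ad_delta, tx));

		dmu_tx_commit(tx);
	}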
| Comment by Alex Zhuravlev [ 12/Jun/14 ] |
|
OK, I'll have a closer look at the debug options, but note that spa_history_log() and spa_history_log_sync() don't call dmu_tx_create_assigned() and this still works? I'll also try to use dmu_tx_create_assigned(), thanks. |
| Comment by Brian Behlendorf [ 12/Jun/14 ] |
|
> spa_history_log() and spa_history_log_sync() don't call dmu_tx_create_assigned()

They don't call it because there aren't any atomic requirements for the history. But if you need to make sure all the tx's end up being atomic you'll need dmu_tx_create_assigned(), which is what the quota code uses. Even better, just use the tx passed to the synctask. This is created in dsl_pool_sync() and will be tied to the txg being synced. |
| Comment by Jian Yu [ 13/Aug/14 ] |
|
Here is the back-ported patch for Lustre b2_5 branch: http://review.whamcloud.com/11435 |