[LU-3421] (ost_handler.c:1762:ost_blocking_ast()) Error -2 syncing data on lock cancel causes first ENOSPC client issues then MDS server locks up Created: 30/May/13 Updated: 18/Sep/14 Resolved: 03/Sep/13 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.4.0 |
| Fix Version/s: | Lustre 2.4.1, Lustre 2.5.0 |
| Type: | Bug | Priority: | Minor |
| Reporter: | James A Simmons | Assignee: | Zhenyu Xu |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | llnl |
| Environment: |
RHEL6.4 servers running Lustre 2.4-rc2 and Cray clients running 2.4.0-rc1 (SLES11 SP1) / 2.4.0-rc2 (SLES11 SP2) |
||
| Attachments: |
|
| Issue Links: |
|
| Severity: | 3 |
| Rank (Obsolete): | 8486 |
| Description |
|
Several times during our test shot we would encounter a situation where OSTs would report ENOSPC even though there were enough space and inodes available. Over time the MDS would become pinned and had to be rebooted to get a working file system again. I have attached the console logs for the MDS and OSS. |
| Comments |
| Comment by Peter Jones [ 30/May/13 ] |
|
Bobijam, could you please look into this one? Thanks, Peter |
| Comment by James A Simmons [ 30/May/13 ] |
|
Backtrace from the vmcore, which will be uploaded soon:

PID: 4814   TASK: ffff880e23333500  CPU: 15   COMMAND: "mdt03_140" |
| Comment by Andreas Dilger [ 30/May/13 ] |
|
It looks like there are a couple of different bugs being hit here:
May 29 08:18:40 widow-mds1 kernel: [76013.652071] Lustre: routed1-OST0015-osc-MDT0000: slow creates, last=[0x100150000:0x22041:0x0], next=[0x100150000:0x22041:0x0], reserved=0, syn_changes=2061, syn_rpc_in_progress=1, status=-19
May 29 08:19:32 widow-mds1 kernel: [76065.440057] Lustre: routed1-OST00f0-osc-MDT0000: slow creates, last=[0x100f00000:0x25e93:0x0], next=[0x100f00000:0x25e93:0x0], reserved=0, syn_changes=0, syn_rpc_in_progress=1, status=-19
May 29 08:23:37 widow-mds1 kernel: [76310.325026] LustreError: 6065:0:(osp_precreate.c:484:osp_precreate_send()) routed1-OST002a-osc-MDT0000: can't precreate: rc = -28
May 29 08:23:37 widow-mds1 kernel: [76310.367372] LustreError: 6065:0:(osp_precreate.c:484:osp_precreate_send()) Skipped 70657 previous similar messages

There was previously debugging code in osp_precreate_reserve() because I was worried about exactly this kind of situation, but my recent patch http://review.whamcloud.com/6219 (commit dc2bcafd2a0b) removed it before the 2.4.0 release because it would have LBUG'd in this case:

@@ -1062,17 +1061,6 @@ int osp_precreate_reserve(const struct lu_env *env, struct osp_device *d)
 	while ((rc = d->opd_pre_status) == 0 || rc == -ENOSPC ||
 	       rc == -ENODEV || rc == -EAGAIN) {
-#if LUSTRE_VERSION_CODE < OBD_OCD_VERSION(2, 3, 90, 0)
-		/*
-		 * to address Andreas's concern on possible busy-loop
-		 * between this thread and osp_precreate_send()
-		 */
-		if (unlikely(count++ == 1000)) {
-			osp_precreate_timeout_condition(d);
-			LBUG();
-		}
-#endif
 		/*
 		 * increase number of precreations
 		 */

It probably makes sense to reinstate this if it loops only, say, twice for -ENOSPC, since there are other OSTs that could be used and it doesn't make sense to block the MDS thread for such a long time.

It seems to me that the update of osp_pre_status at the start of osp_pre_update_status() is racy. If rc == 0 (i.e. the statfs succeeded) then osp_pre_status = 0, even though it is set to -ENOSPC again later on. It would be better to have something like:

	if (rc != 0) {
		d->opd_pre_status = rc;
		goto out;
	}

Otherwise, opd_pre_status can be changed to 0 and break the osp_precreate_ready_condition() checking of -ENOSPC.

It also seems that the -ENOSPC hysteresis that is so well described in the comment in osp_pre_update_status() is not actually implemented. When the free space is < 0.1%, opd_pre_status = -ENOSPC, but it is immediately cleared again as soon as the free space is >= 0.1%. It also seems that there is a longstanding bug in the code, with min(used blocks vs. 1 GByte). It seems something like the following is needed:

	/* On a very large disk, say 16TB, 0.1% will be 16 GB. We don't want
	 * to lose that amount of space, so in those cases we report no space
	 * left if there is less than 1 GB left, and clear it at 2 GB. */
	if (likely(msfs->os_type != 0)) { /* msfs contains valid data */
		used = min((msfs->os_blocks - msfs->os_bfree) >> 10,
			   1ULL << (30 - msfs->os_bsize));
		if (d->opd_pre_status == 0 &&
		    (msfs->os_ffree < 32 || msfs->os_bavail < used)) {
			d->opd_pre_status = -ENOSPC;
			:
			:
		} else if (d->opd_pre_status == -ENOSPC &&
			   msfs->os_ffree > 64 && msfs->os_bavail > used * 2) {
			d->opd_pre_status = 0;
			:
			:
		}
	}
The opd_pre_status = 0 state should only be set if there is enough space, not only if the STATFS RPC succeeded. |
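For illustration, a minimal self-contained sketch of the hysteresis described above, in plain C with hypothetical names (space_state, update_status) standing in for the real osp_device/obd_statfs fields; the 1 GB enter / 2 GB clear thresholds and the 32/64 inode bounds follow the pseudo-code in the previous comment:

	#include <errno.h>

	/* Hypothetical snapshot of an OST's free space; stands in for the
	 * real obd_statfs / osp_device fields used by osp_pre_update_status(). */
	struct space_state {
		unsigned long long bavail_bytes;   /* free bytes available */
		unsigned long long ffree;          /* free inodes */
		int pre_status;                    /* 0 or -ENOSPC */
	};

	#define ENTER_ENOSPC_BYTES (1ULL << 30)    /* report no space below 1 GB */
	#define CLEAR_ENOSPC_BYTES (2ULL << 30)    /* clear only once above 2 GB */

	/* Update pre_status with hysteresis: the threshold for entering
	 * -ENOSPC (1 GB / 32 inodes) is lower than the threshold for leaving
	 * it (2 GB / 64 inodes), so small fluctuations around the limit do
	 * not make the status flap on every statfs. */
	static void update_status(struct space_state *s)
	{
		if (s->pre_status == 0 &&
		    (s->ffree < 32 || s->bavail_bytes < ENTER_ENOSPC_BYTES))
			s->pre_status = -ENOSPC;
		else if (s->pre_status == -ENOSPC &&
			 s->ffree > 64 && s->bavail_bytes > CLEAR_ENOSPC_BYTES)
			s->pre_status = 0;
		/* otherwise keep the previous state */
	}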
| Comment by Andreas Dilger [ 30/May/13 ] |
|
James, I expect the stack trace you pointed out is not the primary cause of this problem. It just looks like an MDS thread that is blocked waiting for a DLM lock held by another thread. My guess is that there is something wrong in the precreate code at ENOSPC time that is keeping the MDS threads blocked for too long, and that this is caused either by really being out of space on the OSTs (inodes or blocks amounts to the same thing, since there is no point allocating objects on an OST with no space) or by a false ENOSPC caused by a problem with the space grants or similar. It does point out that something is wrong with our usage of the DLM locks, or similar, because we should never be blocked for such a long time while holding a spinlock. |
| Comment by James A Simmons [ 30/May/13 ] |
|
I uploaded the vmcore to ftp.whamcloud.com/uploads/ |
| Comment by James A Simmons [ 30/May/13 ] |
|
Andreas, the lfs df -i output was pretty much what you see below. The lfs df results were basically the same. In the last test shot we saw issues with 4 OSTs becoming very full, but that was not the case this time.

/lustre/routed1-OST0130_UUID 3872000 86598 3785402 2% /lustre/routed1[OST:304]

Talking with Oleg, he didn't believe it was due to grants. Each OST is 250GB in size. I also expect the real issue is happening on the OSS rather than the MDS, but the admins only gathered a vmcore from the MDS. |
| Comment by Alex Zhuravlev [ 31/May/13 ] |
|
I agree that osp_pre_update_status() should not be setting the status to 0 first. However, I think the check to limit the time we spend in osp_precreate_reserve() does not depend on the status; it is a hard limit. And in the worst case, when osp_precreate_ready_condition() misses -ENOSPC due to the race, shouldn't we still leave osp_pre_update_status() upon expiration? Note that the expiration point is not reset in the loop. |
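For what it's worth, a minimal sketch of the hard limit being discussed, in plain C with hypothetical get_status()/try_reserve() helpers rather than the Lustre wait-queue machinery: the expiration point is computed once before the retry loop and is never reset inside it, so the caller leaves after a bounded time no matter how the status flips.

	#include <errno.h>
	#include <time.h>
	#include <unistd.h>

	/* Hypothetical helpers standing in for opd_pre_status and the
	 * precreated-object grab done inside osp_precreate_reserve(). */
	extern int get_status(void);
	extern int try_reserve(void);

	/* Wait for a precreated object for at most timeout_sec seconds. */
	static int reserve_with_deadline(int timeout_sec)
	{
		struct timespec now, expire;
		int rc;

		clock_gettime(CLOCK_MONOTONIC, &expire);
		expire.tv_sec += timeout_sec;          /* not reset inside the loop */

		while ((rc = get_status()) == 0 || rc == -ENOSPC || rc == -EAGAIN) {
			if (rc == 0 && try_reserve() == 0)
				return 0;                      /* got an object */

			clock_gettime(CLOCK_MONOTONIC, &now);
			if (now.tv_sec >= expire.tv_sec)
				return rc ? rc : -ETIMEDOUT;   /* hard limit reached */

			/* the real code sleeps on a wait queue instead */
			usleep(100 * 1000);
		}
		return rc;
	}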
| Comment by Zhenyu Xu [ 31/May/13 ] |
|
James, Do you have OSS debug log? Want to check the reason why it reports ENOSPC while it seems there are plenty. |
| Comment by James A Simmons [ 31/May/13 ] |
|
The only OSS logs I have are like the one I posted here. |
| Comment by Andreas Dilger [ 03/Jun/13 ] |
|
James, how many clients were mounting this filesystem? If each OST is 250GB and each client gets a 32MB grant, that means 32 clients/GB of free space, so 8000 clients would essentially pin all of the available space on each OST. I see something a bit strange in the code that might be causing a problem here:

static int ofd_statfs(...)
{
	osfs->os_bavail -= min_t(obd_size, osfs->os_bavail,
				 ((ofd->ofd_tot_dirty + ofd->ofd_tot_pending +
				   osfs->os_bsize - 1) >> ofd->ofd_blockbits));
	:
	:
	/* The QoS code on the MDS does not care about space reserved for
	 * precreate, so take it out. */
	if (exp_connect_flags(exp) & OBD_CONNECT_MDS) {
		struct filter_export_data *fed;

		fed = &obd->obd_self_export->exp_filter_data;
		osfs->os_bavail -= min_t(obd_size, osfs->os_bavail,
					 fed->fed_grant >> ofd->ofd_blockbits);
	}

This is subtracting the granted space from the available space returned to the MDS, but I think it should be adding the granted space back into os_bavail so that the MDS does not consider the grant space as "used". Otherwise, if the clients have reserved a lot of space on the OSTs, they may not actually get to use it because the MDS will never allocate space there.

A secondary issue is that there is no coordination between the space granted to a specific client and the objects that the MDS allocates to that client, which becomes more important as the free space runs out. There could be some kind of (IMHO complex) coordination here between the MDS and clients/OSTs, but I think it would be easier to just get the grant shrinking code working again, since there is no guarantee that (a) clients doing IO will have any grant at all, and (b) the clients have grant on the OSTs they have been asked to write to. Returning unused grant to the OSTs as the free space shrinks is the best way to ensure that there is some left for the clients actually doing IO. |
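As a quick sanity check of the grant arithmetic above, a tiny standalone calculation (the 250 GB OST size and ~32 MB per-client grant are the figures quoted in this ticket; everything else is illustrative):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long ost_bytes   = 250ULL << 30;  /* 250 GB OST, as quoted above */
		unsigned long long grant_bytes = 32ULL << 20;   /* ~32 MB grant per client */

		/* number of clients whose grant alone would cover the whole OST */
		unsigned long long clients = ost_bytes / grant_bytes;

		printf("%llu clients' grant would pin a %llu GB OST\n",
		       clients, ost_bytes >> 30);              /* prints 8000 ... 250 */
		return 0;
	}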
| Comment by Johann Lombardi (Inactive) [ 04/Jun/13 ] |
This is actually only subtracting the space reserved for pre-creation for which we use the self export.
I think there is nothing to add back, since only tot_dirty and tot_pending are taken into account here. Please note that tot_granted is not withdrawn anywhere.
I would agree if we were taking tot_granted out in the statfs reply; however, I don't think this is the case.
I am all for resurrecting grant shrinking, although I haven't had the opportunity to do it yet for lack of time. In fact, we might as well just disconnect (and therefore release grants) from OSTs when idle and when the replay list is empty. We could then reconnect on demand. IMHO, such a scheme would have other benefits: fewer clients to wait for during recovery and fewer PING requests. As for the original problem, it seems that precreate requests fail with ENOSPC on the OST:

May 29 09:00:06 widow-oss11a4 kernel: [79146.403713] LustreError: 25344:0:(ofd_obd.c:1338:ofd_create()) routed1-OST016b: unable to precreate: rc = -28

James, could you please dump the grant state on the OST by running "lctl get_param obdfilter.*.tot* obdfilter.*.grant*"? |
| Comment by Johann Lombardi (Inactive) [ 04/Jun/13 ] |
|
While running some tests locally, I found that the space reserved for precreation always decreases, eventually reaches 0, and stays there. It seems that we exit from ofd_grant() at line 641:

637         /* align grant on block size */
638         grant &= ~((1ULL << ofd->ofd_blockbits) - 1);
639
640         if (!grant)
641                 RETURN(0);

I think there are two issues:
I will provide a patch. |
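A small standalone illustration (not the actual ofd_grant() code) of the truncation described above: once the amount left to grant is smaller than one filesystem block, aligning it down to a block boundary yields 0 and nothing is granted, so the reserved space can only shrink toward zero.

	#include <stdio.h>

	/* Round a requested grant down to a block boundary, as in the excerpt
	 * above; blockbits is log2 of the block size (e.g. 12 for 4 KB blocks). */
	static unsigned long long align_grant(unsigned long long grant,
					      unsigned int blockbits)
	{
		grant &= ~((1ULL << blockbits) - 1);
		return grant;            /* anything below one block becomes 0 */
	}

	int main(void)
	{
		/* 4 KB blocks: a 3000-byte request is truncated to nothing,
		 * a 9000-byte request keeps only two full blocks. */
		printf("%llu\n", align_grant(3000, 12));   /* 0    */
		printf("%llu\n", align_grant(9000, 12));   /* 8192 */
		return 0;
	}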
| Comment by James A Simmons [ 06/Jun/13 ] |
|
I tested your patch at small scale and it works fine. I'd like to keep this ticket open until our next test shot to ensure this addresses the problem. Andreas pointed out other issues as well. |
| Comment by Peter Jones [ 06/Jun/13 ] |
|
OK James. Do you have a timeframe for your next test shot yet? |
| Comment by James A Simmons [ 07/Jun/13 ] |
|
Looks like the end of July for our next test shot. I will see if I can duplicate it at a smaller scale in the meantime. |
| Comment by Peter Jones [ 03/Sep/13 ] |
|
James, this patch has landed for both 2.4.1 and 2.5. Has the issue reproduced in any test runs featuring this patch? If not, then perhaps we can close the ticket and reopen it if it ever reappears... Peter |
| Comment by James A Simmons [ 03/Sep/13 ] |
|
I haven't seen this bug in some time. You can close it. |
| Comment by Peter Jones [ 03/Sep/13 ] |
|
Thanks James! |