[LU-8696] "ls" hangs on a particular directory on production system Created: 12/Oct/16 Updated: 13/Nov/17 Resolved: 09/Mar/17 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | Lustre 2.5.3, Lustre 2.8.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Critical |
| Reporter: | Feiyi Wang | Assignee: | nasf (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Environment: |
OLCF Atlas production system: clients running 2.8.0+ (with patches), server running 2.5.5+ (with patches) |
||
| Attachments: |
|
| Issue Links: |
|
| Severity: | 3 |
| Rank (Obsolete): | 9223372036854775807 |
| Description |
|
On the atlas2 file system, we have a particular directory on which any operation such as "ls" or "stat" completely hangs the process. This produces no OS error or Lustre error on the client side. On the server side, we did observe OI scrub messages a few times, which may suggest some MDS data inconsistency that the scrub is "trying" to fix, but to no avail. We can't correlate the two yet. The ops team collected traces on the client side as follows:

mount -t lustre 10.36.226.77@o2ib:/atlas2 /lustre/atlas2 -o rw,flock,nosuid,nodev
Step 1: lctl dk > /dev/null
Step 2: cd /lustre/atlas2/path/to/offending_directory/

The log is attached. |
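For context, a minimal sketch of one way to capture such a client-side trace with lctl (the wider debug mask, the output path, and the final dump step are assumptions based on common practice, not taken verbatim from this ticket):

    # clear the kernel debug buffer before reproducing the hang
    lctl dk > /dev/null
    # optionally widen the debug mask (assumed; -1 enables all debug flags)
    lctl set_param debug=-1
    # reproduce the problem from the mounted client; the subsequent "ls"/"stat" is what hangs
    cd /lustre/atlas2/path/to/offending_directory/
    # from another shell, dump the accumulated debug log for attachment (path is illustrative)
    lctl dk > /tmp/atlas2-client-debug.log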
| Comments |
| Comment by Feiyi Wang [ 12/Oct/16 ] |
|
from client side |
| Comment by Joseph Gmitter (Inactive) [ 14/Oct/16 ] |
|
Hi Fan Yong,

Could you please have a look at this issue?

Thanks. |
| Comment by nasf (Inactive) [ 23/Oct/16 ] |
|
... 00000002:00100000:1.0:1476224347.810367:0:14314:0:(mdc_locks.c:612:mdc_finish_enqueue()) @@@ op: 8 disposition: b, status: -115 req@ffff883f9d89a0c0 x1547933307679040/t0(0) o101->atlas2-MDT0000-mdc-ffff883fe0706c00@10.36.226.121@o2ib:12/10 lens 696/536 e 0 to 0 dl 1476224665 ref 1 fl Interpret:R/0/0 rc 301/301
00010000:00000001:1.0:1476224347.810373:0:14314:0:(ldlm_lock.c:598:__ldlm_handle2lock()) Process entered
00000020:00000001:1.0:1476224347.810373:0:14314:0:(lustre_handles.c:160:class_handle2object()) Process entered
00000020:00000001:1.0:1476224347.810374:0:14314:0:(lustre_handles.c:183:class_handle2object()) Process leaving (rc=0 : 0 : 0)
00010000:00000001:1.0:1476224347.810376:0:14314:0:(ldlm_lock.c:604:__ldlm_handle2lock()) Process leaving (rc=0 : 0 : 0)
00000002:00000001:1.0:1476224347.810377:0:14314:0:(mdc_locks.c:741:mdc_finish_enqueue()) Process leaving (rc=0 : 0 : 0)
00000002:00000001:1.0:1476224347.810378:0:14314:0:(mdc_locks.c:922:mdc_finish_intent_lock()) Process entered
00000002:00000001:1.0:1476224347.810379:0:14314:0:(mdc_locks.c:946:mdc_finish_intent_lock()) Process leaving (rc=18446744073709551501 : -115 : ffffffffffffff8d)
00000002:00000001:1.0:1476224347.810380:0:14314:0:(mdc_locks.c:1195:mdc_intent_getattr_async_interpret()) Process leaving
00000080:00000001:1.0:1476224347.810382:0:14314:0:(statahead.c:663:ll_statahead_interpret()) Process entered
00000080:00400000:1.0:1476224347.810382:0:14314:0:(statahead.c:675:ll_statahead_interpret()) sa_entry DESIGN_3D_MAR2013 rc -115
00000080:00000001:1.0:1476224347.810384:0:14314:0:(dcache.c:269:ll_intent_release()) Process entered
00000080:00000040:1.0:1476224347.810385:0:14314:0:(dcache.c:271:ll_intent_release()) intent ffff883f928a0d38 released
00000080:00000001:1.0:1476224347.810386:0:14314:0:(dcache.c:282:ll_intent_release()) Process leaving
00000080:00000010:1.0:1476224347.810387:0:14314:0:(statahead.c:680:ll_statahead_interpret()) kfreed 'minfo': 480 at ffff883f928a0c00.
00000080:00000001:1.0:1476224347.810391:0:14314:0:(statahead.c:709:ll_statahead_interpret()) Process leaving (rc=18446744073709551501 : -115 : ffffffffffffff8d)

As shown in the client-side log, when the client tries to stat the object DESIGN_3D_MAR2013, it hits a -115 (-EINPROGRESS) failure. That means the MDT is running OI scrub and the target OI mapping for DESIGN_3D_MAR2013 is not ready yet, so the client has to retry again and again until the related OI mapping is rebuilt, which looks like a hang. As for what triggered the OI scrub, we need the MDS-side kernel debug log (-1 level). There should be some OI inconsistency, so I would suggest you run OI scrub on the MDT with the kernel debug log collected (please enable lfsck debug: "lctl set_param debug=+lfsck"). Please show me the output of "lctl get_param -n osd-ldiskfs.$fsname-MDT0000.oi_scrub" before and after the OI scrub. Thanks! |
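For reference, a minimal sketch of the MDS-side sequence being requested here (the fsname atlas2 and the oi_scrub parameter are taken from this ticket; triggering the scrub via "lfsck_start -t scrub" and the log paths are assumptions and may vary by Lustre version):

    # on the MDS: enable lfsck debug and clear the kernel debug buffer
    lctl set_param debug=+lfsck
    lctl dk > /dev/null
    # record OI scrub state before the run
    lctl get_param -n osd-ldiskfs.atlas2-MDT0000.oi_scrub
    # start an OI scrub on the MDT (assumed invocation)
    lctl lfsck_start -M atlas2-MDT0000 -t scrub
    # ...wait for the scrub to complete, then record state after the run
    lctl get_param -n osd-ldiskfs.atlas2-MDT0000.oi_scrub
    # dump the accumulated debug log for attachment (path is illustrative)
    lctl dk > /tmp/atlas2-mds-debug.log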
| Comment by Dustin Leverman [ 25/Oct/16 ] |
|
Nasf,

[root@atlas2-mds1 mdt]# lctl get_param -n osd-ldiskfs.atlas2-MDT0000.oi_scrub

Thanks, |
| Comment by nasf (Inactive) [ 25/Oct/16 ] |
|
It is strange that the OI scrub has not found any inconsistency. There may be some issue with the OI scrub itself. |
| Comment by nasf (Inactive) [ 27/Oct/16 ] |
|
Another possible reason is that the FID-in-dirent is corrupted, which would explain why the OI scrub was triggered but no inconsistent OI mapping was found. This can be verified by running "lctl set_param fail_loc=0x1505" on the MDS and trying "ls" again after the setting. If it still hangs, then this is NOT the case; otherwise, we have found the reason. |
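For reference, a minimal sketch of that verification, combining the MDS-side setting from this comment with the reset to fail_loc=0 mentioned later in the ticket (the client path is the placeholder from the description):

    # on the MDS: bypass FID-in-dirent lookups (diagnostic setting only)
    lctl set_param fail_loc=0x1505
    # on a client: retry the operation that previously hung
    ls /lustre/atlas2/path/to/offending_directory/
    # on the MDS: clear the fail_loc once the test is done
    lctl set_param fail_loc=0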
| Comment by nasf (Inactive) [ 26/Nov/16 ] |
|
Dustin, do you have more logs or any feedback about trying "lctl set_param fail_loc=0x1505" on the MDS? Thanks! |
| Comment by nasf (Inactive) [ 24/Jan/17 ] |
|
Ping. |
| Comment by Dustin Leverman [ 24/Jan/17 ] |
|
Sorry for the delays, Nasf! We will be having an outage on Feb. 07 to test lustre-2.8 servers and will hopefully leave it in production. After this outage we can run an online lfsck to see if this gets resolved. |
| Comment by James A Simmons [ 28/Feb/17 ] |
|
The command "lctl set_param fail_loc=0x1505" was run on the MDS and it fixed the problem. Thanks nasf |
| Comment by nasf (Inactive) [ 28/Feb/17 ] |
|
Sorry for the confusion: "lctl set_param fail_loc=0x1505" is used to bypass FID-in-dirent when it is broken. So if your system hung before but works well with "lctl set_param fail_loc=0x1505", it is quite possible that some FID-in-dirent entries are broken. In that case, you need to run namespace LFSCK (after resetting with "lctl set_param fail_loc=0") to repair the FID-in-dirent. Otherwise, bypassing FID-in-dirent will slow down lookup() performance. |
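For reference, a minimal sketch of the repair described above (the fsname atlas2 is taken from this ticket; monitoring progress via mdd.*.lfsck_namespace is an assumption about the usual way to check status and may differ between Lustre versions):

    # on the MDS: make sure the diagnostic fail_loc is cleared first
    lctl set_param fail_loc=0
    # start namespace LFSCK on the MDT to rebuild broken FID-in-dirent entries
    lctl lfsck_start -M atlas2-MDT0000 -t namespace
    # monitor progress until the status reports completed (assumed parameter path)
    lctl get_param -n mdd.atlas2-MDT0000.lfsck_namespace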
| Comment by James A Simmons [ 28/Feb/17 ] |
|
Since this is the case, we will leave this open until the lfsck run is complete. |
| Comment by James A Simmons [ 09/Mar/17 ] |
|
The lfsck run seems to have fixed this. |