[LU-8696] "ls" hangs on a particular directory on production system Created: 12/Oct/16  Updated: 13/Nov/17  Resolved: 09/Mar/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.5.3, Lustre 2.8.0
Fix Version/s: None

Type: Bug Priority: Critical
Reporter: Feiyi Wang Assignee: nasf (Inactive)
Resolution: Fixed Votes: 0
Labels: None
Environment:

OLCF Atlas production system: clients running 2.8.0+ (with patches), server running 2.5.5+ (with patches)


Attachments: File atlas2-mds3_ls_for_fprof.out.gz    
Issue Links:
Cloners
is cloned by LU-10237 "ls" hangs on a particular directory Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

On the atlas2 file system we have a particular directory where any operation such as "ls" or "stat" completely hangs the process. No OS or Lustre error is reported on the client side. On the server side we did observe OI scrub messages a few times, which may suggest some MDS data inconsistency that the scrubber is "trying" to fix, to no avail. We cannot correlate the two yet.

The ops team collected traces on the client side as follows:

mount -t lustre 10.36.226.77@o2ib:/atlas2 /lustre/atlas2 -o rw,flock,nosuid,nodev
lctl set_param osc/*/checksums 0
echo "all" > /proc/sys/lnet/debug
echo "1024" > /proc/sys/lnet/debug_mb

Step1: lctl dk > /dev/null
Step2: cd /lustre/atlas2/path/to/offending_directory/
Step3: ls
Step4: Wait 30 seconds
Step5: lctl dk > atlas2-mds3_ls_for_fprof.out

The log is attached.



 Comments   
Comment by Feiyi Wang [ 12/Oct/16 ]

The attached trace was collected from the client side.

Comment by Joseph Gmitter (Inactive) [ 14/Oct/16 ]

Hi Fan Yong,

Could you please have a look at this issue?

Thanks.
Joe

Comment by nasf (Inactive) [ 23/Oct/16 ]
...
00000002:00100000:1.0:1476224347.810367:0:14314:0:(mdc_locks.c:612:mdc_finish_enqueue()) @@@ op: 8 disposition: b, status: -115  req@ffff883f9d89a0c0 x1547933307679040/t0(0) o101->atlas2-MDT0000-mdc-ffff883fe0706c00@10.36.226.121@o2ib:12/10 lens 696/536 e 0 to 0 dl 1476224665 ref 1 fl Interpret:R/0/0 rc 301/301
00010000:00000001:1.0:1476224347.810373:0:14314:0:(ldlm_lock.c:598:__ldlm_handle2lock()) Process entered
00000020:00000001:1.0:1476224347.810373:0:14314:0:(lustre_handles.c:160:class_handle2object()) Process entered
00000020:00000001:1.0:1476224347.810374:0:14314:0:(lustre_handles.c:183:class_handle2object()) Process leaving (rc=0 : 0 : 0)
00010000:00000001:1.0:1476224347.810376:0:14314:0:(ldlm_lock.c:604:__ldlm_handle2lock()) Process leaving (rc=0 : 0 : 0)
00000002:00000001:1.0:1476224347.810377:0:14314:0:(mdc_locks.c:741:mdc_finish_enqueue()) Process leaving (rc=0 : 0 : 0)
00000002:00000001:1.0:1476224347.810378:0:14314:0:(mdc_locks.c:922:mdc_finish_intent_lock()) Process entered
00000002:00000001:1.0:1476224347.810379:0:14314:0:(mdc_locks.c:946:mdc_finish_intent_lock()) Process leaving (rc=18446744073709551501 : -115 : ffffffffffffff8d)
00000002:00000001:1.0:1476224347.810380:0:14314:0:(mdc_locks.c:1195:mdc_intent_getattr_async_interpret()) Process leaving
00000080:00000001:1.0:1476224347.810382:0:14314:0:(statahead.c:663:ll_statahead_interpret()) Process entered
00000080:00400000:1.0:1476224347.810382:0:14314:0:(statahead.c:675:ll_statahead_interpret()) sa_entry DESIGN_3D_MAR2013 rc -115
00000080:00000001:1.0:1476224347.810384:0:14314:0:(dcache.c:269:ll_intent_release()) Process entered
00000080:00000040:1.0:1476224347.810385:0:14314:0:(dcache.c:271:ll_intent_release()) intent ffff883f928a0d38 released
00000080:00000001:1.0:1476224347.810386:0:14314:0:(dcache.c:282:ll_intent_release()) Process leaving
00000080:00000010:1.0:1476224347.810387:0:14314:0:(statahead.c:680:ll_statahead_interpret()) kfreed 'minfo': 480 at ffff883f928a0c00.
00000080:00000001:1.0:1476224347.810391:0:14314:0:(statahead.c:709:ll_statahead_interpret()) Process leaving (rc=18446744073709551501 : -115 : ffffffffffffff8d)

As shown in the client-side log, when the client tries to stat the object DESIGN_3D_MAR2013 it hits a -115 (-EINPROGRESS) failure. That means the MDT is running OI scrub and the target OI mapping for DESIGN_3D_MAR2013 is not ready yet, so the client has to retry again and again until the related OI mapping is rebuilt, which looks like a hang. As for what caused the OI scrub, we need the MDS-side kernel debug log (-1 level).
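A minimal sketch of capturing that log, reusing the lctl debug interface already shown in the client-side steps above; the output path is only a placeholder:

# On the MDS:
lctl set_param debug=-1                   # full (-1 level) kernel debug
lctl dk > /dev/null                       # clear the existing debug buffer
# ... reproduce the hanging "ls" from a client ...
lctl dk > /tmp/atlas2-mds_full_debug.out  # dump the MDS-side debug log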

So there should be some OI inconsistency. I would suggest running OI scrub on the MDT with the kernel debug log collected (please enable lfsck debugging: "lctl set_param debug=+lfsck"). Please show me the output of "lctl get_param -n osd-ldiskfs.$fsname-MDT0000.oi_scrub" before and after the OI scrub. Thanks!
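A minimal sketch of that sequence, assuming the MDT name atlas2-MDT0000 from the logs above; the lfsck_start invocation and the output path are assumptions, not commands quoted elsewhere in this ticket:

# On the MDS:
lctl set_param debug=+lfsck                            # enable lfsck debug messages
lctl get_param -n osd-ldiskfs.atlas2-MDT0000.oi_scrub  # status before the scrub
lctl lfsck_start -M atlas2-MDT0000 -t scrub            # trigger OI scrub on the MDT
# ... wait for the scrub to reach "status: completed" ...
lctl get_param -n osd-ldiskfs.atlas2-MDT0000.oi_scrub  # status after the scrub
lctl dk > /tmp/atlas2-mds_oi_scrub_debug.out           # dump the kernel debug log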

Comment by Dustin Leverman [ 25/Oct/16 ]

Nasf,
Per Intel's recommendation, we ran an e2fsck during our last test shot to see whether the problem would be fixed (despite the OI scrub messages we were seeing in the logs). We did find some non-critical issues, but we are still seeing the same hanging behavior with this directory. We will have to take a downtime to temporarily upgrade to lustre-2.8 in order to use a functional LFSCK. I'm not 100% sure when we will get this opportunity, but I will keep it on our radar. For your reference, this is the OI scrub "lctl get_param" output you were wanting:

[root@atlas2-mds1 mdt]# lctl get_param -n osd-ldiskfs.atlas2-MDT0000.oi_scrub
name: OI_scrub
magic: 0x4c5fd252
oi_files: 64
status: completed
flags:
param:
time_since_last_completed: 559 seconds
time_since_latest_start: 5295 seconds
time_since_last_checkpoint: 559 seconds
latest_start_position: 12
last_checkpoint_position: 1073741825
first_failure_position: N/A
checked: 406401957
updated: 0
failed: 0
prior_updated: 0
noscrub: 192023
igif: 158502
success_count: 1140
run_time: 4736 seconds
average_speed: 85811 objects/sec
real-time_speed: N/A
current_position: N/A
lf_scanned: 0
lf_reparied: 0
lf_failed: 0

Thanks,
Dustin

Comment by nasf (Inactive) [ 25/Oct/16 ]

It is strange that the OI scrub found no inconsistency; there may be some issue with the OI scrub itself.
Do you have the MDS-side -1 level Lustre kernel debug logs from when the "ls" hung? In addition, would you please use "debugfs" to dump the directory, and its sub-items, that causes the hang on "ls"? Thanks!
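A minimal sketch of such a debugfs dump; the MDT block device and the /ROOT-relative directory path are placeholders, and debugfs opens the device read-only here:

# On the MDS, against the MDT's ldiskfs block device (placeholder /dev/mdt_dev):
debugfs -c -R 'stat /ROOT/path/to/offending_directory' /dev/mdt_dev
debugfs -c -R 'ls -l /ROOT/path/to/offending_directory' /dev/mdt_dev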

Comment by nasf (Inactive) [ 27/Oct/16 ]

Another possible reason is that the FID-in-dirent is corrupted, which would explain why the OI scrub was triggered but no inconsistent OI mapping was found. This can be verified by running "lctl set_param fail_loc=0x1505" on the MDS and trying "ls" again after the setting. If it still hangs, then this is NOT the case; otherwise, we have found the reason.
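A minimal sketch of that verification, using only commands that appear elsewhere in this ticket (the directory path is the placeholder from the description):

# On the MDS:
lctl set_param fail_loc=0x1505   # bypass FID-in-dirent during lookup
# On a client:
ls /lustre/atlas2/path/to/offending_directory/
# On the MDS, after the test:
lctl set_param fail_loc=0        # clear the fail_loc again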

Comment by nasf (Inactive) [ 26/Nov/16 ]

Dustin, do you have more logs or any feedback about trying "lctl set_param fail_loc=0x1505" on the MDS? Thanks!

Comment by nasf (Inactive) [ 24/Jan/17 ]

Ping.

Comment by Dustin Leverman [ 24/Jan/17 ]

Sorry for the delays, Nasf! We will be having an outage on Feb. 07 to test lustre-2.8 servers and will hopefully leave it in production. After this outage we can run an online LFSCK to see if this problem gets resolved.

Comment by James A Simmons [ 28/Feb/17 ]

The command "lctl set_param fail_loc=0x1505" was run on the MDS and it fixed the problem. Thanks nasf

Comment by nasf (Inactive) [ 28/Feb/17 ]

Sorry for the misguidance: "lctl set_param fail_loc=0x1505" is used to bypass FID-in-dirent when it is broken. So if your system hung before but works well with "lctl set_param fail_loc=0x1505", it is quite possible that some FID-in-dirent entries are broken. In that case you need to run namespace LFSCK (with "lctl set_param fail_loc=0") to repair the FID-in-dirent; otherwise, bypassing FID-in-dirent will slow down lookup() performance.
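A minimal sketch of that repair, assuming the MDT name atlas2-MDT0000 from the earlier oi_scrub output; lfsck_start and the mdd.*.lfsck_namespace parameter are the standard LFSCK interfaces and are not quoted elsewhere in this ticket:

# On the MDS:
lctl set_param fail_loc=0                             # drop the FID-in-dirent bypass
lctl lfsck_start -M atlas2-MDT0000 -t namespace       # rebuild FID-in-dirent entries
lctl get_param -n mdd.atlas2-MDT0000.lfsck_namespace  # check progress ("status: completed" when done)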

Comment by James A Simmons [ 28/Feb/17 ]

Since this is the case, we will leave this open until the LFSCK has been run.

Comment by James A Simmons [ 09/Mar/17 ]

The LFSCK run seems to have fixed this.
