[LU-3094] Sanity test_132: @@@@@@ FAIL: some glimpse RPC is expected and silent errors during mount Created: 03/Apr/13  Updated: 27/Aug/13  Resolved: 27/Aug/13

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.4.0
Fix Version/s: None

Type: Bug Priority: Minor
Reporter: Keith Mannthey (Inactive) Assignee: Niu Yawei (Inactive)
Resolution: Not a Bug Votes: 0
Labels: None
Environment:

The review queue in Maloo


Issue Links:
Duplicate
duplicates LU-2902 sanity test_156: NOT IN CACHE: before... Resolved
Severity: 3
Rank (Obsolete): 7514

 Description   

Lustre Errors are reported in Sanity Test_132.

I believe they are related to later errors where some /proc stats cannot be read from the OST. LU-2902 (Sanity test 156) and LU-2735 (Sanity test 151) are examples of those later issues.

In general, Test_132 remounts the OSTs, and sometimes all is not well during the mount.

Whenever I find data missing from the /proc filesystem, I see Lustre errors during test 132. Test 132 does not always fail, but there are always errors during the remount.

I find the 132 issues by searching Sanity 156/151 results for FAIL with " NOT IN CACHE: before: , after: " (the null /proc reads).
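The search described above can be sketched as a small script. This is a minimal illustration, not the actual triage tooling; the log directory and file naming are hypothetical stand-ins for console logs pulled from Maloo.

```python
# Sketch: flag log files whose sanity 151/156 output contains the empty
# "NOT IN CACHE" reads described above. Paths and names are hypothetical.
import pathlib

MARKER = "NOT IN CACHE: before: , after: "

def flagged_runs(log_dir="."):
    """Return the names of *.log files containing the null /proc read marker."""
    return [p.name for p in sorted(pathlib.Path(log_dir).glob("*.log"))
            if MARKER in p.read_text(errors="replace")]
```

Runs flagged this way are then cross-checked against the test 132 OST dmesg for the mount-time Lustre errors shown below.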

So what errors are seen?

In the Sanity test 132 OST dmesg you see things like:

Lustre: DEBUG MARKER: mkdir -p /mnt/ost3; mount -t lustre   		                   /dev/lvm-OSS/P3 /mnt/ost3
LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: 
Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/lustre/tests:/usr/lib64/lustre/u
Lustre: DEBUG MARKER: e2label /dev/lvm-OSS/P3 2>/dev/null
Lustre: lustre-OST0001: deleting orphan objects from 0x0:6257 to 6304
LustreError: 8844:0:(ldlm_resource.c:1161:ldlm_resource_get()) lvbo_init failed for resource 6272: rc -2
LustreError: 8844:0:(ldlm_resource.c:1161:ldlm_resource_get()) Skipped 244 previous similar messages
Lustre: DEBUG MARKER: mkdir -p /mnt/ost4
Lustre: DEBUG MARKER: test -b /dev/lvm-OSS/P5
Lustre: lustre-OST0001: deleting orphan objects from 0x0:6257 to 6304
Lustre: Skipped 2 previous similar messages
LustreError: 10424:0:(ldlm_resource.c:1161:ldlm_resource_get()) lvbo_init failed for resource 5376: rc -2
LustreError: 10424:0:(ldlm_resource.c:1161:ldlm_resource_get()) Skipped 22 previous similar messages
Lustre: DEBUG MARKER: mkdir -p /mnt/ost5; mount -t lustre   		                   /dev/lvm-OSS/P5 /mnt/ost5
LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. quota=on. Opts: 
LustreError: 137-5: UUID 'lustre-OST0005_UUID' is not available for connect (no target)
LustreError: 137-5: UUID 'lustre-OST0006_UUID' is not available for connect (no target)

The above strikes me as quite serious: it complains about not finding the underlying storage target.

Lustre: DEBUG MARKER: mkdir -p /mnt/ost3; mount -t lustre   		                   /dev/lvm-OSS/P3 /mnt/ost3
LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: 
Lustre: 5288:0:(client.c:1868:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1364764845/real 1364764845]  req@ffff88030461c800 x1431056565766172/t0(0) o38->lustre-MDT0000-lwp-OST0001@192.168.4.20@o2ib:12/10 lens 400/544 e 0 to 1 dl 1364764855 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: 5288:0:(client.c:1868:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
LustreError: 11293:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880328c36000 x1431056565766204/t0(0) o101->MGC192.168.4.20@o2ib@192.168.4.20@o2ib:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
LustreError: 11293:0:(client.c:1052:ptlrpc_import_delay_req()) Skipped 3 previous similar messages
LustreError: 12040:0:(obd_mount.c:1715:server_register_target()) Cannot talk to the MGS: -5, not fatal

In general, the remount caused by 132 is quite messy in dmesg, but the errors are not always the same; there seem to be communication issues. When 132 hits these Lustre errors, several other tests will have issues.

Most of the time when the mount has errors, 132 itself does not fail, but sanity tests 133, 151, and 156 all fail together.

I opened this as a blocker because I think it may be a root cause of several issues and deserves a proper evaluation.

The runs below are recent examples of the issue; there are plenty more if you search Maloo:
https://maloo.whamcloud.com/test_sets/786cf564-9b82-11e2-bd87-52540035b04c
https://maloo.whamcloud.com/test_sets/636517d2-98f3-11e2-af89-52540035b04c
https://maloo.whamcloud.com/test_sets/016cbfcc-9816-11e2-879d-52540035b04c



 Comments   
Comment by Keith Mannthey (Inactive) [ 03/Apr/13 ]

I am running a 132/133/151/156 test loop on my local VMs overnight.

Comment by Oleg Drokin [ 03/Apr/13 ]

So, the objects deleted during orphan removal seem to be accessed right afterwards (the lvbo_init failures), which should not be possible for real orphans. Did we just delete referenced objects?
This is potentially related to woes in LU-2657

Comment by Keith Mannthey (Inactive) [ 03/Apr/13 ]

In my local testing I see most of these errors occurring without the loss of the /proc entries.

Perhaps the errors are not "ERRORS".

Comment by Peter Jones [ 04/Apr/13 ]

Niu

Could you please look into this one?

Thanks

Peter

Comment by Niu Yawei (Inactive) [ 08/Apr/13 ]

Those lvbo_init failures with -ENOENT should come from orphan cleanup: when the OST tries to clean up orphans, it finds that those objects no longer exist. I don't see why the orphans have already been cleared, but it should not cause the test_132 or the subsequent 151/156 failures.
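For reference, the "rc -2" in the lvbo_init console lines is a negated errno value, i.e. -ENOENT ("No such file or directory"): the object the lock's LVB refers to no longer exists on disk. A quick way to decode such return codes:

```python
# Decode a negated kernel return code like the "rc -2" in the log lines above.
import errno
import os

rc = -2
print(errno.errorcode[-rc], "-", os.strerror(-rc))
# ENOENT - No such file or directory
```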

Comment by Keith Mannthey (Inactive) [ 08/Apr/13 ]

Yes, it is not clear that the errors seen in this test are the root cause of the failing /proc entries.

I can get errors in 132 without the later errors, so there is not a direct correlation.

Comment by Keith Mannthey (Inactive) [ 30/May/13 ]

LU-2979 was the root cause of the real issue that I was hunting.
