Description
During some tests I noticed the following debug message on the console, and I suspect it indicates a resource leak in some code path that fails to clean up properly.
LustreError: 22089:0:(ldlm_resource.c:761:ldlm_resource_complain()) Namespace MGC192.168.20.154@tcp resource refcount nonzero (1) after lock cleanup; forcing cleanup.
LustreError: 22089:0:(ldlm_resource.c:767:ldlm_resource_complain()) Resource: ffff880058917200 (126883877578100/0/0/0) (rc: 0)
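To illustrate the bug class being reported, here is a minimal, generic refcounting sketch in C. It is not Lustre code; struct resource, res_get(), and res_put() are made-up names. It shows how one code path that takes a reference and never releases it leaves the count at 1 when cleanup runs, which is exactly the condition the message above complains about:

#include <stdio.h>

/* Generic illustration (not Lustre code): a refcounted resource where
 * every "get" must be paired with a "put".  If any path takes a
 * reference and never releases it, the count is still nonzero when the
 * owning namespace is torn down. */
struct resource {
    int refcount;
};

static void res_get(struct resource *r) { r->refcount++; }
static void res_put(struct resource *r) { r->refcount--; }

int main(void)
{
    struct resource r = { .refcount = 0 };

    res_get(&r);      /* path A takes a reference ... */
    res_get(&r);      /* ... path B takes another ... */
    res_put(&r);      /* ... path A releases its reference ... */
                      /* ... but path B forgets its res_put() */

    if (r.refcount != 0)
        printf("resource refcount nonzero (%d) after cleanup\n",
               r.refcount);
    return 0;
}

Compiled and run, this prints "resource refcount nonzero (1) after cleanup", mirroring the console message.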
On my system, the LDLM resource ID is always the same: 126883877578100 = 0x736674736574, which happens to be the ASCII (in reverse byte order) for the Lustre fsname of the filesystem being tested, "testfs".
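As a quick check of that decoding, the following standalone C snippet (my own illustration, not part of the Lustre tree) converts the decimal resource ID back to its ASCII bytes. Printing the 64-bit value low byte first recovers "testfs" in natural order, which is why the hex form reads reversed:

#include <stdio.h>

int main(void)
{
    /* First field of the resource name from the log above. */
    unsigned long long res_id = 126883877578100ULL;

    printf("0x%llx = \"", res_id);
    while (res_id != 0) {
        putchar((int)(res_id & 0xff)); /* low byte first: 't','e','s',... */
        res_id >>= 8;
    }
    printf("\"\n"); /* prints: 0x736674736574 = "testfs" */
    return 0;
}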
I don't know for certain when the problem started, but it appears in my /var/log/messages file as far back as I have records of Lustre testing on this machine, 2012/09/02.
The tests that report this include:
- replay-single:
  - test_0c
  - test_10
  - test_13
  - test_14
  - test_15
  - test_17
  - test_19
  - test_22
  - test_24
  - test_28
  - test_53b
  - test_59
- replay-dual:
  - test_5
  - test_6
  - test_9
- insanity:
  - test_0
Note that in my older runs (2012-09-10) the list of tests is very similar, but not identical. I don't know whether this indicates that the failure is due to a race condition (so it only hits on some fraction of test runs), or whether the leak happens differently in the newer code.
Issue Links
- is related to: LU-8792 "Interop - master<->2.8: sanity-hsm test_107: hung while umount MDT" (Closed)