Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Environment: CentOS 7 VMs on Lustre 2.14
- Severity: 3
Description
Reproducer:
- Activate "tbf gid" policy:
lctl set_param mds.MDS.mdt.nrs_policies="tbf gid" - Register a rule for a group (with a small rate value):
lctl set_param mds.MDS.mdt.nrs_tbf_rule="start eaujames gid={1000} rate=10" - Start doing md oprations with the limited gid on the mdt (multithreaded file creations/deletions)
- When a message is queued inside the policy, changes the policy to tbf:
lctl set_param mds.MDS.mdt.nrs_policies="tbf" - Stop md operations. Lustre consumes 100% on CPU partition where the message is queued:
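For reference, a minimal sketch of the multithreaded create/delete load used above. The mount point /mnt/lustre, the worker count, and the batch size are assumptions; any parallel create/delete workload run with the rate-limited gid should do:

# Minimal MD load sketch -- /mnt/lustre, 8 workers and 100 files/batch are assumptions.
# Run as a user whose effective group is the limited gid 1000,
# e.g. via: sg <groupname> -c ./mdload.sh
for i in $(seq 1 8); do
    (
        mkdir -p /mnt/lustre/load.$i
        while true; do
            touch /mnt/lustre/load.$i/f.{1..100}   # creations
            rm -f /mnt/lustre/load.$i/f.{1..100}   # deletions
        done
    ) &
done
wait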
On our production filesystem, all CPTs were impacted on MDT0001 (>100 RPCs in queue, load ~300) and one CPT was impacted on MDT0000 (1 RPC in queue, load ~90).
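The queued request shows up in the NRS policy state; the output below corresponds to (command inferred from the parameter name in the output, not copied from the original report):

lctl get_param mds.MDS.mdt.nrs_policies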
mds.MDS.mdt.nrs_policies=
regular_requests:
  - name: fifo
    state: started
    fallback: yes
    queued: 0
    active: 0
  - name: crrn
    state: stopped
    fallback: no
    queued: 0
    active: 0
  - name: tbf
    state: started
    fallback: no
    queued: 1
    active: 0
  - name: delay
    state: stopped
    fallback: no
    queued: 0
    active: 0
When we try to change the policy back to fifo, the switch blocks and tbf stays in the "stopping" state:
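The switch was attempted with the same pattern as the earlier steps (command inferred, not copied from the original report):

lctl set_param mds.MDS.mdt.nrs_policies="fifo"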
mds.MDS.mdt.nrs_policies=
regular_requests:
  - name: fifo
    state: started
    fallback: yes
    queued: 0
    active: 0
  - name: crrn
    state: stopped
    fallback: no
    queued: 0
    active: 0
  - name: tbf
    state: stopping
    fallback: no
    queued: 1
    active: 0
  - name: delay
    state: stopped
    fallback: no
    queued: 0
    active: 0
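One generic way to see where the switch is stuck is to dump the kernel stack of the hung lctl process; this is a debugging sketch, not taken from the original report:

# Dump the kernel stack of the hung lctl (the pgrep pattern is an assumption;
# adjust it to the actual command line).
pid=$(pgrep -of 'lctl set_param')
sudo cat /proc/$pid/stack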
Analysis:
It seems that when we change the tbf policy ("tbf gid" -> "tbf"), the old RPCs queued inside "tbf gid" become inaccessible to the ptlrpc threads.
ptlrpc_wait_event() wakes up when an RPC is reported available to handle. But in that case the ptlrpc thread is unable to retrieve the queued request, so it wakes up again immediately, over and over (causing the CPU load).
00000100:00000001:1.0:1630509978.890060:0:4749:0:(service.c:2029:ptlrpc_server_request_get()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:0.0:1630509978.890060:0:5580:0:(service.c:2008:ptlrpc_server_request_get()) Process entered
00000100:00000001:2.0:1630509978.890061:0:5653:0:(service.c:2029:ptlrpc_server_request_get()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:2.0:1630509978.890061:0:5653:0:(service.c:2248:ptlrpc_server_handle_request()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:1.0:1630509978.890061:0:4749:0:(service.c:2248:ptlrpc_server_handle_request()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:0.0:1630509978.890061:0:5580:0:(service.c:2029:ptlrpc_server_request_get()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:0.0:1630509978.890061:0:5580:0:(service.c:2248:ptlrpc_server_handle_request()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:1.0:1630509978.890062:0:4749:0:(service.c:2244:ptlrpc_server_handle_request()) Process entered
00000100:00000001:1.0:1630509978.890062:0:4749:0:(service.c:2008:ptlrpc_server_request_get()) Process entered
00000100:00000001:2.0:1630509978.890063:0:5653:0:(service.c:2244:ptlrpc_server_handle_request()) Process entered
00000100:00000001:2.0:1630509978.890063:0:5653:0:(service.c:2008:ptlrpc_server_request_get()) Process entered
00000100:00000001:1.0:1630509978.890063:0:4749:0:(service.c:2029:ptlrpc_server_request_get()) Process leaving (rc=0 : 0 : 0)
00000100:00000001:0.0:1630509978.890063:0:5580:0:(service.c:2244:ptlrpc_server_handle_request()) Process entered
00000100:00000001:2.0:1630509978.890064:0:5653:0:(service.c:2029:ptlrpc_server_request_get()) Process leaving (rc=0 : 0 : 0)
On my VM, a single MDT thread calls ptlrpc_server_handle_request() at a ~300 kHz rate (doing nothing).
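For reference, a rough sketch of how such a wake-up rate can be estimated from the Lustre debug log (assumes the debug buffer is large enough to hold about one second of trace entries):

# Rough wake-up rate estimate via the Lustre debug log.
lctl set_param debug=+trace     # enable ENTRY/EXIT function tracing
lctl clear                      # empty the debug buffer
sleep 1
lctl dk /tmp/dk.log             # dump the buffer to a file
grep -c 'ptlrpc_server_handle_request()) Process entered' /tmp/dk.log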