[LU-416] Many processes hung consuming a lot of CPU in Lustre-Client page-cache lookups Created: 15/Jun/11 Updated: 05/Aug/11 Resolved: 30/Jul/11 |
|
| Status: | Resolved |
| Project: | Lustre |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | Sebastien Buisson (Inactive) | Assignee: | Jinshan Xiong (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Attachments: |
|
| Severity: | 3 |
| Bugzilla ID: | 23398 |
| Rank (Obsolete): | 8547 |
| Description |
|
Hi, At CEA they quite often see a problem on Lustre clients where processes get stuck consuming a lot of CPU time in the Lustre layers. Unfortunately, the only way to really recover for now is to reboot the impacted nodes (after waiting on them for several hours), since the involved processes are not killable. Crash dump analysis shows processes stuck with the following stack traces (crash dumps can only be analyzed on the customer site): ========================================================= and/or In the attachment you will find 3 files:
There are also "ll_imp_inval" threads stuck due to this problem, leaving OSCs in the "IN"active state for too long and finally causing time-outs and EIOs for client processes. It seems to be a race around the OSC object pages lock/radix-tree when concurrent accesses occur (OOM, flush, invalidation, concurrent I/O). This problem seems to occur when, on the same ... In order to cope with production imperatives, CEA has set up a work-around that consists in freeing the pagecache with "echo 1 > /proc/sys/vm/drop_caches"; doing so, clients are able to reconnect. On the contrary, and it is interesting to note, clearing the LRU with "lctl set_param ldlm.namespaces.*.lru_size=clear" will hang the node! Does this issue sound familiar? Sebastien. |
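For reference, a condensed sketch of the two commands involved (the work-around that helps, and the command to avoid); the globs are the generic ones quoted above and may need adapting to the local filesystem name:

    # Work-around used in production: free the client page cache so stuck OSCs can reconnect
    echo 1 > /proc/sys/vm/drop_caches

    # Do NOT run this on an affected node: cancelling all LDLM locks walks the same
    # page cache under the contended lock (see the comments below) and hangs the node
    lctl set_param ldlm.namespaces.*.lru_size=clear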
| Comments |
| Comment by Peter Jones [ 15/Jun/11 ] |
|
Oleg, could you please advise on this one? Thanks, Peter |
| Comment by Jinshan Xiong (Inactive) [ 15/Jun/11 ] |
|
It looks like those processes are busy discarding pages, and they all need to grab a global spin lock to do so. If possible, it would be interesting to verify this by running oprofile. |
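A minimal oprofile session along those lines, assuming the legacy opcontrol interface shipped with RHEL6-era distributions and that kernel debuginfo (vmlinux) is installed; the path and duration below are placeholders:

    # profile the whole system for a minute while the processes are spinning
    opcontrol --init
    opcontrol --vmlinux=/usr/lib/debug/lib/modules/$(uname -r)/vmlinux   # assumed debuginfo location
    opcontrol --start
    sleep 60
    opcontrol --dump
    opreport --symbols | head -30    # look for spin_lock / cl_page_gang_lookup near the top
    opcontrol --shutdown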
| Comment by Sebastien Buisson (Inactive) [ 16/Jun/11 ] |
|
CEA confirms that processes are waiting for ages on this global spin lock. |
| Comment by Jinshan Xiong (Inactive) [ 16/Jun/11 ] |
|
What kernel are you using? If you're using RHEL6, we can use a lockless radix tree to fix this problem; otherwise, I will try to work out a workaround to mitigate it. |
| Comment by Peter Jones [ 16/Jun/11 ] |
|
Jinshan, yes, they are using RHEL6. Do you need anything more precise than that? Peter |
| Comment by Jinshan Xiong (Inactive) [ 16/Jun/11 ] |
|
Hi Seba, if you have a test system, you may try the patch at http://review.whamcloud.com/#change,911. That patch is for ... I'm working on using an RCU radix tree to solve the problem. Jinshan |
| Comment by Bruno Faccini (Inactive) [ 16/Jun/11 ] |
|
I would like to clarify/correct Sebastien's "CEA confirms that processes are waiting for ages on this global spin lock." comment. In fact, all involved threads (as the evolving spin-lock counter indicates!) acquire the spin-lock one at a time, go through the radix-tree, and finally release the spin-lock. This pseudo-hang situation could be aggravated by the race on the spin-lock, the radix-tree search, and maybe also "false/unnecessary" trips over the same pages ... |
| Comment by Jinshan Xiong (Inactive) [ 17/Jun/11 ] |
|
Indeed. From the dmesg in the attachment, though only a few CPUs (cpu 3, 4 and 10) were busy discarding pages, they were stuck grabbing the spin_lock. This is why I think contention on the object's radix tree lock is a problem. Another thing I'm quite sure of is that there must be tons of pages cached at the client side (because there is no cache limit as we had in b1_8), so the client had to take a lot of time to drain them. It seems it would be a lot of work to use a lockless pagecache in clio as the Linux kernel does. Maybe we can limit the number of cached pages at the client side so that a fast recovery is possible after an OST runs into a problem. Also, it may be interesting to see what's going on at the OST side; the client lost its connections to a couple of OSTs in a short time. |
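A few quick checks along those lines on the client, as a sketch (the parameter names are the usual Lustre client /proc tunables; their exact availability may vary with the client version):

    # how much dirty data and grant each OSC is holding
    lctl get_param osc.*.cur_dirty_bytes osc.*.cur_grant_bytes
    # device list: OSCs that have been deactivated show "IN" instead of "UP"
    lctl dl
    # overall page-cache footprint on the node
    grep -E '^(Cached|Dirty|Writeback):' /proc/meminfo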
| Comment by Alexandre Louvet [ 17/Jun/11 ] |
|
About cache size at the client side, I have a trace where the client 'only' has 3GB of cached data (not that much). Regarding time spent in various kernel code, we had time to run oprofile: 96% was spent in cl_page_gang_lookup, 0.7% in radix_tree_gang_lookup. |
| Comment by Jinshan Xiong (Inactive) [ 17/Jun/11 ] |
|
Can you please help me get the following data while the client node is hung? lctl get_param osc.*.rpc_stats Also, if possible, I'd like to know the state of all processes on the node (echo t > /proc/sysrq-trigger). I'd like to see this output repeatedly over time (see watch(1)) so I can tell what the processes are doing. I realize it will take a lot of time to implement a lockless pagecache, so I'd like to work out a workaround patch. Thank you so much. |
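One possible way to collect this, as a sketch (the log file names are arbitrary placeholders; watch(1) with output redirection works just as well):

    # sample the rpc stats and the task states every 60 s while the node is hung
    while true; do
        date >> /tmp/lu416-rpc_stats.log
        lctl get_param osc.*.rpc_stats >> /tmp/lu416-rpc_stats.log
        echo t > /proc/sysrq-trigger      # task states are dumped to the kernel log
        sleep 60
    done
    # afterwards, save the kernel log containing the sysrq-t dumps
    dmesg > /tmp/lu416-sysrq_t.log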
| Comment by Sebastien Buisson (Inactive) [ 20/Jun/11 ] |
|
Hi Jay, I have requested the data you are asking for from our on-site Support team. Sebastien. |
| Comment by Jinshan Xiong (Inactive) [ 21/Jun/11 ] |
|
It looks like there is an infinite loop problem in cl_lock_page_out(). I'm going to work out a patch to fix it. |
| Comment by Sebastien Buisson (Inactive) [ 22/Jun/11 ] |
|
OK thank you Jinshan, we are looking forward to your patch. Cheers, |
| Comment by Jinshan Xiong (Inactive) [ 22/Jun/11 ] |
|
Hi Sebastien, I'm sorry, I still haven't figured out the root cause of this issue, and there is a similar stack trace on ... It would be great if I could get that data, because I'd like to know whether the system is in a livelock state or keeps making progress. Anyway, it will be all right if we can reproduce it in our lab. Thanks, |
| Comment by Jinshan Xiong (Inactive) [ 22/Jun/11 ] |
|
Can you please try the patch at http://review.whamcloud.com/#change,911 if you have a test system? |
| Comment by Alexandre Louvet [ 23/Jun/11 ] |
|
Jinshan, waiting long enough gives the system time to make progress and complete, but it takes hours (even days). It doesn't look like a livelock (at least some processes complete). On Jun 16th, I got some numbers, in particular the number of locks assigned to the 'slow' client: only 8 OSCs had locks, and none of them had more than 78 locks. The amount of buffer cache at that time was around 3GB. Alex. |
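For the record, the kind of commands such numbers can be read from, as a sketch (the ldlm namespace glob may need adjusting for the local filesystem name):

    # per-OSC LDLM lock counts on the client
    lctl get_param ldlm.namespaces.*osc*.lock_count
    # cached data on the node
    grep -E '^(Cached|Dirty):' /proc/meminfo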
| Comment by Jinshan Xiong (Inactive) [ 23/Jun/11 ] |
|
So when this problem occurs, it takes too much time for the OSC to write out all cached pages. This may be due to a deficiency in the implementation of cl_page_gang_lookup(), which can definitely worsen contention on ->coh_page_guard and slow things down. |
| Comment by Bruno Faccini (Inactive) [ 04/Jul/11 ] |
|
Just one more comment which may demonstrate the current inefficiency of coh_page_guard/coh_tree (i.e., respectively the spin-lock/radix-tree data structures used to manage pages on a client) when dealing with concurrent access and a huge number of pages: "lctl set_param ldlm.namespaces.*.lru_size=clear" ... |
| Comment by Jinshan Xiong (Inactive) [ 05/Jul/11 ] |
|
Indeed, lru_size=clear drops all of the cached locks at the client side, which has the same effect as "echo 1 > drop_caches" plus evicting the client node. Actually, I'm working on this issue in LU-437; can you please try the latest patch at http://review.whamcloud.com/#change,911 to see if it works? |
| Comment by Peter Jones [ 05/Aug/11 ] |
|
Bull/CEA confirm that this issue was resolved by the LU-394 patch |