[LU-436] client refused reconnection, still busy with 1 active RPCs Created: 20/Jun/11  Updated: 04/Jun/12  Resolved: 04/Jun/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 1.8.6
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Joe Mervini Assignee: Lai Siyao
Resolution: Fixed Votes: 0
Labels: None
Environment:

Lustre 1.8.3 on LLNL chaos release 1.3.4 (2.6.18-93.2redsky_chaos). Redsky 2QoS torus IB network cluster, software raid on Oracle J4400 JBODs - RAID6 8+2 w/external journal & bitmap


Severity: 3
Epic: hang
Rank (Obsolete): 10072

 Description   

We are intermittently seeing problems with our scratch file system where any of its 24 OSS nodes becomes congested and essentially makes the file system unusable. We are trying to nail down the rogue user code or codes that seem to trigger it, but we believe the cause is a large number of small reads or writes to an OST.

Looking at dmesg we see a lot of "...refused reconnection, still busy with 1 active RPCs" messages, and the load on the system goes through the roof, typically with load averages greater than 400-500. Trying to do some forensics, we parsed the client nodes reported in dmesg, went to those nodes, and tried running lsof against the file system, which basically hung. Thinking these were good candidates, we powered them down, but that did not change the server's condition. As in the past, our only resolution was to power-cycle the OSS, which cleared everything.
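
For reference, a minimal triage sketch along the lines described above: it ranks client NIDs by how often they appear in the "refused reconnection" console messages, assuming the 1.8-style text "Client <uuid> (at <nid>) refused reconnection, still busy with N active RPCs". The pattern is an assumption and may need adjusting to the exact dmesg output.

    # rank_reconnect_refusals.py - feed it the OSS console log, e.g.:
    #   dmesg | python rank_reconnect_refusals.py
    import re
    import sys
    from collections import Counter

    # Assumed 1.8 message form; tweak if the local console text differs.
    pattern = re.compile(r"\(at ([^)]+)\) refused reconnection, still busy with \d+ active RPCs")

    counts = Counter()
    for line in sys.stdin:
        m = pattern.search(line)
        if m:
            counts[m.group(1)] += 1   # group(1) is the client NID

    # Most frequently refused clients first - likely candidates for closer inspection.
    for nid, n in counts.most_common(10):
        print(f"{n:6d}  {nid}")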

I was looking through bugs and it seems like it is very similar to LU-7 that Chris reported in November with the exception that in this case we are not going through a router.

I guess what I am looking (or hoping) for is some type of diagnostic that can help determine the source of the congestion, versus the sledgehammer approach of rebooting the server and causing a wider disruption.

I realize this issue is very general but I just wanted to get a dialog going. I have plenty of log data and lustre dumps if that would be helpful.



 Comments   
Comment by Peter Jones [ 20/Jun/11 ]

Lai

Could you please look into this one?

Thanks

Peter

Comment by Marek Magrys [ 21/Jun/11 ]

Hello,

We also observed a similar problem some time ago, so I can confirm that it exists. It is not easily reproducible; one of our users is probably triggering it, but it is hard to identify which one, at least for now.

Marek

Comment by Lai Siyao [ 22/Jun/11 ]

Hi Joe,

Could you check out bz22423, and verify whether the fix for 1.8.3 is included in your build?

IMHO with that fix the OSS won't be throttled by reconnects, though clients may still be wrongly evicted due to LU-7.

- Lai

Comment by Joe Mervini [ 25/Jun/11 ]

I can confirm that the 1.8.3 patch referenced in bz22423 has been applied to our build of Lustre.

We had the problem reappear three times yesterday, which hung the file system and required server reboots to clear. We believe we have narrowed the cause down to two different codes and have had the users move their working directories to another Lustre file system that is not using software RAID.

In talking with one of the users, he characterized his IO as basically 1000 threads writing 512k chunks of data to a file. In my mind that, coupled with the overhead of software RAID, could possibly cause the overload on the server. Does that sound reasonable? Regardless of this IO pattern, I think Lustre should be able to deal with this type of event more gracefully.
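
As a rough illustration of the software RAID overhead mentioned above, the arithmetic below checks whether 512 KB writes line up with a full RAID6 8+2 stripe. The md chunk size used here is hypothetical (it is not stated in this ticket); the point is only that partial-stripe writes force a read-modify-write on the OSS, which inflates per-request service time under this kind of load.

    # raid6_stripe_check.py - back-of-envelope stripe arithmetic, hypothetical chunk size.
    data_disks = 8        # RAID6 8+2: eight data disks plus two parity disks
    chunk_kb = 128        # assumed md chunk size; not taken from this ticket
    write_kb = 512        # write size reported by the user

    full_stripe_kb = data_disks * chunk_kb
    aligned = write_kb % full_stripe_kb == 0

    print(f"full stripe = {full_stripe_kb} KB")
    print("512 KB writes are full-stripe" if aligned
          else "512 KB writes are partial-stripe and incur read-modify-write")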

With regard to the file system, we are running Lustre with pretty much default settings (i.e., we haven't done any real tuning beyond best practices, because the IO patterns on our clusters vary widely). The default maximum thread count on our OSS servers is 512. In the past it was suggested that we reduce that number, but the problems at the time turned out to be bugs that were resolved in patches, so we never made that boot-time adjustment. If there is a way to minimize the potential for file system hangs via boot- or run-time adjustments, a slower file system is better than an unusable one. I would just need some guidance on which tunables to adjust and how they should be set.
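
A minimal sketch of the kind of runtime adjustment being asked about, assuming the Lustre 1.8 procfs path /proc/fs/lustre/ost/OSS/ost_io/threads_max (the file behind "lctl set_param ost.OSS.ost_io.threads_max=N"). The OST count per OSS and the 32-threads-per-OST starting point (from the comment below) are assumptions to validate on a test server first; the persistent boot-time knob would be the ost module option oss_num_threads in modprobe.conf.

    # cap_oss_threads.py - run as root on the OSS; values below are assumptions.
    OSTS_PER_OSS = 6          # hypothetical OST count on this OSS
    THREADS_PER_OST = 32      # starting point suggested in the comment below
    TUNABLE = "/proc/fs/lustre/ost/OSS/ost_io/threads_max"

    target = OSTS_PER_OSS * THREADS_PER_OST

    with open(TUNABLE) as f:
        print("current threads_max:", f.read().strip())

    # Lowering the limit may not stop threads that are already running; it mainly
    # prevents the pool from growing further until the next reboot.
    with open(TUNABLE, "w") as f:
        f.write(str(target))

    print("threads_max capped at", target)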

Comment by Andreas Dilger [ 27/Jun/11 ]

I recall from several rounds of testing on Snowbird OSS nodes that the optimum thread count was around 32 per OST, though it isn't possible to limit how many threads are accessing a single OST if the striping is imbalanced.

Comment by Cliff White (Inactive) [ 17/Apr/12 ]

Do we need anything further on this bug, or can this issue be closed?

Comment by Joe Mervini [ 17/Apr/12 ]

Cliff - yes, we should close this. We are running 1.8.5 on these particular file systems, and even though we continue to see load issues, we plan to decommission our software RAID Lustre file systems in the near future.
