
client refused reconnection, still busy with 1 active RPCs

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • None
    • Fix Version: Lustre 1.8.6
    • None
    • Environment: Lustre 1.8.3 on LLNL chaos release 1.3.4 (2.6.18-93.2redsky_chaos). Redsky 2QoS torus IB network cluster, software raid on Oracle J4400 JBODs - RAID6 8+2 w/external journal & bitmap
    • 3
    • 10072

    Description

      We are intermittently seeing problems with our scratch file system where any of our 24 OSS nodes becomes congested and essentially makes the file system unusable. We are trying to nail down the rogue user code or codes that seem to trigger it, but we believe the cause is a large number of small reads or writes to an OST.

      Looking at dmesg we see a lot of "...refused reconnection, still busy with 1 active RPCs" messages, and the load on the system goes through the roof, typically with load averages greater than 400-500. Trying to do some forensics, we parsed the client nodes reported in dmesg, went to those nodes, and tried running lsof on the file system, which basically hung. Thinking these were good candidates, we powered them down, but it did not change any of the server conditions. As in the past, our only resolution was to power-cycle the OSS, which cleared everything.
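
      A minimal sketch, assuming the refusal message contains a client NID such as 192.168.1.5@o2ib (the exact console line format varies by Lustre version), of how that dmesg output could be tallied per client:

      #!/usr/bin/env python
      # Tally client NIDs named in "refused reconnection, still busy with N
      # active RPCs" console messages.  The regex is an assumption about the
      # message format and may need adjusting against real dmesg output.
      import re
      import subprocess
      from collections import Counter

      REFUSED_RE = re.compile(
          r'(?P<nid>\S+@\w+).*refused reconnection, still busy with '
          r'(?P<rpcs>\d+) active RPCs')

      def tally_refused(dmesg_text):
          """Return a Counter of client NIDs that were refused reconnection."""
          counts = Counter()
          for line in dmesg_text.splitlines():
              m = REFUSED_RE.search(line)
              if m:
                  counts[m.group('nid')] += 1
          return counts

      if __name__ == '__main__':
          dmesg = subprocess.check_output(['dmesg']).decode('utf-8', 'replace')
          for nid, count in tally_refused(dmesg).most_common(20):
              print('%6d  %s' % (count, nid))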

      I was looking through bugs and it seems like it is very similar to LU-7 that Chris reported in November with the exception that in this case we are not going through a router.

      I guess what I am looking for, or hoping for, is some type of diagnostic that can help determine the source of the congestion, versus the sledgehammer approach of rebooting the server and causing a wider disruption.
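
      One lightweight diagnostic along those lines is to snapshot the OSS I/O service counters under /proc rather than rebooting. The sketch below assumes a 1.8-era layout (/proc/fs/lustre/ost/OSS/ost_io); the paths and counter names may differ on other versions, so treat it as a starting point rather than a reference.

      #!/usr/bin/env python
      # Snapshot thread usage and request-queue counters for the OSS ost_io
      # service.  Paths and counter names assume a Lustre 1.8-style /proc
      # layout and may need adjusting on other releases.
      import os

      SERVICE_DIR = '/proc/fs/lustre/ost/OSS/ost_io'

      def read_file(name):
          path = os.path.join(SERVICE_DIR, name)
          try:
              with open(path) as f:
                  return f.read().strip()
          except IOError:
              return None  # entry not present on this version/build

      if __name__ == '__main__':
          # Thread usage: how close the service is to its configured maximum.
          for name in ('threads_min', 'threads_started', 'threads_max'):
              print('%-16s %s' % (name, read_file(name)))

          # Request queue depth and wait times give a rough picture of congestion.
          stats = read_file('stats') or ''
          for line in stats.splitlines():
              if line.startswith(('req_waittime', 'req_qdepth', 'req_active')):
                  print(line)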

      I realize this issue is very general but I just wanted to get a dialog going. I have plenty of log data and lustre dumps if that would be helpful.

          Activity

            [LU-436] client refused reconnection, still busy with 1 active RPCs
            jamervi Joe Mervini added a comment -

            Cliff - yes, we should close this. We are running 1.8.5 on these particular file systems, and even though we continue to see load issues, we have plans to decommission our software RAID Lustre file systems in the near future.


            cliffw Cliff White (Inactive) added a comment -

            Do we need anything further on this bug, or can this issue be closed?

            adilger Andreas Dilger added a comment -

            I recall from several rounds of testing on Snowbird OSS nodes that the optimum thread count was around 32 per OST, though it isn't possible to limit the number of threads accessing a single OST if the striping is imbalanced.
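
            A rough sizing sketch based on that ~32-threads-per-OST observation follows; it assumes a 1.8-era /proc layout and that ost_io threads_max is writable at runtime, and a persistent setting would more usually be made with the ost module's oss_num_threads option. Verify both assumptions on your own systems before applying anything.

            #!/usr/bin/env python
            # Count the OSTs served by this OSS and derive a matching ost_io
            # thread ceiling (~32 threads per OST).  The /proc paths and the
            # writability of threads_max are assumptions about a 1.8-style
            # layout; check them before use.
            import glob
            import os
            import sys

            THREADS_PER_OST = 32
            THREADS_MAX = '/proc/fs/lustre/ost/OSS/ost_io/threads_max'

            def local_ost_count():
                # Each OST on this server shows up as an obdfilter device.
                return len(glob.glob('/proc/fs/lustre/obdfilter/*-OST*'))

            if __name__ == '__main__':
                osts = local_ost_count()
                # Fall back to one OST's worth of threads if no OSTs are found.
                suggested = max(THREADS_PER_OST, THREADS_PER_OST * osts)
                print('OSTs on this OSS: %d, suggested ost_io threads_max: %d'
                      % (osts, suggested))
                if '--apply' in sys.argv and os.path.exists(THREADS_MAX):
                    # Lowering threads_max only takes effect as running threads exit.
                    with open(THREADS_MAX, 'w') as f:
                        f.write(str(suggested))
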
            jamervi Joe Mervini added a comment -

            I can confirm that the 1.8.3 patch referenced in bz22423 has been applied to our build of Lustre.

            We had the problem reappear 3 times yesterday, which hung the file system and required server reboots to clear. We believe we have narrowed the cause down to 2 different codes and have had the users move their working directories to another Lustre file system that is not using software RAID.

            In talking with one of the users, he characterized his IO as basically 1000 threads writing 512k chunks of data to a file. In my mind that, coupled with the overhead of software RAID, could possibly cause the overload on the server. Does that sound reasonable? Regardless of this IO pattern, I think that Lustre should be able to deal with this type of event more gracefully.

            With regard to the file system, we are running Lustre with pretty much default settings (i.e., we haven't done any real tuning of the file system beyond best practices, because the IO patterns on our clusters vary widely). Our default max threads on our OSS servers is 512. In the past it was suggested that we reduce that number, before it was determined the problems were bugs that were resolved in patches, so we have never made that boot-time adjustment. But if there is a way to minimize the potential for file system hangs via boot/runtime adjustments, a slower file system is better than an unusable one. I would just need some guidance on which tunables to adjust and how they should be set.

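            For reference, a minimal sketch of the I/O pattern described above (many threads writing 512 KB chunks into one shared file) might look like the following; the thread and chunk counts and the target path are placeholders, and the real workload reportedly used around 1000 threads.

            #!/usr/bin/env python
            # Illustrative sketch (not the user's actual code) of the reported
            # I/O pattern: many threads writing 512 KB chunks into one file.
            # Scale the counts carefully; this kind of load is exactly what
            # appeared to overwhelm the OSS.
            import os
            import threading

            NUM_THREADS = 64          # the reported workload used ~1000
            CHUNKS_PER_THREAD = 16
            CHUNK_SIZE = 512 * 1024   # 512 KB chunks, as described above
            TARGET = './shared_file.dat'  # point this at the Lustre scratch FS

            def writer(tid, fd):
                buf = b'x' * CHUNK_SIZE
                for i in range(CHUNKS_PER_THREAD):
                    # Each thread writes to its own region of the shared file.
                    offset = (tid * CHUNKS_PER_THREAD + i) * CHUNK_SIZE
                    os.pwrite(fd, buf, offset)

            if __name__ == '__main__':
                fd = os.open(TARGET, os.O_CREAT | os.O_WRONLY, 0o644)
                threads = [threading.Thread(target=writer, args=(t, fd))
                           for t in range(NUM_THREADS)]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                os.close(fd)
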
            laisiyao Lai Siyao added a comment -

            Hi Joe,

            Could you check out bz22423 and verify that the fix for 1.8.3 is included in your build?

            IMHO with that fix, the OSS won't be throttled by reconnects, though the client may still be wrongly evicted due to LU-7.

            Lai
            m.magrys Marek Magrys added a comment -

            Hello,

            We also observed a similar problem some time ago, so I can confirm that the problem exists. It is not easily reproducible; one of our users is probably triggering it, but it is hard to identify which one, at least for now.

            Marek

            pjones Peter Jones added a comment -

            Lai

            Could you please look into this one?

            Thanks

            Peter


            People

              Assignee: laisiyao Lai Siyao
              Reporter: jamervi Joe Mervini
              Votes: 0
              Watchers: 2
