
incorrect peer nids with discovery enabled

Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.12.0
    • Environment: ARM clients: kernel 4.14.0-115.2.2.el7a.aarch64 MLNX_OFED_LINUX-4.5-1.0.1.0 (OFED-4.5-1.0.1)
      x86 servers: rhel 7.6, same mofed
      2.12.0 no patches
    • Severity: 3

    Description

      The client-side symptom is alternating success/failure of lnet ping to an OSS. On the OSS we see:

      # lnetctl peer show --nid n1-ib0@o2ib
      peer:
          - primary nid: xxx.xxx.xxx.17@o2ib
            Multi-Rail: True
            peer ni:
              - nid: xxx.xxx.xxx.17@o2ib
                state: NA
              - nid: xxx.xxx.xxx.182@o2ib
                state: NA
      # lnetctl peer show --nid n2-ib0@o2ib
      peer:
          - primary nid: xxx.xxx.xxx.17@o2ib
            Multi-Rail: True
            peer ni:
              - nid: xxx.xxx.xxx.17@o2ib
                state: NA
              - nid: xxx.xxx.xxx.182@o2ib
                state: NA

      where n1 has an IP address ending in 182, and n2 has an IP address ending in 17.
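
      For reference, this is roughly what the client-side check looks like; oss1-ib0@o2ib is a placeholder for the OSS nid, and the exact failure output varies:

      # from an affected client
      lctl ping oss1-ib0@o2ib    # sometimes returns the server's nids
      lctl ping oss1-ib0@o2ib    # sometimes fails outright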

      The result in the logs is lots of timeouts, PUT NAKs, mount failures, and general chaos, including plenty of the following message:

      kernel: LustreError: 21309:0:(events.c:450:server_bulk_callback()) event type 3, status -61, desc ffff9c46e0303200 

      The logs led us to believe there were IB problems, but the fabric was found to be clean and responsive between the affected client nodes and OSS servers.

      Planning to turn off discovery going forward. I'll leave a few clients drained for a while in case there is info you might need.
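
      (For reference, discovery can be disabled at runtime and via module options; the sketch below is what I believe applies to 2.12, so double-check the parameter name against your build.)

      # at runtime, on each node:
      lnetctl set discovery 0
      lnetctl global show        # should now report discovery disabled
      # persistently, assuming the module parameter is named as below:
      # options lnet lnet_peer_discovery_disabled=1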

      FYI, rebooting the client does not change the behavior, but rebooting the server clears it. Also, manually deleting the incorrect peer nid on the server and re-adding the correct info for the missing peer clears it.
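
      (For reference, a rough sketch of that manual server-side fix, using the placeholder addresses from above; I believe the 2.12 lnetctl syntax is as follows.)

      # on the affected OSS: drop the wrong peer ni (n1's address) from n2's peer entry
      lnetctl peer del --prim_nid xxx.xxx.xxx.17@o2ib --nid xxx.xxx.xxx.182@o2ib
      # re-add n1 as its own peer with the correct primary nid
      lnetctl peer add --prim_nid xxx.xxx.xxx.182@o2ib
      # confirm
      lnetctl peer show --nid xxx.xxx.xxx.182@o2ib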

      Also, the clients are running Mellanox Socket Direct, but only one IPoIB interface is configured and in use by lnet.


          Activity

            pjones Peter Jones added a comment -

            The fix is included in 2.12.3 which is in release testing ATM

            ashehata Amir Shehata (Inactive) added a comment - - edited

            Yes. I added it to the list of patches we ought to port to b2_12 here LU-12666

            pjones Peter Jones added a comment -

            Amir

            Should we look to include the fix for LU-11478 onto b2_12 as a first step?

            Peter


            schamp Stephen Champion added a comment -

            I talked to Amir about this at LUG.

            Amir pointed to queued patches, and suggested that the patches submitted for LU-11478 may address this. From my reading, I think that fixing LU-11478 will allow the problem to be corrected automatically, but I am not sure that it will actually prevent the problem from occurring.

            "The problem" being insertion of the wrong nid to a peer's list of NIs. I think there is a fair chance to find this through code inspection by someone more familiar with the peer discovery process. My suspicion is that a peer node id is held by an unlocked reference as an peer ni is inserted. I noticed LU-12264 in the queued patches - it looks very close, but I'm not sure that this specific problem is addressed.

            I suspect that the number of peer entries on the host is a factor, so reproducing the problem may not be possible without a large cluster.

            ruth.klundt@gmail.com Ruth Klundt (Inactive) added a comment - - edited

            Software stack as described above.

            2 MDS nodes, 1 MDT each. 40 OSS nodes, 2 ZFS OSTs each. Module params:

            servers:
            options lnet networks=o2ib0(ib0)
            options ko2iblnd map_on_demand=16
            options ko2iblnd timeout=100
            options ko2iblnd credits=512
            clients:
            options lnet live_router_check_interval=60
            options lnet dead_router_check_interval=60
            options lnet check_routers_before_use=1
            options ko2iblnd timeout=100
            options ko2iblnd credits=512

            Steps before seeing this issue: basically, reboot 2500+ nodes, some of them multiple times.

            Out of the 2500+ clients I found the error on 4 pairs of nodes, one pair on each of 4 distinct servers out of the 40 OSS nodes.

            Some relevant items:

            The servers and clients are running the Mellanox stack listed above. As I said above, the servers have only one port into the IB network. The clients also have only one port, which appears as two due to the use of the Mellanox Socket Direct feature. Only one of the ports is configured for use by lnet, and in fact only one (ib0) has an IPoIB address configured. So if the code is automatically detecting another port via some IB call and assuming it can be used, that is an error. I can imagine that bad assumption leading to the addition of an inappropriate nid from some list to the wrong client under high-load conditions, exposing a concurrency flaw not previously seen.
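
            For reference, this is roughly how the single configured NI shows up on a client; the output below is abbreviated and the address is a placeholder:

            # on a client; only ib0 should appear under the o2ib net
            lnetctl net show
            net:
                - net type: o2ib
                  local NI(s):
                    - nid: xxx.xxx.xxx.182@o2ib
                      status: up
                      interfaces:
                          0: ib0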

            It seems unlikely to be reproducible on a VM with tcp, and I'm afraid a reproducer would have to come from HPE internal testing, if it is possible at all. The machine where this occurred is in production with Dynamic Discovery disabled. It's likely that someone familiar with Dynamic Discovery is going to have to look at the code and decipher how that feature is doing the automatic detection.

            I think my point here is that nothing on this machine should have been detected as multirail. 

            If you have other more specific questions about the node configuration I'll be glad to answer what I can.

             


            mtimmerman Mike Timmerman added a comment -

            I have asked Ruth Klundt <rklundt@sandia.gov> and Steve Champion <stephen.champion@hpe.com> to weigh in on this, as they were the ones who configured the system and discovered the problem.

            Thanks.


            sharmaso Sonia Sharma (Inactive) added a comment -

            Hi Mike,

            I tried to reproduce this on my local VM setup (with tcp) and could not reproduce the issue. When I ran "lnetctl peer show", it showed the nids of different peers under their own primary nids.

            Can you please describe how you have configured the nodes, and the exact steps, starting from node configuration, that lead to the issue where the nids of different peers show up under the primary nid of one peer?

            Thanks


            mtimmerman Mike Timmerman added a comment -

            Sonia,

            This bug was reported as HP-235 and LU-12197, both of which were originally entered as minor for our customer. It has now become a major issue. Can you change the priority to major, and has anything been done with the problem so far?

            Thanks.

            Mike

            pjones Peter Jones added a comment -

            Sonia

            Could you please assist with this issue?

            Thanks

            Peter

            pjones Peter Jones added a comment -

            Thanks Ruth


            People

              Assignee:
              ashehata Amir Shehata (Inactive)
              Reporter:
              ruth.klundt@gmail.com Ruth Klundt (Inactive)
              Votes:
              0
              Watchers:
              7
