Fujitsu's o2iblnd Channel Bonding Solution

Details

    • Type: New Feature
    • Resolution: Won't Do
    • Priority: Major

    Description

      Work on Fujitsu's o2iblnd channel bonding solution.

          Activity

            [LU-6531] Fujitsu's o2iblnd Channel Bonding Solution

            Closing this ticket, as LNet multi-rail support landed in 2.10.

            adilger Andreas Dilger added a comment

            Hi Chris,

            I agree with the validity of your question/concerns. I would add one other potential issue with this solution: it does not allow the bonded HCAs to be on different networks. That means it cannot protect you from a switch failure. An LNet layer solution could be developed to allow HCAs on different networks to be bonded.

            When we first started looking at channel bonding, the solution was based in the LNet layer. That was put on hold when the Fujitsu solution came to light as something already running in production and being offered to the community tree. The philosophy was: we are better off with a known working solution than putting in the effort to redo a new solution (an unknown). Also, having two solutions would muddy the Lustre waters, so with the Fujitsu solution coming to the community tree we backed off our own approach.

            Now, seeing the patch comments and your concerns, I feel we should re-review which solution is favoured for the community tree.

            For now, Intel is going to back off pushing this patch (and the DLC adaptation patch) given that 1) it won't make the 2.8 feature freeze, and 2) we need to reconsider what the proper solution is. The patches are left in Gerrit for community consideration and as guidance on one way to approach channel bonding.

            doug Doug Oucharek (Inactive) added a comment

            To take a step back for a moment, I think we need to have a good answer to the following question:

            Why is implementing channel bonding at the LND level the right thing to do rather than implementing channel bonding at the LNet level?

            It is not clear to me that the current configuration approach when done at the LND level is very robust or system administration friendly, and ways to fix that don't seem terribly easy to do since this implementation is all hacked into a single LND component. I think that I can envision a system at the LNet level that would be much easier for system administrators to work with (because NIDs are already shared between nodes). I am also concerned about how credits at the LNet layer are going to interact with multiple invisible peer connections in the LND layer.

            morrone Christopher Morrone (Inactive) added a comment - edited

            The design document is very useful, thanks.

            I do have one concern: the code looks through lists of routes while holding a spinlock and with interrupts disabled on the CPU (spin_lock_irqsave() and friends). This will definitely be a problem if these lists become large, because a system becomes unstable if one or more of the CPU cores runs for a long time with interrupts disabled.
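
            To make the pattern concrete, here is a minimal sketch of the kind of loop being described; it is not taken from the patch, and the structure and function names are made up. The point is simply that interrupts stay disabled on the CPU for the entire walk of the list, so the interrupt-off time grows with the number of entries.

                /* Illustrative sketch only; hypothetical names, not the patch's symbols. */
                #include <linux/list.h>
                #include <linux/spinlock.h>

                struct bond_route {
                        struct list_head        br_list;        /* linkage on the route list */
                        int                     br_weight;      /* illustrative selection state */
                };

                static LIST_HEAD(bond_route_list);
                static DEFINE_SPINLOCK(bond_route_lock);

                static struct bond_route *bond_route_select(void)
                {
                        struct bond_route *br, *best = NULL;
                        unsigned long flags;

                        /* Interrupts are off on this CPU for the whole walk, i.e. for a
                         * time proportional to the number of routes/peers in the list. */
                        spin_lock_irqsave(&bond_route_lock, flags);
                        list_for_each_entry(br, &bond_route_list, br_list) {
                                if (best == NULL || br->br_weight < best->br_weight)
                                        best = br;
                        }
                        spin_unlock_irqrestore(&bond_route_lock, flags);

                        return best;
                }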

            Trying to figure out how large these lists can become: if we have a cluster with N clients, M MDSes, and O OSSes, ignoring routers and assuming just one interface per system, I get something like this:

            • on a client: M + O
            • on an MDS: N + M + O
            • on an OSS: N + M

            This shouldn't be much of a problem in a small cluster, but in a large cluster it would be the MDS and OSS in particular that have large lists. So my concern is that there is a scaling problem that will render MDS and OSS unstable in large clusters, but will be invisible in the small clusters typically used for testing.

            olaf Olaf Weber (Inactive) added a comment

            This is a high-level design document I wrote based on the Fujitsu Channel Bonding solution. It also describes the new DLC interface I added to configure the Channel bonding solution.

            ashehata Amir Shehata (Inactive) added a comment

            The server's lnet.conf is:
                options lnet networks=o2ib0(ib0,ib1)
            and each client has the following lnet.conf:
                options lnet networks=o2ib0(ib0)
            All nodes were using the same lustre_o2ibs_config input file:
                10.1.0.41@o2ib0 10.1.0.41 10.1.0.101
                10.1.0.31@o2ib0 10.1.0.31
                10.1.0.32@o2ib0 10.1.0.32
                10.1.0.35@o2ib0 10.1.0.35
            I also tried changing the ko2iblnd parameters on all nodes:
                options ko2iblnd credits=2048 peer_credits=126 concurrent_sends=63 peer_buffer_credits=128

            FSaunier Frederic Saunier (Inactive) added a comment

            hi Frederic,

            Is it possible to share your full configuration?

            thanks
            amir

            ashehata Amir Shehata (Inactive) added a comment

            I've experimented with the LNet channel bonding solution patch using lnet-selftest, with the following configuration:

            • 4 clients having a single IB interface
            • 1 server having two IB interfaces
            • all IB interfaces connected to the same switch

            Here are the test results:

                size=1M duration=10 check= concurrency=16
                LNet data bandwidth of all the servers (MB/s)

                #clients   write    read
                1           5610    5816
                2          10773   10155
                3          12069    6358
                4          12044    6332

            Write figures are good with respect to the hardware capabilities, but I'm puzzled by the read figures.
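
            For reference, a run like this can be driven with the lst utility from the lnet_selftest module. The sketch below is illustrative only: the group names are made up, the NIDs are taken from the configuration posted in this ticket, and I am assuming the standard brw test with the size and concurrency values quoted above.

                # modprobe lnet_selftest on every node first, then from the console node:
                export LST_SESSION=$$
                lst new_session rw_bonding
                lst add_group servers 10.1.0.41@o2ib0
                lst add_group clients 10.1.0.31@o2ib0 10.1.0.32@o2ib0 10.1.0.35@o2ib0
                lst add_batch bulk_rw
                lst add_test --batch bulk_rw --concurrency 16 \
                    --from clients --to servers brw write size=1M
                lst run bulk_rw
                lst stat servers & sleep 10; kill $!     # sample bandwidth for ~10 seconds
                lst stop bulk_rw
                lst end_session
                # the read case is the same with "brw read" in place of "brw write"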

            FSaunier Frederic Saunier (Inactive) added a comment

            Amir Shehata (amir.shehata@intel.com) uploaded a new patch: http://review.whamcloud.com/15170
            Subject: LU-6531 lnet: DLC interface for o2iblnd Channel Bonding
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: db91f5a2fb066a316e383807ad8fec1633237a55

            gerrit Gerrit Updater added a comment

            Sorry I made a mistake. Please forget about the above patch.

            nozaki Hiroya Nozaki (Inactive) added a comment

            People

              Assignee: ashehata Amir Shehata (Inactive)
              Reporter: ashehata Amir Shehata (Inactive)
              Votes: 0
              Watchers: 20
