[LU-6531] Fujitsu's o2iblnd Channel Bonding Solution Created: 27/Apr/15  Updated: 16/Sep/17  Due: 30/Jun/15  Resolved: 16/Sep/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major
Reporter: Amir Shehata (Inactive) Assignee: Amir Shehata (Inactive)
Resolution: Won't Do Votes: 0
Labels: None

Attachments: Fujitsu_Channel_Bonding.pdf (PDF)
Issue Links:
    Duplicate: duplicates LU-9480 "LNet Dynamic Discovery" (Resolved)
    Related: is related to LU-6480 "leak cmid in kiblnd_dev_need_failover" (Resolved)

 Description   

Work on Fujitsu's o2iblnd channel bonding solution.



 Comments   
Comment by Amir Shehata (Inactive) [ 27/Apr/15 ]

Duplicate of LU-6495

Comment by Gerrit Updater [ 28/Apr/15 ]

Amir Shehata (amir.shehata@intel.com) uploaded a new patch: http://review.whamcloud.com/14625
Subject: LU-6531 lnet: Fujitsu's Channel Bonding Solution
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 2528a0d791fe8c6f2b046905725d12ec56c9bf6a

Comment by Hiroya Nozaki [ 22/May/15 ]

Sorry, I made a mistake. Please forget about the above patch.

Comment by Gerrit Updater [ 06/Jun/15 ]

Amir Shehata (amir.shehata@intel.com) uploaded a new patch: http://review.whamcloud.com/15170
Subject: LU-6531 lnet: DLC interface for o2iblnd Channel Bonding
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: db91f5a2fb066a316e383807ad8fec1633237a55

Comment by Frederic Saunier [ 12/Jun/15 ]

I've experimented with the LNet channel bonding patch using lnet-selftest, with the following configuration:

  • 4 clients, each with a single IB interface
  • 1 server with two IB interfaces
  • all IB interfaces connected to the same switch

Here are the test results:

    size=1M duration=10 check= concurrency=16
    LNet data bandwidth of all the servers (MB/s)

    #clients   write    read
           1    5610    5816
           2   10773   10155
           3   12069    6358
           4   12044    6332

The write figures are good with respect to the hardware capabilities, but I'm puzzled by the read figures.
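
For anyone trying to reproduce numbers like these, a minimal lnet-selftest session along the following lines should work. This is a sketch only: the group names and NIDs are placeholders rather than Frederic's actual addresses, and a matching read pass would use brw read in the add_test line.

    # Minimal lnet-selftest sketch matching the parameters above
    # (size=1M, concurrency=16). All NIDs below are placeholders.
    modprobe lnet_selftest                       # load on every node involved

    export LST_SESSION=$$                        # lst requires a session id
    lst new_session bonding_test
    lst add_group servers 10.0.0.1@o2ib0         # placeholder: dual-HCA server
    lst add_group clients 10.0.0.[2-5]@o2ib0     # placeholder: the 4 clients
    lst add_batch bulk_rw
    lst add_test --batch bulk_rw --concurrency 16 \
        --from clients --to servers brw write size=1M
    lst run bulk_rw
    lst stat servers                             # interrupt after ~10 s (duration=10)
    lst stop bulk_rw
    lst end_session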

Comment by Amir Shehata (Inactive) [ 18/Jun/15 ]

Hi Frederic,

Is it possible to share your full configuration?

Thanks,
Amir

Comment by Frederic Saunier [ 23/Jun/15 ]

The server's lnet.conf is:

    options lnet networks=o2ib0(ib0,ib1)

and each client has the following lnet.conf:

    options lnet networks=o2ib0(ib0)

All nodes were using the same lustre_o2ibs_config input file:

    10.1.0.41@o2ib0 10.1.0.41 10.1.0.101
    10.1.0.31@o2ib0 10.1.0.31
    10.1.0.32@o2ib0 10.1.0.32
    10.1.0.35@o2ib0 10.1.0.35

I also tried changing the ko2iblnd parameters on all nodes:

    options ko2iblnd credits=2048 peer_credits=126 concurrent_sends=63 peer_buffer_credits=128

Comment by Amir Shehata (Inactive) [ 24/Jun/15 ]

Attached is a high-level design document I wrote based on the Fujitsu Channel Bonding solution. It also describes the new DLC interface I added to configure it.

Comment by Olaf Weber [ 25/Jun/15 ]

The design document is very useful, thanks.

I do have one concern: the code looks through lists of routes while holding a spinlock and with interrupts disabled on the CPU (spin_lock_irqsave() and friends). This will definitely be a problem if these lists become large, because a system becomes unstable if one or more of the CPU cores runs for a long time with interrupts disabled.

Trying to figure out how large these lists can become: if we have a cluster with N clients, M MDSs, and O OSSs, ignoring routers and assuming just one interface per system, I get something like this:

  • on a client: M + O
  • on an MDS: N + M + O
  • on an OSS: N + M

This shouldn't be much of a problem in a small cluster, but in a large cluster it is the MDS and OSS in particular that end up with large lists: with, say, 10,000 clients, each MDS and OSS would walk a list of over 10,000 entries with interrupts disabled. So my concern is that there is a scaling problem that will render the MDS and OSS unstable in large clusters, yet remain invisible in the small clusters typically used for testing.

Comment by Christopher Morrone [ 25/Jun/15 ]

To take a step back for a moment, I think we need to have a good answer to the following question:

Why is implementing channel bonding at the LND level the right thing to do rather than implementing channel bonding at the LNet level?

It is not clear to me that the current configuration approach, implemented at the LND level, is very robust or system-administration friendly, and fixing that doesn't seem easy since this implementation is all hacked into a single LND component. I can envision a system at the LNet level that would be much easier for system administrators to work with (because NIDs are already shared between nodes). I am also concerned about how credits at the LNet layer will interact with multiple invisible peer connections in the LND layer.

Comment by Doug Oucharek (Inactive) [ 30/Jun/15 ]

Hi Chris,

I agree with the validity of your question/concerns. I would add one other potential issue with this solution: it does not allow the bonded HCAs to be on different networks. That means it cannot protect you from a switch failure. An LNet layer solution could be developed to allow HCAs on different networks to be bonded.

When we first started looking at channel bonding, the solution was based in the LNet layer. That was put on hold when the Fujitsu solution came to light as something already running in production and being offered to the community tree. The philosophy was that we are better off with a known, working solution than putting in the effort to redo a new one (an unknown). Also, having two solutions would muddy the Lustre waters, so with the Fujitsu solution coming to the community tree, we backed off our own approach.

Now, seeing the patch comments and your concerns, I feel we should re-review which solution is favoured for the community tree.

For now, Intel is going to back off pushing this patch (and the DLC adaptation patch), given that (1) it won't make the 2.8 feature freeze, and (2) we need to reconsider what the proper solution is. The patches are left in Gerrit for the community's consideration, as guidance on one way to approach channel bonding.

Comment by Andreas Dilger [ 16/Sep/17 ]

Closing this ticket, as LNet multi-rail support landed in 2.10.
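
For reference, the multi-rail feature configures multiple interfaces on one LNet network through the lnetctl/DLC interface rather than through LND-level bonding. A minimal sketch for a dual-HCA node like the server in the tests above (the interface names are illustrative):

    # Multi-rail (Lustre 2.10+): both HCAs become NIDs on the same net
    lnetctl lnet configure
    lnetctl net add --net o2ib0 --if ib0,ib1   # ib0/ib1 are illustrative names
    lnetctl net show --verbose                 # confirm both NIDs are configured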
