Lustre / LU-12274

Clients aren't connecting to OST defined failover.node

Details

    • Type: Question/Request
    • Resolution: Unresolved
    • Priority: Blocker
    • Affects Version/s: Lustre 2.10.2
    • Environment: Servers: Lustre-2.10.2, Kernel: 3.10.0-693.5.2.el7_lustre.x86_64
      Clients: Lustre-2.10.3, Kernel: 3.10.0-693.21.1.el7.x86_64
      Client/Server OS: CentOS Linux release 7.4.1708

    Description

      I tried running tunefs.lustre and successfully changed the failover NIDs to what they should be. This problem is happening on several OSTs, but fixing one should fix them all.

      I'm assuming I forgot a step when I ran tunefs.lustre.

      tunefs.lustre --erase-param failover.node --param failover.node=172.17.1.103@o2ib,172.16.1.103@tcp1 /dev/mapper/mpathg

      The OST OST0017 is mounted on 172.17.1.103 with the following parameters:

      [root@apslstr03 ~]# tunefs.lustre --dryrun /dev/mapper/mpathg
      checking for existing Lustre data: found
      Reading CONFIGS/mountdata

         Read previous values:
      Target:     lustrefc-OST0017
      Index:      23
      Lustre FS:  lustrefc
      Mount type: ldiskfs
      Flags:      0x2
                    (OST )
      Persistent mount opts: ,errors=remount-ro
      Parameters:  failover.node=172.17.1.103@o2ib,172.16.1.103@tcp1 mgsnode=172.17.1.112@o2ib,172.16.1.112@tcp1 mgsnode=172.17.1.113@o2ib,172.16.1.113@tcp1

         Permanent disk data:
      Target:     lustrefc-OST0017
      Index:      23
      Lustre FS:  lustrefc
      Mount type: ldiskfs
      Flags:      0x2
                    (OST )
      Persistent mount opts: ,errors=remount-ro
      Parameters:  failover.node=172.17.1.103@o2ib,172.16.1.103@tcp1 mgsnode=172.17.1.112@o2ib,172.16.1.112@tcp1 mgsnode=172.17.1.113@o2ib,172.16.1.113@tcp1

      exiting before disk write.
      [root@apslstr03 ~]#

      However, the clients are still displaying errors like this:

      May  8 11:43:33 localhost kernel: Lustre: 2028:0:(client.c:2114:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1557333772/real 0] req@ffff880bd9296f00 x1632920191594624/t0(0) o8->lustrefc-OST0017-osc-ffff8817ef372000@172.17.1.106@o2ib:28/4 lens 520/544 e 0 to 1 dl 1557333813 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
      May  8 11:43:33 localhost kernel: Lustre: 2028:0:(client.c:2114:ptlrpc_expire_one_request()) Skipped 65 previous similar messages
      May  8 11:45:26 localhost kernel: LNet: 1994:0:(o2iblnd_cb.c:3192:kiblnd_check_conns()) Timed out tx for 172.17.1.106@o2ib: 3 seconds
      May  8 11:45:26 localhost kernel: LNet: 1994:0:(o2iblnd_cb.c:3192:kiblnd_check_conns()) Skipped 39 previous similar messages
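      (A hedged diagnostic aside, not part of the original report: lctl ping can confirm at the LNet level whether the NID the client keeps retrying and the intended failover NID are reachable. The NIDs below are taken from the log lines above.)

      client# lctl ping 172.17.1.106@o2ib
      client# lctl ping 172.17.1.103@o2ib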

      Attachments

        Activity

          [LU-12274] Clients aren't connecting to OST defined failover.node

          sebastien Sebastien Buisson added a comment

          Good to hear!

          If you need to change a primary NID, I would advise following the dedicated instructions in the Lustre Operations Manual:
          http://doc.lustre.org/lustre_manual.xhtml#dbdoclet.changingservernid
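          A minimal sketch of that procedure, assuming the whole file system can be stopped; the new primary NIDs shown (172.17.1.104@o2ib,172.16.1.104@tcp1) are hypothetical, and the manual section above remains the authoritative reference:

            # Unmount clients and all MDT/OST targets, keep the MGS mounted, then:
            mgs# lctl replace_nids lustrefc-OST0017 172.17.1.104@o2ib,172.16.1.104@tcp1
            # Remount the targets afterwards. The manual also describes an
            # alternative procedure based on tunefs.lustre --writeconf.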

          rs1 Roger Sersted added a comment (edited)

          Great catch on that.  I went through all of my OSTs, remounted them, and checked the failover setting.  I then remounted on the HA partner and the filesystem is working.  Quick question: how would I change the primary NID of an OST?  Would I specify the "servicenode" option?
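          (For illustration only, with a hypothetical second OSS NID of 172.17.1.104: the --servicenode option declares every listed NID as equally able to serve the target, rather than the primary/failover split implied by failover.node. Whether it is the right approach here is addressed in the comment above.)

            oss# tunefs.lustre --erase-param failover.node \
                   --servicenode=172.17.1.103@o2ib,172.16.1.103@tcp1 \
                   --servicenode=172.17.1.104@o2ib,172.16.1.104@tcp1 \
                   /dev/mapper/mpathg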

          sebastien Sebastien Buisson added a comment (edited)

          I can see in the llog_reader output that target lustrefc-OST0017, for instance, is still registered with NIDs 172.17.1.105 and 172.17.1.106. This explains the error messages on the clients.

          I just noticed this important message in the Lustre Operations Manual:

          If a --failnode option is added to a target to designate a failover server for the target, the
          target must be re-mounted on the primary node before the --failnode option takes effect
          

          So the problem you are facing could be due to the target being mounted directly on the failover node after tunefs.lustre, for instance on 172.17.1.103 for target lustrefc-OST0017. Does that make sense?

          Targets should be mounted on the primary node right after a tunefs.lustre that changes the failnodes, and only after that failed over to a secondary node.
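          A minimal sketch of that ordering, with hypothetical hostnames (oss-primary, oss-failover) and mount point (/mnt/ost0017):

            # 1. Change the failover NIDs while the target is stopped
            oss-primary# tunefs.lustre --erase-param failover.node \
                           --param failover.node=172.17.1.103@o2ib,172.16.1.103@tcp1 \
                           /dev/mapper/mpathg
            # 2. Mount on the primary node first, so the new failnode takes effect
            oss-primary# mount -t lustre /dev/mapper/mpathg /mnt/ost0017
            # 3. Only then, if needed, fail over to the secondary node
            oss-primary# umount /mnt/ost0017
            oss-failover# mount -t lustre /dev/mapper/mpathg /mnt/ost0017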


          rs1 Roger Sersted added a comment

          I'm in the process of updating the Lustre servers.  I unmounted the OSTs on one set of servers and mounted them onto their HA partners.  I have done this in the past with one node and it worked fine.  I have 6 OSSes configured in HA pairs.  I am not running any HA software; if a server fails, I manually unmount and then mount on the HA partner.


          sebastien Sebastien Buisson added a comment

          As the name suggests, the failover.node parameter specifies failover NIDs for a target. It does not reflect the primary NID of a target.

          So now that you mention that your cluster is down, I am wondering if your targets have been moved so that their primary NID is now different. If the targets did not move and they all run on their primary node, then this problem with the failover node change should not lead to any downtime.
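          (A hedged way to check which NIDs a client currently knows for a given target is the per-OSC import parameter; the exact output format varies by release:)

            client# lctl get_param osc.lustrefc-OST0017-osc-*.import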

          rs1 Roger Sersted added a comment (edited)

          I have attached the requested output. I should add, my cluster is down due to this problem.


          sebastien Sebastien Buisson added a comment

          I think we will need to have a look at the Lustre logs on the MGS.
          Could you please run the following commands on the MGS and attach the resulting lustrefc-client.txt file (the llog_reader output) to this ticket?

          Assuming your Lustre file system name is lustrefc, that would be:

          mgs# debugfs -c -R 'dump CONFIGS/lustrefc-client /tmp/lustrefc-client' <mgt device>
          mgs# llog_reader /tmp/lustrefc-client > /tmp/lustrefc-client.txt
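          (Once the dump exists, a rough way to spot which NIDs the MGS has recorded for this target is to search the text output; the exact record wording depends on the Lustre version:)

          mgs# grep -E 'add_uuid|add_conn|OST0017' /tmp/lustrefc-client.txt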
          

          Thanks.


          rs1 Roger Sersted added a comment

          I unmounted the OSTs in question.  OSTs not being modified remained mounted.  The MDT and MGT were both mounted.


          sebastien Sebastien Buisson added a comment

          Hi,

          Did you run the tunefs.lustre commands while the targets were stopped (i.e. unmounted)?

          pjones Peter Jones added a comment

          Sebastien

          Could you please advise here?

          Thanks

          Peter


          People

            Assignee: sebastien Sebastien Buisson
            Reporter: rs1 Roger Sersted
            Votes: 0
            Watchers: 4
