Lustre / LU-3829

MDT mount fails if mkfs.lustre is run with multiple mgsnode arguments on MDSs where MGS is not running


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version/s: Lustre 2.5.0, Lustre 2.4.2
    • Affects Version/s: Lustre 2.4.0, Lustre 2.5.0
    • Labels: None
    • Severity: 2
    • Rank (Obsolete): 9893

    Description

      If multiple --mgsnode arguments are provided to mkfs.lustre when formatting an MDT, the subsequent mount of that MDT fails on any MDS where the MGS is not running.

      Reproduction Steps:
      Step 1) On MDS0, run the following script:
      mgs_dev='/dev/mapper/vg_v-mgs'
      mds0_dev='/dev/mapper/vg_v-mdt'

      mgs_pri_nid='10.10.11.210@tcp1'
      mgs_sec_nid='10.10.11.211@tcp1'

      mkfs.lustre --mgs --reformat $mgs_dev
      mkfs.lustre --mgsnode=$mgs_pri_nid --mgsnode=$mgs_sec_nid --failnode=$mgs_sec_nid --reformat --fsname=v --mdt --index=0 $mds0_dev

      mount -t lustre $mgs_dev /lustre/mgs/
      mount -t lustre $mds0_dev /lustre/v/mdt

      After this, the MGS and MDT0 are mounted on MDS0.
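      Before formatting the remaining MDTs, a quick sanity check can rule out plain LNet connectivity problems (an untested sketch; the NIDs are the ones assigned above):

      ```shell
      # Verify that both MGS NIDs are reachable over LNet from this node.
      # A failure here would indicate a network problem rather than this bug.
      for nid in 10.10.11.210@tcp1 10.10.11.211@tcp1; do
          lctl ping "$nid" || echo "WARNING: $nid not reachable"
      done
      ```

      In the reproduction below both NIDs ping successfully, which is what distinguishes this bug from an ordinary connectivity failure.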

      Step 2.1) On MDS1:
      mdt1_dev='/dev/mapper/vg_mdt1_v-mdt1'
      mdt2_dev='/dev/mapper/vg_mdt2_v-mdt2'

      mgs_pri_nid='10.10.11.210@tcp1'
      mgs_sec_nid='10.10.11.211@tcp1'

      mkfs.lustre --mgsnode=$mgs_pri_nid --mgsnode=$mgs_sec_nid --failnode=$mgs_pri_nid --reformat --fsname=v --mdt --index=1 $mdt1_dev

      mount -t lustre $mdt1_dev /lustre/v/mdt1 # Does not mount.

      The mount of MDT1 will fail with the following error:
      mount.lustre: mount /dev/mapper/vg_mdt1_v-mdt1 at /lustre/v/mdt1 failed: Input/output error
      Is the MGS running?

      These are the Lustre log messages from the attempted mount of MDT1:
      LDISKFS-fs (dm-20): mounted filesystem with ordered data mode. quota=on. Opts:
      LDISKFS-fs (dm-20): mounted filesystem with ordered data mode. quota=on. Opts:
      LDISKFS-fs (dm-20): mounted filesystem with ordered data mode. quota=on. Opts:
      Lustre: 7564:0:(client.c:1896:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1377197751/real 1377197751] req@ffff880027956c00 x1444089351391184/t0(0) o250->MGC10.10.11.210@tcp1@0@lo:26/25 lens 400/544 e 0 to 1 dl 1377197756 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
      LustreError: 8059:0:(client.c:1080:ptlrpc_import_delay_req()) @@@ send limit expired req@ffff880027956800 x1444089351391188/t0(0) o253->MGC10.10.11.210@tcp1@0@lo:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
      LustreError: 15f-b: v-MDT0001: cannot register this server with the MGS: rc = -5. Is the MGS running?
      LustreError: 8059:0:(obd_mount_server.c:1732:server_fill_super()) Unable to start targets: -5
      LustreError: 8059:0:(obd_mount_server.c:848:lustre_disconnect_lwp()) v-MDT0000-lwp-MDT0001: Can't end config log v-client.
      LustreError: 8059:0:(obd_mount_server.c:1426:server_put_super()) v-MDT0001: failed to disconnect lwp. (rc=-2)
      LustreError: 8059:0:(obd_mount_server.c:1456:server_put_super()) no obd v-MDT0001
      LustreError: 8059:0:(obd_mount_server.c:137:server_deregister_mount()) v-MDT0001 not registered
      Lustre: server umount v-MDT0001 complete
      LustreError: 8059:0:(obd_mount.c:1277:lustre_fill_super()) Unable to mount (-5)
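      To confirm that both MGS NIDs were in fact recorded on the target by mkfs.lustre, the on-disk parameters can be inspected without modifying them (a sketch using tunefs.lustre's read-only dry run; the device path is the one from Step 2.1):

      ```shell
      # Print the target's current configuration without changing anything.
      tunefs.lustre --dryrun /dev/mapper/vg_mdt1_v-mdt1
      # The "Parameters:" line is expected to list both NIDs, e.g.:
      #   mgsnode=10.10.11.210@tcp1 mgsnode=10.10.11.211@tcp1 failover.node=...
      ```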

      Step 2.2) On MDS1:
      mdt1_dev='/dev/mapper/vg_mdt1_v-mdt1'
      mdt2_dev='/dev/mapper/vg_mdt2_v-mdt2'

      mgs_pri_nid='10.10.11.210@tcp1'
      mgs_sec_nid='10.10.11.211@tcp1'

      mkfs.lustre --mgsnode=$mgs_pri_nid --failnode=$mgs_pri_nid --reformat --fsname=v --mdt --index=1 $mdt1_dev

      mount -t lustre $mdt1_dev /lustre/v/mdt1

      With this, MDT1 mounts successfully. The only difference is that the second "--mgsnode" argument is omitted from mkfs.lustre.
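      An untested sketch of completing this workaround for HA: once MDT1 has been formatted with a single --mgsnode and mounted successfully, the second MGS NID could be added afterwards with tunefs.lustre on the unmounted device (device path and NIDs as in Step 2.2):

      ```shell
      # Add the second MGS NID after the initial successful mount;
      # tunefs.lustre parameters are additive by default.
      umount /lustre/v/mdt1
      tunefs.lustre --mgsnode=10.10.11.211@tcp1 /dev/mapper/vg_mdt1_v-mdt1
      mount -t lustre /dev/mapper/vg_mdt1_v-mdt1 /lustre/v/mdt1
      ```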

      Step 3: On MDS1 again:
      mkfs.lustre --mgsnode=$mgs_pri_nid --mgsnode=$mgs_sec_nid --failnode=$mgs_pri_nid --reformat --fsname=v --mdt --index=2 $mdt2_dev
      mount -t lustre $mdt2_dev /lustre/v/mdt2

      Once MDT1 is mounted, using a second "--mgsnode" option works without any errors and the mount of MDT2 succeeds.

      Lustre Versions: reproducible on 2.4.0 and 2.4.91.

      Conclusion: Due to this bug, MDTs fail to mount on MDSs that are not running the MGS. With the workaround (omitting the second --mgsnode), HA will not be properly configured.
      Also note that this issue is not related to DNE: the same issue and "workaround" apply to an MDT of a different filesystem on MDS1 as well.

      Attachments

        Issue Links

          Activity

            People

              Assignee: bobijam Zhenyu Xu
              Reporter: kalpak Kalpak Shah (Inactive)
              Votes: 0
              Watchers: 17

              Dates

                Created:
                Updated:
                Resolved: