LU-5420: Failure on test suite sanity test_17m: mount MDS failed, Input/output error

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Fix Version/s: Lustre 2.8.0
    • Affects Version/s: Lustre 2.6.0, Lustre 2.7.0
    • Environment: client and server: lustre-b2_6-rc2 RHEL6 ldiskfs DNE mode
    • Severity: 3
    • 15076

    Description

      This issue was created by maloo for sarah <sarah@whamcloud.com>

      This issue relates to the following test suite run: https://testing.hpdd.intel.com/test_sets/16302020-14ed-11e4-bb6a-5254006e85c2.

      The sub-test test_17m failed with the following error:

      test failed to respond and timed out

      Hit this bug in many tests; the environment is configured as 1 MDS with 2 MDTs. The error was not hit when the configuration was 2 MDSs with 2 MDTs.
      Client console:

      CMD: onyx-46vm7 mkdir -p /mnt/mds1
      CMD: onyx-46vm7 test -b /dev/lvm-Role_MDS/P1
      Starting mds1:   /dev/lvm-Role_MDS/P1 /mnt/mds1
      CMD: onyx-46vm7 mkdir -p /mnt/mds1; mount -t lustre /dev/lvm-Role_MDS/P1 /mnt/mds1
      onyx-46vm7: mount.lustre: mount /dev/mapper/lvm--Role_MDS-P1 at /mnt/mds1 failed: Input/output error
      onyx-46vm7: Is the MGS running?
      Start of /dev/lvm-Role_MDS/P1 on mds1 failed 5
      

Activity

Gerrit Updater added a comment -
Oleg Drokin (oleg.drokin@intel.com) uploaded a new patch: http://review.whamcloud.com/13838
Subject: LU-5420 revert part of LU-4913
Project: fs/lustre-release
Branch: b2_7
Current Patch Set: 1
Commit: 77856caa2468dd69cfa5796bceb22c32aacf402f

Gerrit Updater added a comment -
Oleg Drokin (oleg.drokin@intel.com) uploaded a new patch: http://review.whamcloud.com/13832
Subject: LU-5420 revert part of LU-4913
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 260e150f98f07fa68fb124348ca9540e77fed100

Jodi Levi (Inactive) added a comment - http://review.whamcloud.com/#/c/12515/

Gerrit Updater added a comment -
Alexey Lyashkov (alexey.lyashkov@seagate.com) uploaded a new patch: http://review.whamcloud.com/13693
Subject: LU-5420 mgc: fix reconnect
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: ccfca18ad2ae9acb84dbfc4c0b2217bd10a0589d

James A Simmons added a comment -
As a note, I don't see this in my regular RHEL testing, but I can constantly reproduce this problem with my 3.12 kernel setup. This is with the MGS and MDS each on separate nodes.

Di Wang added a comment -

Hmm, I think there are two problems here:
1. It is not just the case where the MGS and MDT do not share the same node: if several targets share the same MGC you can hit a similar problem, because after http://review.whamcloud.com/#/c/9967 landed we cannot be sure the import is FULL before the MGC enqueues the config lock and retrieves the logs, unless the MGC is new.
2. How can we make sure whether the local config log is stale or not? I think that is why we saw LU-5658, where the local config is stale.
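
To illustrate point 1, here is a minimal standalone C sketch of that ordering problem (not Lustre code; the type names, states, and simplified flow are assumptions for illustration): a target that reuses an existing MGC may try to fetch its config log before the MGC import has reached FULL, while a freshly created MGC connects first.

    /* Standalone illustration of a shared MGC being used before its
     * import reaches FULL. Not Lustre code; all names are made up. */
    #include <stdio.h>
    #include <errno.h>
    #include <stdbool.h>

    enum imp_state { IMP_NEW, IMP_CONNECTING, IMP_FULL };

    struct mgc {
            enum imp_state imp_state;
            bool is_new;            /* freshly created, not yet shared */
    };

    /* Caricature of "enqueue the config lock and retrieve the log":
     * it only succeeds once the import is FULL. */
    static int mgc_fetch_config(struct mgc *mgc)
    {
            if (mgc->imp_state != IMP_FULL)
                    return -EIO;
            return 0;
    }

    static int target_start(struct mgc *mgc, const char *name)
    {
            int rc;

            if (mgc->is_new)
                    mgc->imp_state = IMP_FULL;  /* a new MGC connects before use */

            rc = mgc_fetch_config(mgc);
            printf("%s: config fetch rc = %d\n", name, rc);
            return rc;
    }

    int main(void)
    {
            /* The first target gets a brand new MGC; the second one reuses
             * it while its import is still connecting and fails. */
            struct mgc fresh  = { .imp_state = IMP_NEW,        .is_new = true  };
            struct mgc shared = { .imp_state = IMP_CONNECTING, .is_new = false };

            target_start(&fresh,  "first target (new MGC)");
            target_start(&shared, "second target (reused MGC)");
            return 0;
    }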

Sergey Cheremencev added a comment -

At Seagate this bug occurs only when the MDT and MGS are on the same node, in the case where the MDT starts earlier than the MGS.

When the MDT is on a separate node it uses the LOCAL configuration if it cannot retrieve one from the MGS. But when the MDT and MGS are on the same node the MDT cannot use the LOCAL configuration:

    /* Copy the setup log locally if we can. Don't mess around if we're
     * running an MGS though (logs are already local). */
    if (lctxt && lsi && IS_SERVER(lsi) && !IS_MGS(lsi) &&
        cli->cl_mgc_configs_dir != NULL &&
        lu2dt_dev(cli->cl_mgc_configs_dir->do_lu.lo_dev) ==
        lsi->lsi_dt_dev) {
            ....
    } else {
            if (local_only) /* no local log at client side */
                    GOTO(out_pop, rc = -EIO);
    }
            
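The -EIO taken in the else branch above is the same Input/output error that mount.lustre reports in the description. A minimal standalone C sketch of that simplified decision (the struct, field names, and flow here are illustrative assumptions, not the actual Lustre function):

    /* Simplified, standalone model of the fallback quoted above.
     * All names are made up for illustration. */
    #include <stdio.h>
    #include <errno.h>
    #include <stdbool.h>

    struct target {
            bool is_server;         /* MDT/OST rather than a client */
            bool is_mgs;            /* MGS co-located on this node */
            bool has_local_copy;    /* a local copy of the config log exists */
    };

    /* Returns 0 when a usable config log is found, -EIO when only a local
     * log is allowed (local_only) but none can be used. */
    static int process_config_log(const struct target *t, bool local_only)
    {
            if (t->is_server && !t->is_mgs && t->has_local_copy)
                    return 0;       /* use the locally copied setup log */

            if (local_only)
                    return -EIO;    /* no local log to fall back on */

            return 0;               /* would be fetched live from the MGS */
    }

    int main(void)
    {
            /* MDT sharing the node with a not-yet-started MGS: no live MGS
             * connection (local_only) and no usable local copy => -EIO,
             * which mount.lustre prints as "Input/output error". */
            struct target mdt = { .is_server = true, .is_mgs = true,
                                  .has_local_copy = false };

            printf("mount rc = %d\n", process_config_log(&mdt, true));
            return 0;
    }
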
Jian Yu added a comment - While running replay-dual tests on the master branch with MDSCOUNT=4, the same failure occurred: https://testing.hpdd.intel.com/test_sets/33dfc794-6dba-11e4-9d65-5254006e85c2 https://testing.hpdd.intel.com/test_sets/5cb7b7f8-6dba-11e4-9d65-5254006e85c2

Gerrit Updater added a comment -
Sergey Cheremencev (sergey_cheremencev@xyratex.com) uploaded a new patch: http://review.whamcloud.com/12515
Subject: LU-5420 mgc: process config logs only in mgc_requeue_thread()
Project: fs/lustre-release
Branch: master
Current Patch Set: 3
Commit: 1c3148dd8645cfa94bf3c36cfbe41176334ad4c5

Sergey Cheremencev added a comment -

Hello,

We hit this problem at Xyratex and have another solution: http://review.whamcloud.com/#/c/12515/. Hope it could be helpful.
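
For context on the patch subject above ("process config logs only in mgc_requeue_thread()"), here is a rough standalone pthread sketch of that general idea (illustrative only, not the actual change): the mount path merely posts a request, and a single dedicated thread owns all config-log processing.

    /* Rough illustration of deferring config-log processing to one
     * dedicated thread. Not the actual patch; names are made up. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool log_requested;
    static bool stopping;

    /* The only place where config logs get processed. */
    static void *requeue_thread(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock);
            for (;;) {
                    while (!log_requested && !stopping)
                            pthread_cond_wait(&cond, &lock);
                    if (log_requested) {
                            log_requested = false;
                            pthread_mutex_unlock(&lock);
                            printf("requeue thread: processing config log\n");
                            pthread_mutex_lock(&lock);
                            continue;
                    }
                    break;  /* stopping and nothing left to do */
            }
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    /* Mount path: only posts a request, never parses the log itself. */
    static void request_config_log(void)
    {
            pthread_mutex_lock(&lock);
            log_requested = true;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            pthread_t tid;

            pthread_create(&tid, NULL, requeue_thread, NULL);
            request_config_log();

            pthread_mutex_lock(&lock);
            stopping = true;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);

            pthread_join(tid, NULL);
            return 0;
    }
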
Di Wang added a comment -
Just updated the patch.

People

  Assignee: Di Wang
  Reporter: Sarah Liu
  Votes: 0
  Watchers: 16
