
Unable to umount during MDT fail back

Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.10.5
    • Severity: 3

    Description

      Environment: 2.10.5-RC1, b2_10-ib build #70

      During the soak test, I found that the failover MDT (MDT3) could not be unmounted on MDS2, which prevented the MDT from failing back to its original MDS (MDS3), so the whole recovery process got stuck.

      MDS2 console:

      LustreError: 0-0: Forced cleanup waiting for soaked-MDT0000-osp-MDT0003 namespace with 3 resources in use, (rc=-110)
       Lustre: 3899:0:(client.c:2114:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1534181819/real 1534181819]  req@ffffa06fd3332a00 x1608657214545248/t0(0) o38->soaked-MDT0003-osp-MDT0002@192.168.1.111@o2ib:24/4 lens 520/544 e 0 to 1 dl 1534181830 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1^M
      [58161.752944] Lustre: 3899:0:(client.c:2114:ptlrpc_expire_one_request()) Skipped 6 previous similar messages^M
      LustreError: 0-0: Forced cleanup waiting for soaked-MDT0000-osp-MDT0003 namespace with 3 resources in use, (rc=-110)
      LustreError: 0-0: Forced cleanup waiting for soaked-MDT0000-osp-MDT0003 namespace with 3 resources in use, (rc=-110)
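
      (For reference: the resource and lock counts behind that cleanup message can be read directly on MDS2 while the umount is blocked; a minimal sketch, using the namespace name from the console message above:)

      # On MDS2 while the umount hangs: how many resources/locks still pin the
      # soaked-MDT0000-osp-MDT0003 namespace (name taken from the console message)
      lctl get_param ldlm.namespaces.*MDT0000-osp-MDT0003*.resource_count
      lctl get_param ldlm.namespaces.*MDT0000-osp-MDT0003*.lock_count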
      

      This happened after 30+ hours of running (36 successful recoveries), and only MDS failover was being exercised during the test.

      On MDS0 I could not see a stack trace in the console log, but I found the following; is this a known issue?

      [52069.665828] LNet: 3948:0:(linux-debug.c:185:libcfs_call_trace()) can't show stack: kernel doesn't export show_task^M
      [52069.681120] LNet: 3948:0:(linux-debug.c:185:libcfs_call_trace()) Skipped 3 previous similar messages^M
      [52069.694717] LustreError: dumping log to /tmp/lustre-log.1534184159.5310^M
      

      Also, many multipath errors are seen on all MDS nodes during failover. When the system was first mounted, before the soak started, this kind of error caused the MDS2 mount to hang; after a reboot it seemed to go away. I am not sure whether it is a related hardware problem.

      [64147.395418] device-mapper: multipath: Reinstating path 8:128.^M
      [64147.403572] device-mapper: multipath: Failing path 8:128.^M
      [64149.405305] device-mapper: multipath: Reinstating path 8:96.^M
      [64149.413410] device-mapper: multipath: Failing path 8:96.^M
      

      I will run the soak again with both MDS and OSS failover/restart enabled and see whether this problem can be reproduced.
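
      (When rerunning, the state of the failback can also be watched directly on the MDS serving MDT0003; a minimal sketch using the standard recovery-status parameter, with the target name taken from this report:)

      # On the MDS that MDT0003 fails back to: shows whether recovery completes or stays stuck
      lctl get_param mdt.soaked-MDT0003.recovery_status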

          Activity

            [LU-11241] Unable to umount during MDT fail back

            adilger Andreas Dilger added a comment -

            On MDS0 I could not see a stack trace in the console log, but I found the following; is this a known issue?

            [52069.665828] LNet: 3948:0:(linux-debug.c:185:libcfs_call_trace()) can't show stack: kernel doesn't export show_task^M
            [52069.681120] LNet: 3948:0:(linux-debug.c:185:libcfs_call_trace()) Skipped 3 previous similar messages^M
            [52069.694717] LustreError: dumping log to /tmp/lustre-log.1534184159.5310^M
            

            There was a change in RHEL7.5 that broke the ability of Lustre to dump a stack. This is fixed via LU-11062, which should be landing on b2_10 very soon. It would be worthwhile to restart the soak testing once that patch has landed, so that we get more information in case of a hang. Otherwise, we don't know why the MDS thread was stuck, which was preventing the unmount.
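
            Until that patch lands, a generic interim option (just a sketch, not Lustre-specific) is to have the kernel dump all task stacks via sysrq, so the blocked umount thread shows up in the console log:

            # run on the hung MDS; output goes to the console / dmesg
            echo 1 > /proc/sys/kernel/sysrq    # enable sysrq if it is restricted
            echo t > /proc/sysrq-trigger       # dump the stack of every task, including the stuck umount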

            sarah Sarah Liu added a comment -

            After rebooting all the servers, when mounting the file system, the problematic MDS (soak-10) hung at mount for about 5 minutes before appearing to complete. Normally this takes seconds.

            soak-10:

            [   41.369390] NFS: Registering the id_resolver key type
            [   41.375705] Key type id_resolver registered
            [   41.381041] Key type id_legacy registered
            [   61.439610] IPv6: ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready
            [ 7664.773217] LNet: HW NUMA nodes: 2, HW CPU cores: 32, npartitions: 2
            [ 7664.784088] alg: No test for adler32 (adler32-zlib)
            [ 7665.685582] Lustre: Lustre: Build Version: 2.10.5_RC1_1_g574e63f
            [ 7665.990084] LNet: Added LNI 192.168.1.110@o2ib [8/256/0/180]
            [ 8661.888556] sd 0:0:0:44: [sdh] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
            [ 8661.897398] sd 0:0:0:44: [sdh] Sense Key : Illegal Request [current] 
            [ 8661.904672] sd 0:0:0:44: [sdh] <<vendor>>ASC=0x94 ASCQ=0x1 
            [ 8661.910955] sd 0:0:0:44: [sdh] CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00
            [ 8661.920451] blk_update_request: I/O error, dev sdh, sector 0
            [ 8661.928771] device-mapper: multipath: Failing path 8:112.
            [ 8666.770534] device-mapper: multipath: Reinstating path 8:112.
            [ 8666.777310] device-mapper: multipath: Failing path 8:112.
            [ 8671.777942] device-mapper: multipath: Reinstating path 8:112.
            [ 8671.784653] device-mapper: multipath: Failing path 8:112.
            [ 8676.785696] device-mapper: multipath: Reinstating path 8:112.
            [ 8676.792416] device-mapper: multipath: Failing path 8:112.
            [ 8681.794339] device-mapper: multipath: Reinstating path 8:112.
            [ 8681.801053] device-mapper: multipath: Failing path 8:112.
            [ 8686.802318] device-mapper: multipath: Reinstating path 8:112.
            [ 8686.809107] device-mapper: multipath: Failing path 8:112.
            [ 8691.809802] device-mapper: multipath: Reinstating path 8:112.
            [ 8691.816548] device-mapper: multipath: Failing path 8:112.
            [ 8696.817530] device-mapper: multipath: Reinstating path 8:112.
            [ 8696.824281] device-mapper: multipath: Failing path 8:112.
            [ 8701.826151] device-mapper: multipath: Reinstating path 8:112.
            [ 8701.832916] device-mapper: multipath: Failing path 8:112.
            [ 8706.834134] device-mapper: multipath: Reinstating path 8:112.
            ...
            [ 9039.375623] device-mapper: multipath: Reinstating path 8:112.
            [ 9039.383620] device-mapper: multipath: Failing path 8:112.
            [ 9044.386271] device-mapper: multipath: Reinstating path 8:112.
            [ 9044.394213] device-mapper: multipath: Failing path 8:112.
            [ 9049.394863] device-mapper: multipath: Reinstating path 8:112.
            [ 9049.402805] sd 0:0:0:44: rdac: array soak-netapp5660-1, ctlr 0, queueing MODE_SELECT command
            [ 9050.032813] sd 0:0:0:44: rdac: array soak-netapp5660-1, ctlr 0, MODE_SELECT completed
            
            [root@soak-10 ~]# multipath -ll
            Aug 13 21:51:49 | ignoring extra data starting with 'udev_dir' on line 2 of /etc/multipath.conf
            Aug 13 21:51:49 | /etc/multipath.conf line 6, invalid keyword: getuid_callout
            Aug 13 21:51:49 | /etc/multipath.conf line 29, invalid keyword: getuid_callout
            ST500NM0011_Z1M0X1GG dm-5 ATA     ,ST500NM0011     
            size=466G features='0' hwhandler='0' wp=rw
            `-+- policy='round-robin 0' prio=1 status=active
              `- 7:0:1:0  sdb 8:16  active ready running
            360080e50001ff0d00000017c52012921 dm-4 LSI     ,INF-01-00       
            size=15T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 rdac' wp=rw
            `-+- policy='round-robin 0' prio=6 status=active
              `- 0:0:0:43 sdg 8:96  active ready running
            360080e50001fedb80000015752012949 dm-1 LSI     ,INF-01-00       
            size=15T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 rdac' wp=rw
            `-+- policy='round-robin 0' prio=1 status=active
              `- 0:0:0:44 sdh 8:112 failed ghost running
            360080e50001ff0d00000018052012939 dm-6 LSI     ,INF-01-00       
            size=15T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 rdac' wp=rw
            `-+- policy='round-robin 0' prio=6 status=active
              `- 0:0:0:45 sdi 8:128 active ready running
            360080e50001fedb80000015952012962 dm-2 LSI     ,INF-01-00       
            size=15T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 rdac' wp=rw
            `-+- policy='round-robin 0' prio=1 status=active
              `- 0:0:0:46 sdd 8:48  active ghost running
            360080e50001fedb80000015552012932 dm-0 LSI     ,INF-01-00       
            size=15T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 rdac' wp=rw
            `-+- policy='round-robin 0' prio=1 status=active
              `- 0:0:0:42 sdf 8:80  active ghost running
            360080e50001ff0d00000017852012908 dm-3 LSI     ,INF-01-00       
            size=15T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 rdac' wp=rw
            `-+- policy='round-robin 0' prio=6 status=active
              `- 0:0:0:41 sde 8:64  active ready running
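
            (Side note on the multipath.conf warnings above: udev_dir and getuid_callout were removed from device-mapper-multipath in RHEL7 (getuid_callout was replaced by uid_attribute), so those messages suggest the config file was carried over from an older release. A sketch of cleaning it up, assuming the warned-about keywords are the only stale entries:)

            # sketch: drop the multipath.conf keywords that RHEL7 no longer understands
            # (udev_dir was removed; getuid_callout was replaced by uid_attribute, default "ID_SERIAL")
            sed -i.bak -e '/udev_dir/d' -e '/getuid_callout/d' /etc/multipath.conf
            multipath -r    # reload the maps after the config change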
            

            People

              Assignee: WC Triage (wc-triage)
              Reporter: Sarah Liu (sarah)
              Votes: 0
              Watchers: 3
