Details


    Description

      An error occurred during soak testing of build '20160302' (b2_8 RC4); see also https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160302. DNE is enabled. MDTs had been formatted using ldiskfs, OSTs using zfs. MDS nodes are configured in an active-active HA failover configuration. (For the test setup configuration see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-Configuration.)

      The following effects can be observed:

      • After a restart or failover it takes 0.5 - 3 hours for recovery to complete on all MDSes (this seems to be correlated with the uptime of the MDS)
      • Sometimes only 1 MDT finishes recovery
      • Often the recovery never completes
      • This is true for all MDSes
      • A high rate of clients are evicted, leading to a large number of job crashes (up to ~25%)
      • Interestingly, the recovery of secondary MDTs takes only a couple of minutes and always completes on the failover partner node

      Here are the failover and restart events for MDS node lola-11. The same 'structure' can be found for the other nodes:
      Recovery for secondary MDTs on lola-11

      mds_failover     : 2016-03-03 10:24:12,345 - 2016-03-03 10:32:12,647    lola-10
      Mar  3 10:31:58 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 2:14, of 16 clients 0 recovered and 16 were evicted.
      Mar  3 10:32:06 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:20, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-03 18:11:42,958 - 2016-03-03 18:18:17,112    lola-10
      Mar  3 18:18:03 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:03, of 16 clients 0 recovered and 16 were evicted.
      Mar  3 18:18:10 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:08, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-03 22:04:51,554 - 2016-03-03 22:12:03,652    lola-10
      Mar  3 22:11:50 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:36, of 16 clients 0 recovered and 16 were evicted.
      Mar  3 22:11:57 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:22, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-04 00:11:27,161 - 2016-03-04 00:18:36,686    lola-10
      Mar  4 00:18:23 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:23, of 5 clients 0 recovered and 5 were evicted.
      Mar  4 00:18:30 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:23, of 6 clients 0 recovered and 6 were evicted.
      
      mds_failover     : 2016-03-04 01:51:11,775 - 2016-03-04 01:58:40,927    lola-10
      Mar  4 01:58:27 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:41, of 16 clients 0 recovered and 16 were evicted.
      Mar  4 01:58:34 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:23, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-04 02:54:18,928 - 2016-03-04 03:01:00,519    lola-10
      Mar  4 03:00:47 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:05, of 16 clients 0 recovered and 16 were evicted.
      Mar  4 03:00:54 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:09, of 16 clients 0 recovered and 16 were evicted.
      

      ------------------
      Recovery for primary MDTs on lola-11

      mds_failover     : 2016-03-03 09:36:44,457 - 2016-03-03 09:43:43,316    lola-11
      Mar  3 09:50:42 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 6:59, of 16 clients 16 recovered and 0 were evicted.
      Mar  3 09:51:14 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 7:31, of 16 clients 8 recovered and 8 were evicted.
      
      mds_failover     : 2016-03-03 13:06:05,210 - 2016-03-03 13:13:33,003    lola-11
      Mar  3 14:13:46 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 40:56, of 16 clients 16 recovered and 0 were evicted.
      Mar  3 14:13:50 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 41:50, of 16 clients 16 recovered and 0 were evicted.
      
      mds_restart      : 2016-03-03 13:26:05,005 - 2016-03-03 13:32:48,359    lola-11
      Mar  3 14:13:46 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 40:56, of 16 clients 16 recovered and 0 were evicted.
      Mar  3 14:13:50 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 41:50, of 16 clients 16 recovered and 0 were evicted.
      
      mds_restart      : 2016-03-03 20:14:23,309 - 2016-03-03 20:24:56,044    lola-11
      Mar  3 20:37:51 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 12:50, of 16 clients 16 recovered and 0 were evicted.
       ---> MDT0007 never recovered
      
      mds_failover     : 2016-03-03 22:15:27,654 - 2016-03-03 22:23:34,982    lola-11
      Mar  4 01:03:03 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 159:29, of 16 clients 14 recovered and 2 were evicted.
      Mar  4 01:03:05 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 159:30, of 16 clients 14 recovered and 2 were evicted.
      
      mds_failover     : 2016-03-04 05:10:37,638 - 2016-03-04 05:17:48,193    lola-11
       ---> MDT0006 never recovered
       ---> MDT0007 never recovered
      
      mds_failover     : 2016-03-04 05:35:12,194 - 2016-03-04 05:41:56,320    lola-11
       ---> MDT0006 never recovered
       ---> MDT0007 never recovered
      
      mds_restart      : 2016-03-04 06:53:30,098 - 2016-03-04 07:03:06,783    lola-11
       ---> MDT0006 never recovered
       ---> MDT0007 never recovered
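
      The recovery events above were collected from the MDS syslogs. As an illustrative sketch (not part of the soak framework), the "Recovery over" kernel lines can be parsed into structured records, e.g. to build a summary like the attached recovery-times-20160317:

```python
import re

# Parse "Recovery over after M:SS, of N clients R recovered and E were
# evicted" kernel log lines into structured records. PATTERN and
# parse_recovery() are hypothetical helpers for illustration.
PATTERN = re.compile(
    r"(?P<target>soaked-MDT[0-9a-f]+): Recovery over after "
    r"(?P<min>\d+):(?P<sec>\d+), of (?P<clients>\d+) clients "
    r"(?P<recovered>\d+) recovered and (?P<evicted>\d+) (?:was|were) evicted"
)

def parse_recovery(line):
    m = PATTERN.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "target": d["target"],
        "duration_s": int(d["min"]) * 60 + int(d["sec"]),
        "clients": int(d["clients"]),
        "recovered": int(d["recovered"]),
        "evicted": int(d["evicted"]),
    }

line = ("Mar  3 10:31:58 lola-11 kernel: Lustre: soaked-MDT0005: "
        "Recovery over after 2:14, of 16 clients 0 recovered and 16 were evicted.")
print(parse_recovery(line))
# → {'target': 'soaked-MDT0005', 'duration_s': 134, 'clients': 16,
#    'recovered': 0, 'evicted': 16}
```

Feeding all the lines above through such a helper makes the pattern in this report easy to see: secondary-MDT recoveries are short with every client evicted, while primary-MDT recoveries are long or never finish.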
      

      Attached are messages, console, and debug log files (with mask '1') of all MDS nodes (lola[8-11]).

      The same situation once ended with the start of the oom-killer (see LU-7836).

      Attachments

        1. console-lola-10.log.bz2
          506 kB
        2. console-lola-10-log-20160407.bz2
          53 kB
        3. console-lola-11.log.bz2
          563 kB
        4. console-lola-11-log-20160407.bz2
          84 kB
        5. console-lola-8.log.bz2
          723 kB
        6. console-lola-8-log-20160407.bz2
          139 kB
        7. console-lola-9.log.bz2
          650 kB
        8. lola-10_lustre-log.20160414-0312.bz2
          3.18 MB
        9. lola-10-lustre-log-20160304-0751.bz2
          2.14 MB
        10. lola-11_lustre-log.20160414-0312.bz2
          2.22 MB
        11. lola-11-lustre-log-20160304-0751.bz2
          1.44 MB
        12. lola-8_lustre-log.20160414-0312.bz2
          3.66 MB
        13. lola-8-lustre-log-20160304-0751.bz2
          2.58 MB
        14. lola-9-lustre-log-20160304-0751.bz2
          2.13 MB
        15. lustre-log-20160318-0240.bz2
          1.02 MB
        16. messages-lola-10.log.bz2
          370 kB
        17. messages-lola-10.log-20160414.bz2
          84 kB
        18. messages-lola-11.log.bz2
          291 kB
        19. messages-lola-11.log-20160414.bz2
          90 kB
        20. messages-lola-8.log.bz2
          324 kB
        21. messages-lola-8.log-20160414.bz2
          111 kB
        22. messages-lola-9.log.bz2
          367 kB
        23. recovery-times-20160317
          18 kB

        Issue Links

          Activity

            [LU-7848] Recovery process on MDS stalled

            Soak testing has continued with the b2_8 RC5 build on a reformatted Lustre FS.
            Now there's only 1 MDT per MDS and 5 OSTs per OSS (unchanged). The MDT had
            been formatted with ldiskfs and the OSTs using zfs.

            The recovery process no longer stalls, neither for MDS restarts nor for failovers. All
            recovery times are now below 2 minutes. See the attached file recovery-times-20160317.

            heckes Frank Heckes (Inactive) added a comment -
            di.wang Di Wang added a comment -

            Ah, recovery process on the remote target is stuck in lu_object_find().

            mdt_out03_021 D 000000000000000e     0 121458      2 0x00000080
             ffff88081be7bb50 0000000000000046 0000000000000000 0000000000000000
             ffff880378ae49c0 ffff88081ec00118 0000cf356d708513 ffff88081ec00118
             ffff88081be7bb50 000000010d91895e ffff8808161145f8 ffff88081be7bfd8
            Call Trace:
             [<ffffffffa088b71d>] lu_object_find_at+0x3d/0xe0 [obdclass]
             [<ffffffffa0acd342>] ? __req_capsule_get+0x162/0x6e0 [ptlrpc]
             [<ffffffff81064c00>] ? default_wake_function+0x0/0x20
             [<ffffffffa0aa2dd0>] ? lustre_swab_object_update_reply+0x0/0xc0 [ptlrpc]
             [<ffffffffa088c52c>] dt_locate_at+0x1c/0xa0 [obdclass]
             [<ffffffffa0b13970>] out_handle+0x1030/0x1880 [ptlrpc]
             [<ffffffff8105872d>] ? check_preempt_curr+0x6d/0x90
             [<ffffffff8152b83e>] ? mutex_lock+0x1e/0x50
             [<ffffffffa0b031ca>] ? req_can_reconstruct+0x6a/0x120 [ptlrpc]
             [<ffffffffa0b0ac2c>] tgt_request_handle+0x8ec/0x1440 [ptlrpc]
             [<ffffffffa0ab7c61>] ptlrpc_main+0xd21/0x1800 [ptlrpc]
             [<ffffffff8152a39e>] ? thread_return+0x4e/0x7d0
             [<ffffffffa0ab6f40>] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
             [<ffffffff8109e78e>] kthread+0x9e/0xc0
             [<ffffffff8100c28a>] child_rip+0xa/0x20
             [<ffffffff8109e6f0>] ? kthread+0x0/0xc0
             [<ffffffff8100c280>] ? child_rip+0x0/0x20
            mdt_out03_022 D 000000000000001c     0 121824      2 0x00000080
             ffff88081c4afb50 0000000000000046 0000000000000000 0000000000000000
             ffff880378ae49c0 ffff8807e4900118 0000d18fc8bbd702 ffff8807e4900118
             ffff88081c4afb50 000000010db905ef ffff8807e0153ad8 ffff88081c4affd8
            Call Trace:
             [<ffffffffa088b71d>] lu_object_find_at+0x3d/0xe0 [obdclass]
             [<ffffffffa0acd342>] ? __req_capsule_get+0x162/0x6e0 [ptlrpc]
             [<ffffffff81064c00>] ? default_wake_function+0x0/0x20
             [<ffffffffa0aa2dd0>] ? lustre_swab_object_update_reply+0x0/0xc0 [ptlrpc]
             [<ffffffffa088c52c>] dt_locate_at+0x1c/0xa0 [obdclass]
             [<ffffffffa0b13970>] out_handle+0x1030/0x1880 [ptlrpc]
             [<ffffffff8105872d>] ? check_preempt_curr+0x6d/0x90
             [<ffffffff8152b83e>] ? mutex_lock+0x1e/0x50
             [<ffffffffa0b031ca>] ? req_can_reconstruct+0x6a/0x120 [ptlrpc]
             [<ffffffffa0b0ac2c>] tgt_request_handle+0x8ec/0x1440 [ptlrpc]
             [<ffffffffa0ab7c61>] ptlrpc_main+0xd21/0x1800 [ptlrpc]
             [<ffffffff8152a39e>] ? thread_return+0x4e/0x7d0
             [<ffffffffa0ab6f40>] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
             [<ffffffff8109e78e>] kthread+0x9e/0xc0
             [<ffffffff8100c28a>] child_rip+0xa/0x20
             [<ffffffff8109e6f0>] ? kthread+0x0/0xc0
             [<ffffffff8100c280>] ? child_rip+0x0/0x20
            

            The problem also occurs during soak testing of b2_8 RC5 (see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160309).

            • The MDS node lola-11 was restarted at a random time, 2016-03-14 06:09:10,71
            • Recovery of MDT0007 didn't complete even after more than 2 hours
            • Uploaded messages, console and debug log files: messages-lola-11-20160314, console-lola-11-20160314, lola-16-lustre-log-20160314-0736
            heckes Frank Heckes (Inactive) added a comment -

            All files (messages, debug logs) have been copied to lhn.lola.hpdd.intel.com:/scratch/crashdumps/lu-7848. The test session was executed between Mar 9th, 02:52 – 06:19.

            heckes Frank Heckes (Inactive) added a comment -
            heckes Frank Heckes (Inactive) added a comment - - edited

            The effect still remains:

            mds_restart      : 2016-03-09 03:44:21,690 - 2016-03-09 03:55:17,693    lola-8
            lola-8.log:Mar  9 03:56:08 lola-8 kernel: Lustre: soaked-MDT0000: Recovery over after 0:49, of 16 clients 16 recovered and 0 were evicted.
            lola-8.log:Mar  9 03:56:09 lola-8 kernel: Lustre: soaked-MDT0001: Recovery over after 2:29, of 16 clients 16 recovered and 0 were evicted.
            
            mds_restart      : 2016-03-09 04:54:33,695 - 2016-03-09 05:02:29,259    lola-10
            lola-10.log:Mar  9 05:16:28 lola-10 kernel: Lustre: soaked-MDT0007: Recovery over after 1:42, of 13 clients 0 recovered and 13 were evicted.
            lola-10.log:Mar  9 05:16:44 lola-10 kernel: Lustre: soaked-MDT0006: Recovery over after 0:32, of 7 clients 0 recovered and 7 were evicted.
            
            mds_failover     : 2016-03-09 05:08:46,261 - 2016-03-09 05:16:50,083    lola-11
            lola-11.log:Mar  9 05:47:15 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 30:24, of 10 clients 10 recovered and 0 were evicted.
            lola-11.log:Mar  9 05:47:23 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 30:33, of 13 clients 12 recovered and 1 was evicted.
            

            This is for build https://build.hpdd.intel.com/job/lustre-reviews/37662/
            I'll add message and debug log files asap.

            • The MDS nodes weren't updated with the correct builds. I don't know the reason as I didn't execute the update myself. Anyway, I'm very sorry for that.
            • All nodes (MDSes) have been updated and soak session is restarted using build #37662
            heckes Frank Heckes (Inactive) added a comment -
            di.wang Di Wang added a comment -

            [3/8/16, 8:05:50 PM] wangdi: it seems the old code is running
            [3/8/16, 8:05:53 PM] wangdi: 00010000:00000001:13.0:1457452176.471578:0:4691:0:(ldlm_lib.c:756:target_handle_reconnect()) Process leaving (rc=18446744073709551502 : -114 : ffffffffffffff8e)
            00010000:02000400:13.0:1457452176.471579:0:4691:0:(ldlm_lib.c:1175:target_handle_connect()) soaked-MDT0003: Client soaked-MDT0006-mdtlov_UUID (at 192.168.1.111@o2ib10) refused connection, still busy with 474 references
            [3/8/16, 8:07:39 PM] wangdi: The debug line number seems not matching the patch http://review.whamcloud.com/#/c/18813/2/
            [3/8/16, 8:07:46 PM] wangdi: it seems it matches b2_8
            [3/8/16, 8:08:43 PM] wangdi: Cliff, Frank: could you please check what is running on lola-8. I hope to use this build https://build.hpdd.intel.com/job/lustre-reviews/37662/
            [3/8/16, 8:08:44 PM] wangdi: thanks


            The problem persists after installing the build associated with change #18813.
            There are still long recovery times in which most or all clients recover, and short ones in which no client recovers:

            lola-10.log:Mar  8 02:28:45 lola-10 kernel: Lustre: soaked-MDT0005: Recovery over after 183:52, of 16 clients 16 recovered and 0 were evicted.
            lola-10.log:Mar  8 02:28:46 lola-10 kernel: Lustre: soaked-MDT0004: Recovery over after 183:04, of 16 clients 16 recovered and 0 were evicted.
            lola-10.log:Mar  8 05:01:17 lola-10 kernel: Lustre: soaked-MDT0004: Recovery over after 14:17, of 16 clients 13 recovered and 3 were evicted.
            lola-11.log:Mar  8 02:33:20 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 112:56, of 16 clients 16 recovered and 0 were evicted.
            lola-11.log:Mar  8 02:33:21 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 113:44, of 16 clients 16 recovered and 0 were evicted.
            lola-8.log:Mar  8 02:27:59 lola-8 kernel: Lustre: soaked-MDT0003: Recovery over after 0:45, of 16 clients 0 recovered and 16 were evicted.
            lola-8.log:Mar  8 02:28:10 lola-8 kernel: Lustre: soaked-MDT0002: Recovery over after 0:26, of 16 clients 0 recovered and 16 were evicted.
            lola-8.log:Mar  8 02:37:39 lola-8 kernel: Lustre: soaked-MDT0000: Recovery over after 63:43, of 16 clients 16 recovered and 0 were evicted.
            lola-9.log:Mar  8 01:32:31 lola-9 kernel: Lustre: soaked-MDT0001: Recovery over after 1:03, of 16 clients 0 recovered and 16 were evicted.
            lola-9.log:Mar  8 01:33:23 lola-9 kernel: Lustre: soaked-MDT0000: Recovery over after 1:10, of 16 clients 0 recovered and 16 were evicted.
            lola-9.log:Mar  8 02:41:56 lola-9 kernel: Lustre: soaked-MDT0003: Recovery over after 13:41, of 16 clients 16 recovered and 0 were evicted.
            lola-9.log:Mar  8 02:41:58 lola-9 kernel: Lustre: soaked-MDT0002: Recovery over after 13:43, of 16 clients 16 recovered and 0 were evicted.
            
            heckes Frank Heckes (Inactive) added a comment -

            wangdi (di.wang@intel.com) uploaded a new patch: http://review.whamcloud.com/18813
            Subject: LU-7848 target: Do not fail MDT-MDT connection
            Project: fs/lustre-release
            Branch: b2_8
            Current Patch Set: 1
            Commit: 8b42a380c37ac2ae4a24431f42984acf88b2a546

            gerrit Gerrit Updater added a comment -

            wangdi (di.wang@intel.com) uploaded a new patch: http://review.whamcloud.com/18800
            Subject: LU-7848 target: Do not fail MDT-MDT connection
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 190f34e30d3aa1002fd4cbe71e48732a296c65b7

            gerrit Gerrit Updater added a comment -
            di.wang Di Wang added a comment -

            Ah, it seems that after we disabled MDT-MDT eviction with the recent patch http://review.whamcloud.com/#/c/18676/, the new connection from the restarted MDT will always be denied because of this. So retrieving the update log will be blocked. I will cook a patch.

            00010000:02000400:27.0:1457125392.284909:0:6144:0:(ldlm_lib.c:751:target_handle_reconnect()) soaked-MDT0004: already connected client soaked-MDT0003-mdtlov_UUID (at 192.168.1.109@o2ib10) with handle 0xb3e905baff54ad2d. Rejecting client with the same UUID trying to reconnect with handle 0x91819165c59714eb
            00010000:00000001:27.0:1457125392.284913:0:6144:0:(ldlm_lib.c:756:target_handle_reconnect()) Process leaving (rc=18446744073709551502 : -114 : ffffffffffffff8e)
            00010000:00000001:27.0:1457125392.284916:0:6144:0:(ldlm_lib.c:1190:target_handle_connect()) Process leaving via out (rc=18446744073709551502 : -114 : 0xffffffffffffff8e)
            00000020:00000040:27.0:1457125392.284920:0:6144:0:(genops.c:817:class_export_put()) PUTting export ffff8807f3ca9c00 : new refcount 5
            00000020:00000040:27.0:1457125392.284922:0:6144:0:(obd_config.c:740:class_decref()) Decref soaked-MDT0004 (ffff880834f44078) now 67
            00010000:00000001:27.0:1457125392.284924:0:6144:0:(ldlm_lib.c:1410:target_handle_connect()) Process leaving (rc=18446744073709551502 : -114 : ffffffffffffff8e)
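            The rc in these traces (and in the earlier chat excerpt) is one 64-bit value printed three ways: unsigned decimal, signed decimal, and hex. A quick check of the wraparound (assuming a signed 64-bit return code) confirms the unsigned value is -114, i.e. -EALREADY on Linux, consistent with the "already connected client ... Rejecting" message:

```python
# rc=18446744073709551502 : -114 : ffffffffffffff8e from the debug log.
# Assumption: the kernel keeps rc in a 64-bit register, so a negative
# errno wraps to 2**64 + rc when printed as unsigned.
rc_unsigned = 18446744073709551502
rc_signed = rc_unsigned - 2**64   # reinterpret as signed 64-bit
print(rc_signed)                  # → -114 (-EALREADY on Linux)
print(hex(rc_unsigned))           # → 0xffffffffffffff8e
```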
            

            People

              di.wang Di Wang
              heckes Frank Heckes (Inactive)
              Votes: 0
              Watchers: 7
