Details


    Description

      Error occurred during soak testing of build '20160302' (b2_8 RC4) (see also https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160302). DNE is enabled. MDTs had been formatted using ldiskfs, OSTs using zfs. MDS nodes are configured in an active-active HA failover configuration. (For the test set-up configuration see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-Configuration)

      The following effects can be observed:

      • After a restart or failover it takes 0.5 - 3 hours for recovery to complete on all MDSes (this seems to be correlated with the uptime of the MDS)
      • Sometimes only one MDT finishes recovery
      • Often the recovery never completes
      • This is true for all MDSes
      • A large fraction of clients are evicted, leading to a large number of job crashes (up to ~25%)
      • Interestingly, the recovery of the secondary MDTs takes only a couple of minutes and always completes on the failover partner node

      Here are the failover and restart events for MDS node lola-11; the same pattern can be observed on the other nodes:
      Recovery for secondary MDTs on lola-11

      mds_failover     : 2016-03-03 10:24:12,345 - 2016-03-03 10:32:12,647    lola-10
      Mar  3 10:31:58 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 2:14, of 16 clients 0 recovered and 16 were evicted.
      Mar  3 10:32:06 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:20, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-03 18:11:42,958 - 2016-03-03 18:18:17,112    lola-10
      Mar  3 18:18:03 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:03, of 16 clients 0 recovered and 16 were evicted.
      Mar  3 18:18:10 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:08, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-03 22:04:51,554 - 2016-03-03 22:12:03,652    lola-10
      Mar  3 22:11:50 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:36, of 16 clients 0 recovered and 16 were evicted.
      Mar  3 22:11:57 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:22, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-04 00:11:27,161 - 2016-03-04 00:18:36,686    lola-10
      Mar  4 00:18:23 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:23, of 5 clients 0 recovered and 5 were evicted.
      Mar  4 00:18:30 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:23, of 6 clients 0 recovered and 6 were evicted.
      
      mds_failover     : 2016-03-04 01:51:11,775 - 2016-03-04 01:58:40,927    lola-10
      Mar  4 01:58:27 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:41, of 16 clients 0 recovered and 16 were evicted.
      Mar  4 01:58:34 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:23, of 16 clients 0 recovered and 16 were evicted.
      
      mds_failover     : 2016-03-04 02:54:18,928 - 2016-03-04 03:01:00,519    lola-10
      Mar  4 03:00:47 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over after 1:05, of 16 clients 0 recovered and 16 were evicted.
      Mar  4 03:00:54 lola-11 kernel: Lustre: soaked-MDT0004: Recovery over after 0:09, of 16 clients 0 recovered and 16 were evicted.
      

      ------------------
      Recovery for primary MDTs on lola-11

      mds_failover     : 2016-03-03 09:36:44,457 - 2016-03-03 09:43:43,316    lola-11
      Mar  3 09:50:42 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 6:59, of 16 clients 16 recovered and 0 were evicted.
      Mar  3 09:51:14 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 7:31, of 16 clients 8 recovered and 8 were evicted.
      
      mds_failover     : 2016-03-03 13:06:05,210 - 2016-03-03 13:13:33,003    lola-11
      Mar  3 14:13:46 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 40:56, of 16 clients 16 recovered and 0 were evicted.
      Mar  3 14:13:50 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 41:50, of 16 clients 16 recovered and 0 were evicted.
      
      mds_restart      : 2016-03-03 13:26:05,005 - 2016-03-03 13:32:48,359    lola-11
      Mar  3 14:13:46 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 40:56, of 16 clients 16 recovered and 0 were evicted.
      Mar  3 14:13:50 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 41:50, of 16 clients 16 recovered and 0 were evicted.
      
      mds_restart      : 2016-03-03 20:14:23,309 - 2016-03-03 20:24:56,044    lola-11
      Mar  3 20:37:51 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 12:50, of 16 clients 16 recovered and 0 were evicted.
       ---> MDT0007 never recovered
      
      mds_failover     : 2016-03-03 22:15:27,654 - 2016-03-03 22:23:34,982    lola-11
      Mar  4 01:03:03 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 159:29, of 16 clients 14 recovered and 2 were evicted.
      Mar  4 01:03:05 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 159:30, of 16 clients 14 recovered and 2 were evicted.
      
      mds_failover     : 2016-03-04 05:10:37,638 - 2016-03-04 05:17:48,193    lola-11
       ---> MDT0006 never recovered
       ---> MDT0007 never recovered
      
      mds_failover     : 2016-03-04 05:35:12,194 - 2016-03-04 05:41:56,320    lola-11
       ---> MDT0006 never recovered
       ---> MDT0007 never recovered
      
      mds_restart      : 2016-03-04 06:53:30,098 - 2016-03-04 07:03:06,783    lola-11
       ---> MDT0006 never recovered
       ---> MDT0007 never recovered
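
      The 'Recovery over after ...' durations and eviction counts listed above come straight from the MDS syslog/console logs. The following is only a minimal sketch (Python, assuming the exact log line format shown above; file names are examples) of how such a summary can be extracted:

      #!/usr/bin/env python
      # Minimal sketch: summarize "Recovery over after ..." lines from
      # syslog-style MDS logs (format as in the excerpts above).
      import re
      import sys

      # Example line:
      #   Mar  3 10:31:58 lola-11 kernel: Lustre: soaked-MDT0005: Recovery over
      #   after 2:14, of 16 clients 0 recovered and 16 were evicted.
      PATTERN = re.compile(
          r"Lustre: (?P<target>soaked-MDT\d+): Recovery over after "
          r"(?P<min>\d+):(?P<sec>\d+), of (?P<clients>\d+) clients "
          r"(?P<recovered>\d+) recovered and (?P<evicted>\d+) (?:were|was) evicted")

      def summarize(path):
          with open(path) as log:
              for line in log:
                  m = PATTERN.search(line)
                  if not m:
                      continue
                  duration = int(m.group("min")) * 60 + int(m.group("sec"))
                  print("%s: %5ds  %s/%s clients recovered, %s evicted" % (
                      m.group("target"), duration, m.group("recovered"),
                      m.group("clients"), m.group("evicted")))

      if __name__ == "__main__":
          for path in sys.argv[1:]:        # e.g. messages-lola-11.log
              summarize(path)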
      

      Attached are the messages, console and debug log files (with mask '1') of all MDS nodes (lola[8-11]).

      The same situation once ended with the oom-killer being invoked (see LU-7836).

      Attachments

        1. console-lola-10.log.bz2
          506 kB
        2. console-lola-10-log-20160407.bz2
          53 kB
        3. console-lola-11.log.bz2
          563 kB
        4. console-lola-11-log-20160407.bz2
          84 kB
        5. console-lola-8.log.bz2
          723 kB
        6. console-lola-8-log-20160407.bz2
          139 kB
        7. console-lola-9.log.bz2
          650 kB
        8. lola-10_lustre-log.20160414-0312.bz2
          3.18 MB
        9. lola-10-lustre-log-20160304-0751.bz2
          2.14 MB
        10. lola-11_lustre-log.20160414-0312.bz2
          2.22 MB
        11. lola-11-lustre-log-20160304-0751.bz2
          1.44 MB
        12. lola-8_lustre-log.20160414-0312.bz2
          3.66 MB
        13. lola-8-lustre-log-20160304-0751.bz2
          2.58 MB
        14. lola-9-lustre-log-20160304-0751.bz2
          2.13 MB
        15. lustre-log-20160318-0240.bz2
          1.02 MB
        16. messages-lola-10.log.bz2
          370 kB
        17. messages-lola-10.log-20160414.bz2
          84 kB
        18. messages-lola-11.log.bz2
          291 kB
        19. messages-lola-11.log-20160414.bz2
          90 kB
        20. messages-lola-8.log.bz2
          324 kB
        21. messages-lola-8.log-20160414.bz2
          111 kB
        22. messages-lola-9.log.bz2
          367 kB
        23. recovery-times-20160317
          18 kB


          Activity

            [LU-7848] Recovery process on MDS stalled
            di.wang Di Wang added a comment -

            In the newest run, MDT failover seems stuck again. Here is what happened.

            1. lola-9 is restarted, and MDT0001 fails over to lola-8.
            2. Then MDT0001 is re-mounted and tries to connect to MDT0002 on lola-10, but always gets EALREADY.

            LustreError: 11-0: soaked-MDT0002-osp-MDT0001: operation mds_connect to node 192.168.1.110@o2ib10 failed: rc = -114
            LustreError: 11-0: soaked-MDT0002-osp-MDT0001: operation mds_connect to node 192.168.1.110@o2ib10 failed: rc = -114
            

            3. MDT0002 (on lola-10) then denies the new connection because of:

            Lustre: soaked-MDT0002: Client soaked-MDT0001-mdtlov_UUID seen on new nid 192.168.1.108@o2ib10 when existing nid 192.168.1.109@o2ib10 is already connected
            

            Since an MDT always uses the same UUID to connect to other MDTs, I think we need to remove the old export when the target finds the same export connecting from a different NID.
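
            For reference, the server-side connect path involved is target_handle_connect() in ldlm_lib.c (as referenced in the debug excerpt further down in this ticket). The following is only a toy model of the proposed behaviour, not Lustre code; class and parameter names are illustrative:

            # Toy model (illustrative names, not Lustre code) of the proposed
            # connect handling: an export is keyed by the client UUID, and the
            # target remembers the NID the export was created from.
            EALREADY = 114

            class Target(object):
                def __init__(self, name):
                    self.name = name
                    self.exports = {}        # client UUID -> NID of live export

                def handle_connect(self, uuid, nid, evict_stale_nid=False):
                    old_nid = self.exports.get(uuid)
                    if old_nid is not None and old_nid != nid:
                        if not evict_stale_nid:
                            # Behaviour seen in the logs above: the export from
                            # the failed node still exists, so the reconnect
                            # from the failover partner is refused with -114.
                            print("%s: Client %s seen on new nid %s when existing "
                                  "nid %s is already connected"
                                  % (self.name, uuid, nid, old_nid))
                            return -EALREADY
                        # Proposed behaviour: drop the stale export and accept
                        # the connection from the new NID.
                        del self.exports[uuid]
                    self.exports[uuid] = nid
                    return 0

            # MDT0001 first connects from lola-9, then reconnects from lola-8
            # after the failover described above.
            mdt0002 = Target("soaked-MDT0002")
            mdt0002.handle_connect("soaked-MDT0001-mdtlov_UUID", "192.168.1.109@o2ib10")
            rc = mdt0002.handle_connect("soaked-MDT0001-mdtlov_UUID", "192.168.1.108@o2ib10")
            assert rc == -EALREADY       # what the soak logs show (rc = -114)
            rc = mdt0002.handle_connect("soaked-MDT0001-mdtlov_UUID",
                                        "192.168.1.108@o2ib10", evict_stale_nid=True)
            assert rc == 0               # stale export replaced, connect accepted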


            heckes Frank Heckes (Inactive) added a comment -

            Same for umount:

            [root@lola-8 ~]# ps aux | grep umount
            root      21524  0.0  0.0 105184   776 pts/0    D+   08:45   0:00 umount /mnt/soaked-mdt0
            [root@lola-8 ~]# cat /proc/21524/wchan 
            target_stop_recovery_thread[root@lola-8 ~]# 
            
            heckes Frank Heckes (Inactive) added a comment - - edited

            After ~73 hours the recovery process stalled again and led to a continuously increasing allocation of slabs.
            The latter effect is handled in LU-7836. NOTE: All kernel debug, messages and console logs have been attached to LU-7836.

            • Configuration
              All events between 2016-03-17 04:38 and 2016-03-18 02:00 were executed with 'mds_restart' and 'mds_failover + wait for recovery'
              ('wait for recovery' means that the recovery process needs to complete on the secondary node before failback; a minimal sketch of such a wait loop is given below).
              This is worth mentioning because the former failback 'mechanism' configured within the soak framework was to fail back immediately after
              the server target had been mounted successfully on the secondary node. The error actually happens after a 'mds_restart'.
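
            The soak framework's actual implementation of this step isn't shown in the ticket; as a minimal sketch (assuming it runs on the MDS node itself and that lctl is in PATH), 'wait for recovery' amounts to polling mdt.*.recovery_status until every MDT reports COMPLETE, with a timeout so a stalled recovery like the one below is noticed:

            # Minimal sketch (not the soak framework code) of a "wait for
            # recovery" step before failback: poll the recovery_status of all
            # MDTs mounted on this node until each reports COMPLETE.
            import subprocess
            import time

            def recovery_complete():
                out = subprocess.check_output(
                    ["lctl", "get_param", "-n", "mdt.*.recovery_status"])
                statuses = [line.split(":", 1)[1].strip()
                            for line in out.decode().splitlines()
                            if line.strip().startswith("status:")]
                return bool(statuses) and all(s == "COMPLETE" for s in statuses)

            def wait_for_recovery(timeout=3 * 3600, interval=60):
                deadline = time.time() + timeout
                while time.time() < deadline:
                    if recovery_complete():
                        return True
                    time.sleep(interval)
                return False         # recovery stalled, as in this ticket

            if __name__ == "__main__":
                if not wait_for_recovery():
                    raise SystemExit("recovery did not complete; not failing back")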

            Sequence of events

            • 2016-03-18 00:34:30,557:fsmgmt.fsmgmt:INFO triggering fault mds_restart
            • 2016-03-18 00:40:49,655:fsmgmt.fsmgmt:INFO lola-8 is up!!
            • 2016-03-18 00:42:25,091:fsmgmt.fsmgmt:INFO ... soaked-MDT0000 mounted successfully on lola-8

            The MDT recovery stalled after time_remaining reached 0:

            ----------------
            lola-8
            ----------------
            mdt.soaked-MDT0000.recovery_status=
            status: RECOVERING
            recovery_start: 1458286947
            time_remaining: 0
            connected_clients: 12/12
            req_replay_clients: 2
            lock_repay_clients: 4
            completed_clients: 8
            evicted_clients: 0
            replayed_requests: 0
            queued_requests: 2
            next_transno: 167504104141
            

            The recovery process can't be interrupted:

            [root@lola-8 ~]# ps aux -L | grep lctl | grep -v grep 
            root       8740   8740  0.0    1  0.0  15268   732 pts/1    D+   02:38   0:00 lctl --device 5 abort_recovery
            [root@lola-8 ~]# cat /proc/8740/wchan 
            target_stop_recovery_thread
            

            For this event the debug log file lola-8-lustre-log-20160318-0240 has been attached.


            heckes Frank Heckes (Inactive) added a comment -

            Soak has been continued, executing the b2_8 RC5 build with a reformatted Lustre FS.
            There is now only 1 MDT per MDS and 5 OSTs per OSS (unchanged). The MDTs had
            been formatted with ldiskfs and the OSTs with zfs.

            The recovery process no longer stalls, neither for MDS restarts nor for failovers. All
            recovery times are now below 2 minutes. See the attached file recovery-times-20160317.

            di.wang Di Wang added a comment -

            Ah, the recovery process on the remote target is stuck in lu_object_find().

            mdt_out03_021 D 000000000000000e     0 121458      2 0x00000080
             ffff88081be7bb50 0000000000000046 0000000000000000 0000000000000000
             ffff880378ae49c0 ffff88081ec00118 0000cf356d708513 ffff88081ec00118
             ffff88081be7bb50 000000010d91895e ffff8808161145f8 ffff88081be7bfd8
            Call Trace:
             [<ffffffffa088b71d>] lu_object_find_at+0x3d/0xe0 [obdclass]
             [<ffffffffa0acd342>] ? __req_capsule_get+0x162/0x6e0 [ptlrpc]
             [<ffffffff81064c00>] ? default_wake_function+0x0/0x20
             [<ffffffffa0aa2dd0>] ? lustre_swab_object_update_reply+0x0/0xc0 [ptlrpc]
             [<ffffffffa088c52c>] dt_locate_at+0x1c/0xa0 [obdclass]
             [<ffffffffa0b13970>] out_handle+0x1030/0x1880 [ptlrpc]
             [<ffffffff8105872d>] ? check_preempt_curr+0x6d/0x90
             [<ffffffff8152b83e>] ? mutex_lock+0x1e/0x50
             [<ffffffffa0b031ca>] ? req_can_reconstruct+0x6a/0x120 [ptlrpc]
             [<ffffffffa0b0ac2c>] tgt_request_handle+0x8ec/0x1440 [ptlrpc]
             [<ffffffffa0ab7c61>] ptlrpc_main+0xd21/0x1800 [ptlrpc]
             [<ffffffff8152a39e>] ? thread_return+0x4e/0x7d0
             [<ffffffffa0ab6f40>] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
             [<ffffffff8109e78e>] kthread+0x9e/0xc0
             [<ffffffff8100c28a>] child_rip+0xa/0x20
             [<ffffffff8109e6f0>] ? kthread+0x0/0xc0
             [<ffffffff8100c280>] ? child_rip+0x0/0x20
            mdt_out03_022 D 000000000000001c     0 121824      2 0x00000080
             ffff88081c4afb50 0000000000000046 0000000000000000 0000000000000000
             ffff880378ae49c0 ffff8807e4900118 0000d18fc8bbd702 ffff8807e4900118
             ffff88081c4afb50 000000010db905ef ffff8807e0153ad8 ffff88081c4affd8
            Call Trace:
             [<ffffffffa088b71d>] lu_object_find_at+0x3d/0xe0 [obdclass]
             [<ffffffffa0acd342>] ? __req_capsule_get+0x162/0x6e0 [ptlrpc]
             [<ffffffff81064c00>] ? default_wake_function+0x0/0x20
             [<ffffffffa0aa2dd0>] ? lustre_swab_object_update_reply+0x0/0xc0 [ptlrpc]
             [<ffffffffa088c52c>] dt_locate_at+0x1c/0xa0 [obdclass]
             [<ffffffffa0b13970>] out_handle+0x1030/0x1880 [ptlrpc]
             [<ffffffff8105872d>] ? check_preempt_curr+0x6d/0x90
             [<ffffffff8152b83e>] ? mutex_lock+0x1e/0x50
             [<ffffffffa0b031ca>] ? req_can_reconstruct+0x6a/0x120 [ptlrpc]
             [<ffffffffa0b0ac2c>] tgt_request_handle+0x8ec/0x1440 [ptlrpc]
             [<ffffffffa0ab7c61>] ptlrpc_main+0xd21/0x1800 [ptlrpc]
             [<ffffffff8152a39e>] ? thread_return+0x4e/0x7d0
             [<ffffffffa0ab6f40>] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
             [<ffffffff8109e78e>] kthread+0x9e/0xc0
             [<ffffffff8100c28a>] child_rip+0xa/0x20
             [<ffffffff8109e6f0>] ? kthread+0x0/0xc0
             [<ffffffff8100c280>] ? child_rip+0x0/0x20
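
            (A small helper of this kind can speed up scanning the attached console logs for further blocked service threads. It is only a sketch, assuming stack dumps formatted exactly like the two above; it reports, for each D-state thread, the first non-speculative frame it is blocked in.)

            import re
            import sys

            # "mdt_out03_021 D 000000000000000e     0 121458      2 0x00000080"
            TASK_RE = re.compile(r"(\S+)\s+D\s+[0-9a-f]{16}\s")
            # " [<ffffffffa088b71d>] lu_object_find_at+0x3d/0xe0 [obdclass]"
            FRAME_RE = re.compile(r"\[<[0-9a-f]+>\]\s+(\?\s+)?(\S+)")

            def blocked_threads(path):
                current, found = None, {}
                with open(path) as log:
                    for line in log:
                        task = TASK_RE.search(line)
                        if task:
                            current = task.group(1)
                            continue
                        frame = FRAME_RE.search(line)
                        if frame and current and not frame.group(1):
                            # First frame without a '?' is where the thread sits.
                            found.setdefault(current, frame.group(2))
                            current = None
                for name, func in sorted(found.items()):
                    print("%-20s blocked in %s" % (name, func))

            if __name__ == "__main__":
                for path in sys.argv[1:]:    # e.g. console-lola-8.log
                    blocked_threads(path)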
            

            heckes Frank Heckes (Inactive) added a comment -

            The problem also occurs during soak testing of b2_8 RC5 (see https://wiki.hpdd.intel.com/display/Releases/Soak+Testing+on+Lola#SoakTestingonLola-20160309).

            • MDS lola-11 was restarted at random time 2016-03-14 06:09:10,71
            • Recovery of MDT0007 didn't complete even after more than 2 hours
            • Uploaded messages, console and debug log files: messages-lola-11-20160314, console-lola-11-20160314, lola-16-lustre-log-20160314-0736

            heckes Frank Heckes (Inactive) added a comment -

            All files (messages, debug logs) have been copied to lhn.lola.hpdd.intel.com:/scratch/crashdumps/lu-7848. The test session was executed between Mar 9th, 02:52 – 06:19.

            heckes Frank Heckes (Inactive) added a comment - - edited

            The effect still remains:

            mds_restart      : 2016-03-09 03:44:21,690 - 2016-03-09 03:55:17,693    lola-8
            lola-8.log:Mar  9 03:56:08 lola-8 kernel: Lustre: soaked-MDT0000: Recovery over after 0:49, of 16 clients 16 recovered and 0 were evicted.
            lola-8.log:Mar  9 03:56:09 lola-8 kernel: Lustre: soaked-MDT0001: Recovery over after 2:29, of 16 clients 16 recovered and 0 were evicted.
            
            mds_restart      : 2016-03-09 04:54:33,695 - 2016-03-09 05:02:29,259    lola-10
            lola-10.log:Mar  9 05:16:28 lola-10 kernel: Lustre: soaked-MDT0007: Recovery over after 1:42, of 13 clients 0 recovered and 13 were evicted.
            lola-10.log:Mar  9 05:16:44 lola-10 kernel: Lustre: soaked-MDT0006: Recovery over after 0:32, of 7 clients 0 recovered and 7 were evicted.
            
            mds_failover     : 2016-03-09 05:08:46,261 - 2016-03-09 05:16:50,083    lola-11
            lola-11.log:Mar  9 05:47:15 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 30:24, of 10 clients 10 recovered and 0 were evicted.
            lola-11.log:Mar  9 05:47:23 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 30:33, of 13 clients 12 recovered and 1 was evicted.
            

            This is for build https://build.hpdd.intel.com/job/lustre-reviews/37662/
            I'll add message and debug log files asap.

            heckes Frank Heckes (Inactive) added a comment -

            • The MDS nodes weren't updated with the correct builds. I don't know the reason, as I didn't execute the update myself. Anyway, I'm very sorry for that.
            • All nodes (MDSes) have been updated and the soak session has been restarted using build #37662.
            di.wang Di Wang added a comment -

            [3/8/16, 8:05:50 PM] wangdi: it seems the old code is running
            [3/8/16, 8:05:53 PM] wangdi: 00010000:00000001:13.0:1457452176.471578:0:4691:0:(ldlm_lib.c:756:target_handle_reconnect()) Process leaving (rc=18446744073709551502 : -114 : ffffffffffffff8e)
            00010000:02000400:13.0:1457452176.471579:0:4691:0:(ldlm_lib.c:1175:target_handle_connect()) soaked-MDT0003: Client soaked-MDT0006-mdtlov_UUID (at 192.168.1.111@o2ib10) refused connection, still busy with 474 references
            [3/8/16, 8:07:39 PM] wangdi: The debug line number seems not matching the patch http://review.whamcloud.com/#/c/18813/2/
            [3/8/16, 8:07:46 PM] wangdi: it seems it matches b2_8
            [3/8/16, 8:08:43 PM] wangdi: Cliff, Frank: could you please check what is running on lola-8. I hope to use this build https://build.hpdd.intel.com/job/lustre-reviews/37662/
            [3/8/16, 8:08:44 PM] wangdi: thanks


            heckes Frank Heckes (Inactive) added a comment -

            The problem persists after installing the build associated with change #18813.
            There are still long recoveries in which most or all clients recover, and short ones in which no client recovers:

            lola-10.log:Mar  8 02:28:45 lola-10 kernel: Lustre: soaked-MDT0005: Recovery over after 183:52, of 16 clients 16 recovered and 0 were evicted.
            lola-10.log:Mar  8 02:28:46 lola-10 kernel: Lustre: soaked-MDT0004: Recovery over after 183:04, of 16 clients 16 recovered and 0 were evicted.
            lola-10.log:Mar  8 05:01:17 lola-10 kernel: Lustre: soaked-MDT0004: Recovery over after 14:17, of 16 clients 13 recovered and 3 were evicted.
            lola-11.log:Mar  8 02:33:20 lola-11 kernel: Lustre: soaked-MDT0006: Recovery over after 112:56, of 16 clients 16 recovered and 0 were evicted.
            lola-11.log:Mar  8 02:33:21 lola-11 kernel: Lustre: soaked-MDT0007: Recovery over after 113:44, of 16 clients 16 recovered and 0 were evicted.
            lola-8.log:Mar  8 02:27:59 lola-8 kernel: Lustre: soaked-MDT0003: Recovery over after 0:45, of 16 clients 0 recovered and 16 were evicted.
            lola-8.log:Mar  8 02:28:10 lola-8 kernel: Lustre: soaked-MDT0002: Recovery over after 0:26, of 16 clients 0 recovered and 16 were evicted.
            lola-8.log:Mar  8 02:37:39 lola-8 kernel: Lustre: soaked-MDT0000: Recovery over after 63:43, of 16 clients 16 recovered and 0 were evicted.
            lola-9.log:Mar  8 01:32:31 lola-9 kernel: Lustre: soaked-MDT0001: Recovery over after 1:03, of 16 clients 0 recovered and 16 were evicted.
            lola-9.log:Mar  8 01:33:23 lola-9 kernel: Lustre: soaked-MDT0000: Recovery over after 1:10, of 16 clients 0 recovered and 16 were evicted.
            lola-9.log:Mar  8 02:41:56 lola-9 kernel: Lustre: soaked-MDT0003: Recovery over after 13:41, of 16 clients 16 recovered and 0 were evicted.
            lola-9.log:Mar  8 02:41:58 lola-9 kernel: Lustre: soaked-MDT0002: Recovery over after 13:43, of 16 clients 16 recovered and 0 were evicted.
            

            People

              di.wang Di Wang
              heckes Frank Heckes (Inactive)