
LU-2827: mdt_intent_fixup_resent() cannot find the proper lock in hash


    Description

      If a successful reply to an intent lock request is lost, the MDS does not recover correctly when the request is resent.

      The cause appears to lie in the interaction between ldlm_handle_enqueue0() and mdt_intent_fixup_resent():

       int ldlm_handle_enqueue0(struct ldlm_namespace *ns,
                               struct ptlrpc_request *req,
                               const struct ldlm_request *dlm_req,
                               const struct ldlm_callback_suite *cbs)
      {
      ...
              /* The lock's callback data might be set in the policy function */
              lock = ldlm_lock_create(ns, &dlm_req->lock_desc.l_resource.lr_name,
                                      dlm_req->lock_desc.l_resource.lr_type,
                                      dlm_req->lock_desc.l_req_mode,
                                      cbs, NULL, 0);
      ...
               lock->l_export = class_export_lock_get(req->rq_export, lock);
               /* the newly created lock is added to the export lock hash
                * unconditionally, keyed by the client's remote handle */
               if (lock->l_export->exp_lock_hash) {
                       cfs_hash_add(lock->l_export->exp_lock_hash,
                                    &lock->l_remote_handle,
                                    &lock->l_exp_hash);
               }
      ...
              err = ldlm_lock_enqueue(ns, &lock, cookie, &flags);
      ...
      }
      
      static void mdt_intent_fixup_resent(struct mdt_thread_info *info,
                                          struct ldlm_lock *new_lock,
                                          struct ldlm_lock **old_lock,
                                          struct mdt_lock_handle *lh)
      {
              struct ptlrpc_request  *req = mdt_info_req(info);
              struct obd_export      *exp = req->rq_export;
              struct lustre_handle    remote_hdl;
              struct ldlm_request    *dlmreq;
              struct ldlm_lock       *lock;
      
              if (!(lustre_msg_get_flags(req->rq_reqmsg) & MSG_RESENT))
                      return;
      
              dlmreq = req_capsule_client_get(info->mti_pill, &RMF_DLM_REQ);
              remote_hdl = dlmreq->lock_handle[0];
      
               /* a single lookup by remote handle; on a resend this can return
                * the lock just created and hashed for this very request,
                * hiding the already granted lock */
               lock = cfs_hash_lookup(exp->exp_lock_hash, &remote_hdl);
               if (lock) {
                       if (lock != new_lock) {
       ...
       }
      

      On resend, ldlm_handle_enqueue0() adds the new lock to the export lock hash even though a granted lock with the same remote handle is already there. mdt_intent_fixup_resent() then finds the newly added lock in the hash and ignores it (because it is the lock created for this very request), so the old granted lock is never restored. As a result, the resend is processed as a fresh enqueue on the newly created lock, which leads to a deadlock and client eviction.
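
      To illustrate the point, here is a minimal, purely hypothetical sketch (not the patch that eventually landed) of one way the duplicate hash entry could be avoided: on a resent request, check whether a lock with the same remote handle is already hashed before inserting the newly created one, so that a later lookup from mdt_intent_fixup_resent() can only return the old, granted lock. The placement around the existing cfs_hash_add() call is assumed, and cfs_hash_put() is used to drop the reference taken by cfs_hash_lookup().

               /* sketch only, not the landed fix: do not hash the new lock on
                * a resend if an older lock with the same remote handle is
                * already present in the export lock hash */
               if (lock->l_export->exp_lock_hash) {
                       struct ldlm_lock *old = NULL;

                       if (lustre_msg_get_flags(req->rq_reqmsg) & MSG_RESENT)
                               old = cfs_hash_lookup(lock->l_export->exp_lock_hash,
                                                     &lock->l_remote_handle);
                       if (old != NULL) {
                               /* an earlier incarnation of this enqueue is
                                * already hashed; leave the new lock out so the
                                * resend fixup finds the granted lock, and drop
                                * the lookup reference */
                               cfs_hash_put(lock->l_export->exp_lock_hash,
                                            &old->l_exp_hash);
                       } else {
                               cfs_hash_add(lock->l_export->exp_lock_hash,
                                            &lock->l_remote_handle,
                                            &lock->l_exp_hash);
                       }
               }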

      Alexey thinks that the problem has existed since we moved away from the correct code:

      static void fixup_handle_for_resent_req(struct ptlrpc_request *req, int offset,
                                              struct ldlm_lock *new_lock,
                                              struct ldlm_lock **old_lock,
                                              struct lustre_handle *lockh)
      {
              struct obd_export *exp = req->rq_export;
              struct ldlm_request *dlmreq =
                      lustre_msg_buf(req->rq_reqmsg, offset, sizeof(*dlmreq));
              struct lustre_handle remote_hdl = dlmreq->lock_handle[0];
              struct list_head *iter;
      
              if (!(lustre_msg_get_flags(req->rq_reqmsg) & MSG_RESENT))
                      return;
      
              spin_lock(&exp->exp_ldlm_data.led_lock);
              list_for_each(iter, &exp->exp_ldlm_data.led_held_locks) {
                      struct ldlm_lock *lock;
                      lock = list_entry(iter, struct ldlm_lock, l_export_chain);
                      if (lock == new_lock)
                              continue; <==================== N.B.
                      if (lock->l_remote_handle.cookie == remote_hdl.cookie) {
                              lockh->cookie = lock->l_handle.h_cookie;
                              LDLM_DEBUG(lock, "restoring lock cookie");
                              DEBUG_REQ(D_DLMTRACE, req,"restoring lock cookie "LPX64,
                                        lockh->cookie);
                              if (old_lock)
                                      *old_lock = LDLM_LOCK_GET(lock);
                              spin_unlock(&exp->exp_ldlm_data.led_lock);
                              return;
                      }
              }
      ...
      }
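
      As another purely illustrative sketch (again, not the change that was landed), the old "skip the lock created for this resend and keep searching" behaviour could be approximated against the current structures by walking the granted list of the resource named in the client's request instead of relying on a single hash lookup. The helper name below is hypothetical, and the use of ldlm_resource_get() and the lr_granted list here is an assumption about how such a walk could be written, not a description of the actual fix:

       static struct ldlm_lock *
       find_granted_by_remote_handle(struct ldlm_namespace *ns,
                                     const struct ldlm_request *dlmreq,
                                     struct ldlm_lock *new_lock)
       {
               struct ldlm_resource *res;
               struct ldlm_lock *lock;
               struct ldlm_lock *found = NULL;

               /* look the resource up without creating it */
               res = ldlm_resource_get(ns, NULL,
                                       &dlmreq->lock_desc.l_resource.lr_name,
                                       dlmreq->lock_desc.l_resource.lr_type, 0);
               if (IS_ERR_OR_NULL(res))
                       return NULL;

               lock_res(res);
               list_for_each_entry(lock, &res->lr_granted, l_res_link) {
                       if (lock == new_lock)   /* same skip as the old code */
                               continue;
                       if (lock->l_remote_handle.cookie ==
                           dlmreq->lock_handle[0].cookie) {
                               found = LDLM_LOCK_GET(lock);
                               break;
                       }
               }
               unlock_res(res);
               ldlm_resource_putref(res);

               return found;
       }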
      

      Logs for this issue will follow.


          Activity

            bfaccini Bruno Faccini (Inactive) added a comment - - edited

            The merge was done to avoid the same miss as with the original patch, where mdt_intent_layout() had been forgotten because it was not present in the Xyratex source tree at the time.

            BTW, my b2_4 patch/back-port has some problems and needs some rework, because the MDS bombs with "(ldlm_lock.c:851:ldlm_lock_decref_internal_nolock()) ASSERTION( lock->l_readers > 0 ) failed" when running the LLNL reproducer from LU-4584 or recovery-small/test_53 in auto-tests.
            More to come; the crash dump is under investigation...


            morrone Christopher Morrone (Inactive) added a comment -

            We would really, really prefer that you guys not merge together multiple patches when backporting. That makes sanity checking and rebasing quite a bit more complicated for us.


            bfaccini Bruno Faccini (Inactive) added a comment -

            The merged b2_4 backport of both the #5978 and #10378 master changes for this ticket is at http://review.whamcloud.com/10902.

            pjones Peter Jones added a comment -

            Landed for 2.6

            jlevi Jodi Levi (Inactive) added a comment - - edited

            http://review.whamcloud.com/#/c/5978/
            http://review.whamcloud.com/#/c/10378/

            simmonsja James A Simmons added a comment -

            I tested master with the patches as well and got the same results.

            simmonsja James A Simmons added a comment - - edited

            With the patches the evictions are gone, which is good. Now it's just locking up.


            bfaccini Bruno Faccini (Inactive) added a comment -

            I am afraid I am not able to conclude what is going wrong from this MDS Lustre debug log alone. Are there still similar stacks being dumped in dmesg/syslog?

            Also, I did not find any eviction trace in this log.


            simmonsja James A Simmons added a comment -

            An IOR single-shared-file run is producing the messages below. I also have log dumps that I uploaded to

            ftp.whamcloud.com/uploads/LU-2827/lustre-log-LU-2827-29-05-2014

            May 29 18:47:39 atlas-tds-mds1 kernel: [12046.175064] Lustre: 17065:0:(service.c:1349:ptlrpc_at_send_early_reply()) Skipped 8 previous similar messages
            May 29 18:47:45 atlas-tds-mds1 kernel: [12052.188755] Lustre: atlastds-MDT0000: Client 23d4a774-8c3b-4edc-3b98-24d112966233 (at 14@gni2) reconnecting
            May 29 18:47:45 atlas-tds-mds1 kernel: [12052.199379] Lustre: Skipped 1 previous similar message
            May 29 19:00:15 atlas-tds-mds1 kernel: [12801.706037] Lustre: 17829:0:(service.c:1349:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply
            May 29 19:00:15 atlas-tds-mds1 kernel: [12801.706039] req@ffff8807e74adc00 x1468567563329976/t0(0) o101->73ecb9e8-d229-752a-3cfc-2036f509fb5b@50@gni2:0/0 lens 512/3512 e 0 to 0 dl 1401404420 ref 2 fl Interpret:/0/0 rc 0/0
            May 29 19:00:15 atlas-tds-mds1 kernel: [12802.233175] Lustre: 17062:0:(service.c:1349:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/-150), not sending early reply
            May 29 19:00:15 atlas-tds-mds1 kernel: [12802.233176] req@ffff8807e76d6400 x1468567565457596/t0(0) o101->e8d3ca06-b910-df85-78e9-5653469b4bb8@81@gni2:0/0 lens 512/3512 e 0 to 0 dl 1401404420 ref 2 fl Interpret:/0/0 rc 0/0
            May 29 19:00:15 atlas-tds-mds1 kernel: [12802.263339] Lustre: 17062:0:(service.c:1349:ptlrpc_at_send_early_reply()) Skipped 4 previous similar messages
            May 29 19:00:21 atlas-tds-mds1 kernel: [12808.519392] Lustre: atlastds-MDT0000: Client 568959a5-c11b-5ded-e1c9-1c90892a82ca (at 49@gni2) reconnecting


            simmonsja James A Simmons added a comment -

            I tested it first on my smallest test bed, and the two patches that I combined and back-ported to b2_5 did pretty well. So I moved the test to a larger system, and when I went to run my simul reproducer I now get this:

            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.308755] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.317414] mdt02_247 D 0000000000000008 0 17580 2 0x00000000
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.324711] ffff88104d7b98c8 0000000000000046 0000000000000000 ffffffffa0e0f4ab
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.333113] ffff8810719e5310 ffff8810719e52b8 ffff88104d7a5538 ffffffffa0e0f4ab
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.341442] ffff88104d7b7af8 ffff88104d7b9fd8 000000000000fbc8 ffff88104d7b7af8
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.349651] Call Trace:
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.352455] [<ffffffff8152a6d5>] rwsem_down_failed_common+0x95/0x1d0
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.359226] [<ffffffffa0c0c32b>] ? ldiskfs_xattr_trusted_get+0x2b/0x30 [ldiskfs]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.367490] [<ffffffff811ae017>] ? generic_getxattr+0x87/0x90
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.373729] [<ffffffff8152a866>] rwsem_down_read_failed+0x26/0x30
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.380272] [<ffffffffa08b7083>] ? lod_xattr_get+0x153/0x420 [lod]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.386972] [<ffffffff8128eab4>] call_rwsem_down_read_failed+0x14/0x30
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.393901] [<ffffffff81529d64>] ? down_read+0x24/0x30
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.399595] [<ffffffffa0dd949d>] mdt_object_open_lock+0x1ed/0x9d0 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.406680] [<ffffffffa0dbb7ac>] ? mdt_attr_get_complex+0x4ec/0x770 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.413960] [<ffffffffa0de1ac7>] mdt_reint_open+0x15b7/0x2150 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.420672] [<ffffffffa0453f76>] ? upcall_cache_get_entry+0x296/0x880 [libcfs]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.428803] [<ffffffffa05f3700>] ? lu_ucred+0x20/0x30 [obdclass]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.435224] [<ffffffffa0dca441>] mdt_reint_rec+0x41/0xe0 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.441611] [<ffffffffa0dafe63>] mdt_reint_internal+0x4c3/0x780 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.448515] [<ffffffffa0db03ee>] mdt_intent_reint+0x1ee/0x520 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.455274] [<ffffffffa0dadbce>] mdt_intent_policy+0x3ae/0x770 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.462212] [<ffffffffa070e511>] ldlm_lock_enqueue+0x361/0x8c0 [ptlrpc]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.469289] [<ffffffffa0737b9f>] ldlm_handle_enqueue0+0x51f/0x10f0 [ptlrpc]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.476684] [<ffffffffa0dae096>] mdt_enqueue+0x46/0xe0 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.482790] [<ffffffffa0db2c5a>] mdt_handle_common+0x52a/0x1470 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.489687] [<ffffffffa0def765>] mds_regular_handle+0x15/0x20 [mdt]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.496399] [<ffffffffa0767c25>] ptlrpc_server_handle_request+0x385/0xc00 [ptlrpc]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.504762] [<ffffffffa04384ce>] ? cfs_timer_arm+0xe/0x10 [libcfs]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.511492] [<ffffffffa04493cf>] ? lc_watchdog_touch+0x6f/0x170 [libcfs]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.518772] [<ffffffffa075f2a9>] ? ptlrpc_wait_event+0xa9/0x2d0 [ptlrpc]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.525902] [<ffffffff810546b9>] ? __wake_up_common+0x59/0x90
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.532184] [<ffffffffa0768f8d>] ptlrpc_main+0xaed/0x1920 [ptlrpc]
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.545438] [<ffffffff8109ab56>] kthread+0x96/0xa0
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.550656] [<ffffffff8100c20a>] child_rip+0xa/0x20
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.556068] [<ffffffff8109aac0>] ? kthread+0x0/0xa0
            May 29 18:27:03 atlas-tds-mds1 kernel: [10809.561451] [<ffffffff8100c200>] ? child_rip+0x0/0x20

            It's better in that the MDS doesn't assert any more. I also canceled the job, and the system recovered, which is a good sign.


            simmonsja James A Simmons added a comment -

            Here is the b2_5 version of the patch I'm testing with:

            http://review.whamcloud.com/#/c/10492


            People

              yujian Jian Yu
              panda Andrew Perepechko