<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:17:12 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-8399] MDT hung at lu_object_find_at during umount</title>
                <link>https://jira.whamcloud.com/browse/LU-8399</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity-scrub test_1c and test_4a are timing out with a hang during unmount of an MDT. All failures so far are in review-dne.&lt;/p&gt;

&lt;p&gt;From the test_log, we see the test is umounting MDTs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&#8230;
stop mds1
CMD: onyx-33vm7 grep -c /mnt/lustre-mds1&apos; &apos; /proc/mounts
CMD: onyx-33vm7 umount -d /mnt/lustre-mds1
CMD: onyx-33vm7 lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp; lctl dl | grep &apos; ST &apos;
stop mds2
CMD: onyx-33vm3 grep -c /mnt/lustre-mds2&apos; &apos; /proc/mounts
CMD: onyx-33vm3 umount -d /mnt/lustre-mds2
CMD: onyx-33vm3 lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp; lctl dl | grep &apos; ST &apos;
stop mds3
CMD: onyx-33vm7 grep -c /mnt/lustre-mds3&apos; &apos; /proc/mounts
CMD: onyx-33vm7 umount -d /mnt/lustre-mds3
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the console logs of the MDS that is in the process of unmounting an MDT, we see errors during the unmount attempt, followed by this stack trace:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;06:49:24:[12121.531608] Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds3
06:49:24:[12141.673117] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:49:24:[12141.681956] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:49:24:[12141.688142] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:49:24:[12161.692141] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:49:24:[12161.701736] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:49:24:[12161.708145] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:49:24:[12181.712143] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:49:24:[12181.722112] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:49:24:[12181.729016] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:49:24:[12201.733130] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:49:24:[12201.742520] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:49:24:[12201.749824] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:49:24:[12221.754157] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:49:24:[12221.763268] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:49:24:[12221.770918] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:49:24:[12241.776150] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:49:24:[12241.784285] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:49:24:[12241.792494] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:49:24:[12261.798128] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:54:55:[12261.805202] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:54:55:[12261.813777] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:54:55:[12301.819116] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:54:55:[12301.825881] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) Skipped 1 previous similar message
06:54:55:[12301.831998] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:54:55:[12301.840643] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) Skipped 1 previous similar message
06:54:55:[12301.843920] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
06:54:55:[12301.849581] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) Skipped 1 previous similar message
06:54:55:[12360.587118] INFO: task osp_up0-2:22450 blocked for more than 120 seconds.
06:54:55:[12360.592257] &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.
06:54:55:[12360.595199] osp_up0-2       D ffffc90004019000     0 22450      2 0x00000080
06:54:55:[12360.597925]  ffff88007a277a98 0000000000000046 ffff8800441b8000 ffff88007a277fd8
06:54:55:[12360.600858]  ffff88007a277fd8 ffff88007a277fd8 ffff8800441b8000 ffff88004435c240
06:54:55:[12360.603645]  ffff8800435be000 ffff880042fd6020 0000000000000000 ffffc90004019000
06:54:55:[12360.606403] Call Trace:
06:54:55:[12360.608738]  [&amp;lt;ffffffff8163ba29&amp;gt;] schedule+0x29/0x70
06:54:55:[12360.611237]  [&amp;lt;ffffffffa07f797d&amp;gt;] lu_object_find_at+0x4d/0xe0 [obdclass]
06:54:55:[12360.613840]  [&amp;lt;ffffffff810b88c0&amp;gt;] ? wake_up_state+0x20/0x20
06:54:55:[12360.616305]  [&amp;lt;ffffffffa07f7e3f&amp;gt;] lu_object_find_slice+0x1f/0x90 [obdclass]
06:54:55:[12360.618842]  [&amp;lt;ffffffffa0f7c5f4&amp;gt;] osp_trans_stop_cb+0x1b4/0x2c0 [osp]
06:54:55:[12360.621315]  [&amp;lt;ffffffffa0f7eeeb&amp;gt;] osp_update_interpret+0x21b/0x4b0 [osp]
06:54:55:[12360.623814]  [&amp;lt;ffffffffa0a0d725&amp;gt;] ptlrpc_check_set.part.23+0x425/0x1dd0 [ptlrpc]
06:54:55:[12360.626273]  [&amp;lt;ffffffff8108bf50&amp;gt;] ? internal_add_timer+0x70/0x70
06:54:55:[12360.628649]  [&amp;lt;ffffffffa0a0f12b&amp;gt;] ptlrpc_check_set+0x5b/0xe0 [ptlrpc]
06:54:55:[12360.631038]  [&amp;lt;ffffffffa0a0f6a1&amp;gt;] ptlrpc_set_wait+0x4f1/0x900 [ptlrpc]
06:54:55:[12360.633299]  [&amp;lt;ffffffff810b88c0&amp;gt;] ? wake_up_state+0x20/0x20
06:54:55:[12360.635543]  [&amp;lt;ffffffffa0a0fb2d&amp;gt;] ptlrpc_queue_wait+0x7d/0x220 [ptlrpc]
06:54:55:[12360.637787]  [&amp;lt;ffffffffa0f7f6b2&amp;gt;] osp_send_update_req+0x1c2/0x830 [osp]
06:54:55:[12360.640042]  [&amp;lt;ffffffffa0f80573&amp;gt;] osp_send_update_thread+0x233/0x5e0 [osp]
06:54:55:[12360.642227]  [&amp;lt;ffffffff810b88c0&amp;gt;] ? wake_up_state+0x20/0x20
06:54:55:[12360.644346]  [&amp;lt;ffffffffa0f80340&amp;gt;] ? osp_invalidate_request+0x370/0x370 [osp]
06:54:55:[12360.646486]  [&amp;lt;ffffffff810a5aef&amp;gt;] kthread+0xcf/0xe0
06:54:55:[12360.648492]  [&amp;lt;ffffffff810a5a20&amp;gt;] ? kthread_create_on_node+0x140/0x140
06:54:55:[12360.650536]  [&amp;lt;ffffffff816469d8&amp;gt;] ret_from_fork+0x58/0x90
06:54:55:[12360.652551]  [&amp;lt;ffffffff810a5a20&amp;gt;] ? kthread_create_on_node+0x140/0x140
06:54:55:[12372.704108] Lustre: 6959:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1468417841/real 1468417841]  req@ffff880054be9500 x1539734990553872/t0(0) o250-&amp;gt;MGC10.2.4.126@tcp@0@lo:26/25 lens 520/544 e 0 to 1 dl 1468417866 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
06:54:55:[12372.714842] Lustre: 6959:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 46 previous similar messages
06:54:55:[12381.852134] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: rc = -110 waiting for callback (1 != 0)
06:54:55:[12381.861581] LustreError: 23989:0:(import.c:338:ptlrpc_invalidate_import()) Skipped 3 previous similar messages
06:54:55:[12381.864156] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) @@@ still on delayed list  req@ffff880054be8300 x1539734990549200/t0(0) o1000-&amp;gt;lustre-MDT0000-osp-MDT0002@0@lo:24/4 lens 488/192 e 0 to 0 dl 1468417620 ref 2 fl Interpret:ES/0/0 rc -5/-107
06:54:55:[12381.870152] LustreError: 23989:0:(import.c:372:ptlrpc_invalidate_import()) Skipped 3 previous similar messages
06:54:55:[12381.872522] LustreError: 23989:0:(import.c:378:ptlrpc_invalidate_import()) lustre-MDT0000_UUID: Unregistering RPCs found (0). Network is sluggish? Waiting them to error out.
&#8230;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;sanity-scrub test_1c started failing with this error on July 13, 2016. Logs for some of these failures are at:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/c22c1f38-48c5-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/c22c1f38-48c5-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1bf26ff2-48eb-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1bf26ff2-48eb-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/055ea44e-4900-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/055ea44e-4900-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;sanity-scrub test_4a started failing with this error on July 13, 2016. Logs for some of these failures are at:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a3238bf4-48d8-11e6-9f8e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a3238bf4-48d8-11e6-9f8e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/61c003cc-492d-11e6-8968-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/61c003cc-492d-11e6-8968-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/cdb52f20-492e-11e6-8968-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/cdb52f20-492e-11e6-8968-5254006e85c2&lt;/a&gt;&lt;/p&gt;</description>
                <environment>autotest review-dne</environment>
        <key id="38179">LU-8399</key>
            <summary>MDT hung at lu_object_find_at during umount</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Thu, 14 Jul 2016 16:16:57 +0000</created>
                <updated>Sat, 6 Aug 2016 08:06:15 +0000</updated>
                            <resolved>Sat, 6 Aug 2016 08:06:15 +0000</resolved>
                                    <version>Lustre 2.9.0</version>
                                    <fixVersion>Lustre 2.9.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="158929" author="sbuisson" created="Fri, 15 Jul 2016 07:27:25 +0000"  >&lt;p&gt;Another occurrence of this failure in sanity-scrub test 1c on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/3178d7a6-4a0a-11e6-8968-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/3178d7a6-4a0a-11e6-8968-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="158930" author="yujian" created="Fri, 15 Jul 2016 07:29:18 +0000"  >&lt;p&gt;This is blocking patch review testing on master branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/1d28e912-49ec-11e6-9f8e-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/1d28e912-49ec-11e6-9f8e-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/d519d1f0-49ef-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/d519d1f0-49ef-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/50cc4684-49eb-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/50cc4684-49eb-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="158931" author="yong.fan" created="Fri, 15 Jul 2016 08:47:47 +0000"  >&lt;p&gt;According to the logs, it is not the OI scrub logic that blocked lu_object_find(); instead, the hang happened before the OI scrub. To test the OI scrub logic, we need to generate some files and directories. For the DNE case, we will:&lt;br/&gt;
1) create some striped directories via scrub_prep();&lt;br/&gt;
2) then stop all the MDTs;&lt;br/&gt;
3) and then mount the MDTs in &quot;ldiskfs&quot; mode to back up/restore.&lt;/p&gt;

&lt;p&gt;The hang happened at the second step. Because MDT1 is unmounted before MDT2 (or MDT3) has flushed its async update OUT RPC, MDT2 (or MDT3) gets a failure, and osp_trans_stop_cb() then tries to purge the related OSP object attribute cache, which requires locating the related object first. But we are unmounting MDT2 (or MDT3) at that time, so the related object is marked to be purged from RAM. On the other hand, someone may be referencing the object and waiting for the transaction callback. Since the transaction callback is blocked, we deadlock.&lt;/p&gt;

&lt;p&gt;In fact, if the object is being purged out of RAM, osp_invalidate() needs to do nothing. So we can use a non-blocking method to locate the related object in osp_invalidate() to avoid the deadlock.&lt;/p&gt;</comment>
                            <comment id="158932" author="gerrit" created="Fri, 15 Jul 2016 08:48:06 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/21330&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21330&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8399&quot; title=&quot;MDT hung at lu_object_find_at during umount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8399&quot;&gt;&lt;del&gt;LU-8399&lt;/del&gt;&lt;/a&gt; osp: non-blocked osp_invalidate&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 2cc1f711f29f5c3f82829b414561e083adaf8c12&lt;/p&gt;</comment>
                            <comment id="159386" author="adilger" created="Wed, 20 Jul 2016 19:12:48 +0000"  >&lt;p&gt;It appears that this problem is related to the patch &lt;a href=&quot;http://review.whamcloud.com/18801&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/18801&lt;/a&gt; &quot;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7782&quot; title=&quot;sanity-scrub test_2: NULL pointer dereference at 0x10 in lu_context_key_get() on mds2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7782&quot;&gt;&lt;del&gt;LU-7782&lt;/del&gt;&lt;/a&gt; scrub: handle slave obj of striped directory&quot;.&lt;/p&gt;</comment>
                            <comment id="159397" author="green" created="Wed, 20 Jul 2016 19:36:04 +0000"  >&lt;p&gt;I reopened &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7782&quot; title=&quot;sanity-scrub test_2: NULL pointer dereference at 0x10 in lu_context_key_get() on mds2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7782&quot;&gt;&lt;del&gt;LU-7782&lt;/del&gt;&lt;/a&gt; and reverted that patch, so we probably should close this as a dup of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7782&quot; title=&quot;sanity-scrub test_2: NULL pointer dereference at 0x10 in lu_context_key_get() on mds2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7782&quot;&gt;&lt;del&gt;LU-7782&lt;/del&gt;&lt;/a&gt;.&lt;br/&gt;
The patch from here + whatever necessary follow-on fixes need to be rolled into a new &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7782&quot; title=&quot;sanity-scrub test_2: NULL pointer dereference at 0x10 in lu_context_key_get() on mds2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7782&quot;&gt;&lt;del&gt;LU-7782&lt;/del&gt;&lt;/a&gt; patch in one form or another, I guess.&lt;/p&gt;</comment>
                            <comment id="159398" author="pjones" created="Wed, 20 Jul 2016 19:37:50 +0000"  >&lt;p&gt;makes sense to me&lt;/p&gt;</comment>
                            <comment id="159555" author="yong.fan" created="Thu, 21 Jul 2016 23:46:59 +0000"  >&lt;p&gt;We hit this again after the reversion of the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7782&quot; title=&quot;sanity-scrub test_2: NULL pointer dereference at 0x10 in lu_context_key_get() on mds2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7782&quot;&gt;&lt;del&gt;LU-7782&lt;/del&gt;&lt;/a&gt; patch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/47596800-4f5b-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/47596800-4f5b-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems that the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7782&quot; title=&quot;sanity-scrub test_2: NULL pointer dereference at 0x10 in lu_context_key_get() on mds2&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7782&quot;&gt;&lt;del&gt;LU-7782&lt;/del&gt;&lt;/a&gt; patch makes the issue relatively easy to reproduce, but it is not the root cause.&lt;/p&gt;</comment>
                            <comment id="159562" author="yujian" created="Fri, 22 Jul 2016 05:28:38 +0000"  >&lt;p&gt;One more instance on master branch: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/55571e7a-4fa6-11e6-bf87-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/55571e7a-4fa6-11e6-bf87-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="159677" author="yong.fan" created="Sun, 24 Jul 2016 12:49:28 +0000"  >&lt;p&gt;We still need the patch:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/21330&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21330&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="159880" author="bogl" created="Tue, 26 Jul 2016 14:48:08 +0000"  >&lt;p&gt;another on master, in replay-single test_100b:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/b2da040a-5306-11e6-8968-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/b2da040a-5306-11e6-8968-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="160682" author="yujian" created="Wed, 3 Aug 2016 15:40:07 +0000"  >&lt;p&gt;+1 on master, in replay-single test_100b:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/613c47b2-594f-11e6-b5b1-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/613c47b2-594f-11e6-b5b1-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="161017" author="gerrit" created="Sat, 6 Aug 2016 06:23:58 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/21330/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/21330/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8399&quot; title=&quot;MDT hung at lu_object_find_at during umount&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8399&quot;&gt;&lt;del&gt;LU-8399&lt;/del&gt;&lt;/a&gt; osp: direct reference object to be invalidate&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 63ef1b3bb8a0598ea2b2c70b5ad0550680723ab8&lt;/p&gt;</comment>
                            <comment id="161030" author="yong.fan" created="Sat, 6 Aug 2016 08:06:15 +0000"  >&lt;p&gt;The patch has been landed to master.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="34728">LU-7782</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="33420">LU-7513</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzyhhz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>