<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:52:39 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
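A request restricted this way could be assembled as in the sketch below. The issue-XML view path shown is the common JIRA pattern for this kind of export; treat the exact URL as an assumption and adjust it for your instance:

```shell
# Hypothetical request URL for this feed's issue (LU-5573), restricted to
# the 'key' and 'summary' fields by repeating the 'field' parameter.
BASE="https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-5573/LU-5573.xml"
URL="${BASE}?field=key&field=summary"
echo "$URL"   # prints the assembled URL; fetch it with e.g. curl -s "$URL"
```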
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5573] Test timeout conf-sanity test_41c</title>
                <link>https://jira.whamcloud.com/browse/LU-5573</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for Nathaniel Clark &amp;lt;nathaniel.l.clark@intel.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4e98188c-1fe2-11e4-8610-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4e98188c-1fe2-11e4-8610-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/6798b742-32d9-11e4-aefc-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/6798b742-32d9-11e4-aefc-5254006e85c2&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/a8565b52-3286-11e4-aefc-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/a8565b52-3286-11e4-aefc-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The sub-test test_41c failed with the following error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;test failed to respond and timed out&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Info required for matching: conf-sanity 41c&lt;/p&gt;</description>
                <environment></environment>
        <key id="26271">LU-5573</key>
            <summary>Test timeout conf-sanity test_41c</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="bfaccini">Bruno Faccini</assignee>
                                    <reporter username="maloo">Maloo</reporter>
                        <labels>
                            <label>MB</label>
                    </labels>
                <created>Tue, 2 Sep 2014 18:42:59 +0000</created>
                <updated>Fri, 1 May 2015 17:19:47 +0000</updated>
                            <resolved>Mon, 27 Oct 2014 12:49:35 +0000</resolved>
                                    <version>Lustre 2.7.0</version>
                                    <fixVersion>Lustre 2.7.0</fixVersion>
                    <fixVersion>Lustre 2.5.4</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                        <comments>
                            <comment id="93042" author="utopiabound" created="Tue, 2 Sep 2014 21:09:46 +0000"  >&lt;p&gt;This test was introduced in patch &lt;a href=&quot;http://review.whamcloud.com/11139&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/11139&lt;/a&gt; and also failed it during autotest:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/4e98188c-1fe2-11e4-8610-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/4e98188c-1fe2-11e4-8610-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="93065" author="bfaccini" created="Tue, 2 Sep 2014 23:47:37 +0000"  >&lt;p&gt;Hello Nathaniel,&lt;br/&gt;
I believe that &quot;https://testing.hpdd.intel.com/test_sets/4e98188c-1fe2-11e4-8610-5254006e85c2&quot; was a different problem (an LBUG!) that only occurred while I was trying to introduce a delay to better ensure a race between multiple concurrent MDT/OST mounts.&lt;/p&gt;

&lt;p&gt;BTW, it seems that the way I try to delay the first mount still has other side effects, causing these new failures in auto-tests.&lt;br/&gt;
I think I will need to remove it and simply run the new mount attempt as quickly as possible to trigger the race.&lt;/p&gt;

&lt;p&gt;Will push a patch soon.&lt;/p&gt;</comment>
                            <comment id="93177" author="bfaccini" created="Thu, 4 Sep 2014 07:59:50 +0000"  >&lt;p&gt;Even though extensive testing of my patch for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5299&quot; title=&quot;osd_start() LBUG when doing parallel mount of the same target&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5299&quot;&gt;&lt;del&gt;LU-5299&lt;/del&gt;&lt;/a&gt; did not show any problem, it seems that under some auto-test conditions/configurations the conf-sanity/test_41c sub-test, which was introduced by that patch, triggers quite frequent MDS/OSS LBUGs.&lt;/p&gt;</comment>
                            <comment id="93178" author="bfaccini" created="Thu, 4 Sep 2014 08:34:23 +0000"  >&lt;p&gt;I checked conf-sanity/test_41c failures since my patch landed and found about 15 failures in roughly a month. The less frequent MDS LBUG during concurrent MDT mounts/starts in test_41c always looks like:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;19:58:50:Lustre: DEBUG MARKER: == conf-sanity test 41c: concurent mounts of MDT/OST should all fail but one == 01:58:15 (1409623095)
19:58:50:Lustre: DEBUG MARKER: grep -c /mnt/mds1&apos; &apos; /proc/mounts
19:58:50:Lustre: DEBUG MARKER: lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp; lctl dl | grep &apos; ST &apos;
19:58:50:Lustre: DEBUG MARKER: lctl set_param fail_loc=0x703
19:58:50:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1
19:58:50:Lustre: DEBUG MARKER: test -b /dev/lvm-Role_MDS/P1
19:58:50:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1; mount -t lustre   		                   /dev/lvm-Role_MDS/P1 /mnt/mds1
19:58:50:LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. quota=on. Opts: 
19:58:50:LustreError: 27017:0:(fail.c:133:__cfs_fail_timeout_set()) cfs_fail_timeout id 703 sleeping for 40000ms
19:58:50:Lustre: DEBUG MARKER: lctl set_param fail_loc=0x0
19:58:50:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1
19:58:50:Lustre: DEBUG MARKER: test -b /dev/lvm-Role_MDS/P1
19:58:50:Lustre: DEBUG MARKER: mkdir -p /mnt/mds1; mount -t lustre   		                   /dev/lvm-Role_MDS/P1 /mnt/mds1
19:58:50:LustreError: 15d-9: The MGS service was already started from server
19:58:50:LustreError: 27234:0:(obd_mount_server.c:865:lustre_disconnect_lwp()) lustre-MDT0000-lwp-MDT0000: Can&apos;t end config log lustre-client.
19:58:50:LustreError: 27234:0:(obd_mount_server.c:1443:server_put_super()) lustre-MDT0000: failed to disconnect lwp. (rc=-2)
19:58:50:LustreError: 27234:0:(obd_mount_server.c:1473:server_put_super()) no obd lustre-MDT0000
19:58:50:LustreError: 27234:0:(obd_mount_server.c:135:server_deregister_mount()) lustre-MDT0000 not registered
19:58:50:Lustre: MGS: Not available for connect from 0@lo (stopping)
19:58:50:LustreError: 26988:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff88006abf3400 x1478095423936308/t0(0) o253-&amp;gt;MGC10.1.6.34@tcp@0@lo:26/25 lens 4768/4768 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
19:58:50:LustreError: 26988:0:(obd_mount_server.c:1140:server_register_target()) lustre-MDT0000: error registering with the MGS: rc = -5 (not fatal)
19:58:50:LustreError: 26988:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880069a1f400 x1478095423936316/t0(0) o101-&amp;gt;MGC10.1.6.34@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
19:58:50:LustreError: 26988:0:(client.c:1052:ptlrpc_import_delay_req()) @@@ send limit expired   req@ffff880069a1f800 x1478095423936324/t0(0) o101-&amp;gt;MGC10.1.6.34@tcp@0@lo:26/25 lens 328/344 e 0 to 0 dl 0 ref 2 fl Rpc:W/0/ffffffff rc 0/-1
19:58:50:LustreError: 15c-8: MGC10.1.6.34@tcp: The configuration from log &apos;lustre-MDT0000&apos; failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
19:58:50:LustreError: 26988:0:(obd_mount_server.c:1274:server_start_targets()) failed to start server lustre-MDT0000: -5
19:58:50:LustreError: 26988:0:(obd_mount_server.c:1716:server_fill_super()) Unable to start targets: -5
19:58:50:LustreError: 26988:0:(obd_mount_server.c:865:lustre_disconnect_lwp()) lustre-MDT0000-lwp-MDT0000: Can&apos;t end config log lustre-client.
19:58:50:LustreError: 26988:0:(obd_mount_server.c:1443:server_put_super()) lustre-MDT0000: failed to disconnect lwp. (rc=-2)
19:59:12:LustreError: 26988:0:(obd_mount_server.c:1473:server_put_super()) no obd lustre-MDT0000
19:59:12:LustreError: 26988:0:(obd_config.c:626:class_cleanup()) OBD 1 already stopping
19:59:12:LustreError: 26988:0:(obd_config.c:585:class_detach()) OBD device 1 still set up
19:59:12:LustreError: 26988:0:(obd_mount.c:1289:lustre_fill_super()) Unable to mount /dev/mapper/lvm--Role_MDS-P1 (-5)
19:59:12:Lustre: DEBUG MARKER: lctl set_param -n mdt.lustre*.enable_remote_dir=1
19:59:12:LustreError: 27017:0:(fail.c:137:__cfs_fail_timeout_set()) cfs_fail_timeout id 703 awake
19:59:12:LustreError: 27017:0:(obd_class.h:1008:obd_connect()) Device 1 not setup
19:59:12:Lustre: 27017:0:(service.c:2031:ptlrpc_server_handle_request()) @@@ Request took longer than estimated (20:20s); client may timeout.  req@ffff880058849050 x1478095423936304/t0(0) o250-&amp;gt;&amp;lt;?&amp;gt;@&amp;lt;?&amp;gt;:0/0 lens 400/264 e 0 to 0 dl 1409623123 ref 1 fl Complete:/0/0 rc -19/-19
19:59:12:LNet: Service thread pid 27017 stopped after 40.00s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
19:59:12:LustreError: 27234:0:(genops.c:1570:obd_exports_barrier()) ASSERTION( list_empty(&amp;amp;obd-&amp;gt;obd_exports) ) failed: 
19:59:12:LustreError: 27234:0:(genops.c:1570:obd_exports_barrier()) LBUG
19:59:12:Pid: 27234, comm: mount.lustre
19:59:12:
19:59:12:Call Trace:
19:59:12: [&amp;lt;ffffffffa0b29895&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
19:59:12: [&amp;lt;ffffffffa0b29e97&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
19:59:12: [&amp;lt;ffffffffa0c5794a&amp;gt;] obd_exports_barrier+0x16a/0x170 [obdclass]
19:59:12: [&amp;lt;ffffffffa0474a06&amp;gt;] mgs_device_fini+0xf6/0x5a0 [mgs]
19:59:12: [&amp;lt;ffffffff81510ee2&amp;gt;] ? _spin_lock+0x12/0x30
19:59:12: [&amp;lt;ffffffffa0c84c07&amp;gt;] class_cleanup+0x577/0xda0 [obdclass]
19:59:12: [&amp;lt;ffffffffa0c59b36&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
19:59:12: [&amp;lt;ffffffffa0c864ec&amp;gt;] class_process_config+0x10bc/0x1c80 [obdclass]
19:59:12: [&amp;lt;ffffffffa0b34d98&amp;gt;] ? libcfs_log_return+0x28/0x40 [libcfs]
19:59:12: [&amp;lt;ffffffffa0c7fd51&amp;gt;] ? lustre_cfg_new+0x391/0x7e0 [obdclass]
19:59:12: [&amp;lt;ffffffffa0c87229&amp;gt;] class_manual_cleanup+0x179/0x6f0 [obdclass]
19:59:12: [&amp;lt;ffffffffa0c59b36&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
19:59:12: [&amp;lt;ffffffffa0cbc0cd&amp;gt;] server_put_super+0x46d/0xf00 [obdclass]
19:59:12: [&amp;lt;ffffffffa0cc0a88&amp;gt;] server_fill_super+0x668/0x1580 [obdclass]
19:59:12: [&amp;lt;ffffffffa0c91958&amp;gt;] lustre_fill_super+0x1d8/0x530 [obdclass]
19:59:12: [&amp;lt;ffffffffa0c91780&amp;gt;] ? lustre_fill_super+0x0/0x530 [obdclass]
19:59:12: [&amp;lt;ffffffff811845df&amp;gt;] get_sb_nodev+0x5f/0xa0
19:59:12: [&amp;lt;ffffffffa0c89135&amp;gt;] lustre_get_sb+0x25/0x30 [obdclass]
19:59:12: [&amp;lt;ffffffff81183c1b&amp;gt;] vfs_kern_mount+0x7b/0x1b0
19:59:12: [&amp;lt;ffffffff81183dc2&amp;gt;] do_kern_mount+0x52/0x130
19:59:12: [&amp;lt;ffffffff811a3f82&amp;gt;] do_mount+0x2d2/0x8d0
19:59:12: [&amp;lt;ffffffff811a4610&amp;gt;] sys_mount+0x90/0xe0
19:59:12: [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and the more frequent OSS LBUG during concurrent OST mounts/starts in test_41c always looks like:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;04:19:32:Lustre: DEBUG MARKER: == conf-sanity test 41c: concurent mounts of MDT/OST should all fail but one == 10:19:01 (1409739541)
04:19:32:Lustre: DEBUG MARKER: grep -c /mnt/ost1&apos; &apos; /proc/mounts
04:19:32:Lustre: DEBUG MARKER: lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp; lctl dl | grep &apos; ST &apos;
04:19:32:Lustre: DEBUG MARKER: ! zpool list -H lustre-ost1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
04:19:32:			grep -q ^lustre-ost1/ /proc/mounts ||
04:19:32:			zpool export  lustre-ost1
04:19:32:Lustre: DEBUG MARKER: lctl set_param fail_loc=0x703
04:19:32:Lustre: DEBUG MARKER: mkdir -p /mnt/ost1
04:19:32:Lustre: DEBUG MARKER: zpool list -H lustre-ost1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
04:19:32:			zpool import -f -o cachefile=none -d /dev/lvm-Role_OSS lustre-ost1
04:19:32:Lustre: DEBUG MARKER: lctl set_param fail_loc=0x0
04:19:32:Lustre: DEBUG MARKER: mkdir -p /mnt/ost1
04:19:32:Lustre: DEBUG MARKER: zpool list -H lustre-ost1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||
04:19:32:			zpool import -f -o cachefile=none -d /dev/lvm-Role_OSS lustre-ost1
04:19:32:Lustre: DEBUG MARKER: mkdir -p /mnt/ost1; mount -t lustre   		                   lustre-ost1/ost1 /mnt/ost1
04:19:32:Lustre: DEBUG MARKER: mkdir -p /mnt/ost1; mount -t lustre   		                   lustre-ost1/ost1 /mnt/ost1
04:19:32:LustreError: 31255:0:(obd_mount_server.c:1753:server_fill_super()) Unable to start osd on lustre-ost1/ost1: -114
04:19:32:LustreError: 31255:0:(obd_mount.c:1340:lustre_fill_super()) Unable to mount  (-114)
04:19:32:LustreError: 31163:0:(llog_osd.c:918:llog_osd_open()) ASSERTION( dt ) failed: 
04:19:32:LustreError: 31163:0:(llog_osd.c:918:llog_osd_open()) LBUG
04:19:32:Pid: 31163, comm: mount.lustre
04:19:32:
04:19:32:Call Trace:
04:19:32: [&amp;lt;ffffffffa05ef895&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
04:19:32: [&amp;lt;ffffffffa05efe97&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
04:19:32: [&amp;lt;ffffffffa071c574&amp;gt;] llog_osd_open+0x844/0xb30 [obdclass]
04:19:32: [&amp;lt;ffffffffa0600181&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
04:19:32: [&amp;lt;ffffffffa070a1da&amp;gt;] llog_open+0xba/0x2c0 [obdclass]
04:19:32: [&amp;lt;ffffffffa070dd01&amp;gt;] llog_backup+0x61/0x500 [obdclass]
04:19:32: [&amp;lt;ffffffff8128daa0&amp;gt;] ? sprintf+0x40/0x50
04:19:32: [&amp;lt;ffffffffa0ee581d&amp;gt;] mgc_process_log+0x12fd/0x1970 [mgc]
04:19:32: [&amp;lt;ffffffffa0edf260&amp;gt;] ? mgc_blocking_ast+0x0/0x810 [mgc]
04:19:32: [&amp;lt;ffffffffa096d710&amp;gt;] ? ldlm_completion_ast+0x0/0x930 [ptlrpc]
04:19:32: [&amp;lt;ffffffffa0ee6da8&amp;gt;] mgc_process_config+0x658/0x1210 [mgc]
04:19:32: [&amp;lt;ffffffffa0750a1f&amp;gt;] lustre_process_log+0x20f/0xad0 [obdclass]
04:19:32: [&amp;lt;ffffffffa0600181&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
04:19:32: [&amp;lt;ffffffffa05fa3a8&amp;gt;] ? libcfs_log_return+0x28/0x40 [libcfs]
04:19:32: [&amp;lt;ffffffffa07857e7&amp;gt;] server_start_targets+0x767/0x1af0 [obdclass]
04:19:32: [&amp;lt;ffffffffa05fa3a8&amp;gt;] ? libcfs_log_return+0x28/0x40 [libcfs]
04:19:32: [&amp;lt;ffffffffa0754246&amp;gt;] ? lustre_start_mgc+0x4b6/0x1e00 [obdclass]
04:19:32: [&amp;lt;ffffffffa0600181&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
04:19:32: [&amp;lt;ffffffffa074bfa0&amp;gt;] ? class_config_llog_handler+0x0/0x18c0 [obdclass]
04:19:32: [&amp;lt;ffffffffa078aa35&amp;gt;] server_fill_super+0xc95/0x1740 [obdclass]
04:19:32: [&amp;lt;ffffffffa05fa3a8&amp;gt;] ? libcfs_log_return+0x28/0x40 [libcfs]
04:19:32: [&amp;lt;ffffffffa0755d68&amp;gt;] lustre_fill_super+0x1d8/0x550 [obdclass]
04:19:32: [&amp;lt;ffffffffa0755b90&amp;gt;] ? lustre_fill_super+0x0/0x550 [obdclass]
04:19:32: [&amp;lt;ffffffff8118c5df&amp;gt;] get_sb_nodev+0x5f/0xa0
04:19:32: [&amp;lt;ffffffffa074d965&amp;gt;] lustre_get_sb+0x25/0x30 [obdclass]
04:19:32: [&amp;lt;ffffffff8118bc3b&amp;gt;] vfs_kern_mount+0x7b/0x1b0
04:19:32: [&amp;lt;ffffffff8118bde2&amp;gt;] do_kern_mount+0x52/0x130
04:19:32: [&amp;lt;ffffffff8119e9e2&amp;gt;] ? vfs_ioctl+0x22/0xa0
04:19:32: [&amp;lt;ffffffff811ad7bb&amp;gt;] do_mount+0x2fb/0x930
04:19:32: [&amp;lt;ffffffff811ade80&amp;gt;] sys_mount+0x90/0xe0
04:19:32: [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
04:19:32:
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="95224" author="bfaccini" created="Mon, 29 Sep 2014 18:12:57 +0000"  >&lt;p&gt;Master patch, to strengthen previous change for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5299&quot; title=&quot;osd_start() LBUG when doing parallel mount of the same target&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5299&quot;&gt;&lt;del&gt;LU-5299&lt;/del&gt;&lt;/a&gt; against the same issues/races during Server devices concurrent mounts/starts, is at &lt;a href=&quot;http://review.whamcloud.com/12114&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12114&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="95894" author="yujian" created="Wed, 8 Oct 2014 01:32:06 +0000"  >&lt;p&gt;One more instance on master branch:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/d687f1f4-4e81-11e4-ae94-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/d687f1f4-4e81-11e4-ae94-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="96272" author="jlevi" created="Mon, 13 Oct 2014 21:38:00 +0000"  >&lt;p&gt;Li Wei is verifying this fix.&lt;/p&gt;</comment>
                            <comment id="96778" author="yujian" created="Tue, 21 Oct 2014 01:35:03 +0000"  >&lt;p&gt;Here is the back-ported patch for Lustre b2_5 branch: &lt;a href=&quot;http://review.whamcloud.com/12353&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12353&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="96930" author="pjones" created="Tue, 21 Oct 2014 21:09:34 +0000"  >&lt;p&gt;This fix has landed on master. Is that all that is needed to correct this issue for 2.7?&lt;/p&gt;</comment>
                            <comment id="97539" author="bfaccini" created="Mon, 27 Oct 2014 11:04:31 +0000"  >&lt;p&gt;Yes Peter, I think this ticket can be closed &lt;/p&gt;</comment>
                            <comment id="97546" author="pjones" created="Mon, 27 Oct 2014 12:49:36 +0000"  >&lt;p&gt;ok thanks Bruno&lt;/p&gt;</comment>
                            <comment id="98830" author="yong.fan" created="Tue, 11 Nov 2014 00:05:55 +0000"  >&lt;p&gt;What is the plan for &lt;a href=&quot;http://review.whamcloud.com/12353&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12353&lt;/a&gt; (b2_5)?&lt;br/&gt;
We hit the failure on b2_5:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/024235d2-683d-11e4-a449-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/024235d2-683d-11e4-a449-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="100272" author="gerrit" created="Mon, 1 Dec 2014 04:21:58 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/12353/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/12353/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5573&quot; title=&quot;Test timeout conf-sanity test_41c&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5573&quot;&gt;&lt;del&gt;LU-5573&lt;/del&gt;&lt;/a&gt; obdclass: strengthen against concurrent server mounts&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_5&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 684a7db576eb03ec2c74c89dabcef7991010ee11&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="26990">LU-5736</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="25445">LU-5299</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="29817">LU-6553</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwv4v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15545</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>