<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:50:18 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12175] sanity test 208 fails with &apos;lease broken over recovery&apos;</title>
                <link>https://jira.whamcloud.com/browse/LU-12175</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity test_208 fails in test 4: lease can sustain over recovery with &apos;lease broken over recovery&apos;. These failures started on 8 April 2019. So far, this test is only failing for DNE testing.&lt;/p&gt;

&lt;p&gt;Looking at the test_suite log for a recent failure, &lt;a href=&quot;https://testing.whamcloud.com/test_sets/03bcb85a-5ace-11e9-9720-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/03bcb85a-5ace-11e9-9720-52540065bddc&lt;/a&gt; , we see&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity test 208: Exclusive open =================================================================== 12:01:04 (1554811264)
==== test 1: verify get lease work
read lease(1) has applied.
==== test 2: verify lease can be broken by upcoming open
no lease applied.
==== test 3: verify lease can&apos;t be granted if an open already exists
multiop: cannot get READ lease, ext 0: Device or resource busy (16)
multiop: apply/unlock lease error: Device or resource busy
==== test 4: lease can sustain over recovery
Failing mds1 on trevis-40vm9
&#8230;
trevis-40vm6: trevis-40vm6.trevis.whamcloud.com: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
trevis-40vm7: trevis-40vm7.trevis.whamcloud.com: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
trevis-40vm7: CMD: trevis-40vm7.trevis.whamcloud.com lctl get_param -n at_max
trevis-40vm7: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
trevis-40vm6: CMD: trevis-40vm6.trevis.whamcloud.com lctl get_param -n at_max
trevis-40vm6: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
multiop: expect lease exists
no lease applied.
 sanity test_208: @@@@@@ FAIL: lease broken over recovery 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
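
The subtests above revolve around file leases: subtest 4 expects a read lease taken before MDS failover to survive recovery. As a rough analogy only (this is the standard Linux fcntl lease API, not the Lustre client API that the multiop helper actually uses), the lease life cycle the test exercises looks like:

```python
# Illustrative analogy only: Lustre leases are requested through the Lustre
# client API, but Linux kernel leases taken with fcntl(F_SETLEASE) behave
# similarly: a read lease is applied to an open file and is broken when a
# conflicting open arrives (in the bug above, it is lost across MDS recovery).
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

rfd = os.open(path, os.O_RDONLY)  # a read lease requires a read-only open
fcntl.fcntl(rfd, fcntl.F_SETLEASE, fcntl.F_RDLCK)   # "read lease has applied"
assert fcntl.fcntl(rfd, fcntl.F_GETLEASE) == fcntl.F_RDLCK

fcntl.fcntl(rfd, fcntl.F_SETLEASE, fcntl.F_UNLCK)   # voluntarily release it
assert fcntl.fcntl(rfd, fcntl.F_GETLEASE) == fcntl.F_UNLCK  # "no lease applied"
os.close(rfd)
os.remove(path)
print("lease applied and released")
```

In the failing run, the equivalent of the final F_GETLEASE check is performed after MDS failover, and the lease is gone because recovery was aborted and the client was evicted.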

&lt;p&gt;Looking at the MDS1,3 (vm9) dmesg log, we see some errors:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 7777.539199] Lustre: DEBUG MARKER: == sanity test 208: Exclusive open =================================================================== 12:01:04 (1554811264)
[ 7780.799167] Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1&apos; &apos; /proc/mounts || true
[ 7781.106991] Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1
[ 7781.266880] Lustre: Failing over lustre-MDT0000
[ 7781.267747] Lustre: Skipped 1 previous similar message
[ 7781.457907] Lustre: lustre-MDT0000: Not available for connect from 10.9.3.150@tcp (stopping)
[ 7781.459369] Lustre: Skipped 19 previous similar messages
[ 7781.460162] LustreError: 11-0: lustre-MDT0000-osp-MDT0002: operation mds_statfs to node 0@lo failed: rc = -107
[ 7781.460167] Lustre: lustre-MDT0000-osp-MDT0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
[ 7781.460168] Lustre: Skipped 1 previous similar message
[ 7781.612729] LustreError: 137-5: lustre-MDT0000_UUID: not available for connect from 10.9.5.234@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
[ 7781.615597] LustreError: Skipped 5 previous similar messages
[ 7781.631586] Lustre: server umount lustre-MDT0000 complete
[ 7782.702609] Lustre: DEBUG MARKER: lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp;
lctl dl | grep &apos; ST &apos; || true
[ 7783.012452] Lustre: DEBUG MARKER: modprobe dm-flakey;
			 dmsetup targets | grep -q flakey
[ 7793.042085] Lustre: 17623:0:(client.c:2134:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1554811273/real 1554811273]  req@ffff8bf39b3e0900 x1630329757439344/t0(0) o400-&amp;gt;MGC10.9.3.149@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1554811280 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
[ 7793.046530] LustreError: 166-1: MGC10.9.3.149@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
[ 7793.384485] Lustre: DEBUG MARKER: hostname
[ 7793.752701] Lustre: DEBUG MARKER: modprobe dm-flakey;
			 dmsetup targets | grep -q flakey
[ 7794.077024] Lustre: DEBUG MARKER: dmsetup status /dev/mapper/mds1_flakey &amp;gt;/dev/null 2&amp;gt;&amp;amp;1
[ 7794.380875] Lustre: DEBUG MARKER: dmsetup status /dev/mapper/mds1_flakey 2&amp;gt;&amp;amp;1
[ 7794.685614] Lustre: DEBUG MARKER: test -b /dev/mapper/mds1_flakey
[ 7794.986331] Lustre: DEBUG MARKER: e2label /dev/mapper/mds1_flakey
[ 7795.394940] Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds1; mount -t lustre   /dev/mapper/mds1_flakey /mnt/lustre-mds1
[ 7795.730148] LDISKFS-fs (dm-6): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
[ 7795.756692] Lustre: osd-ldiskfs create tunables for lustre-MDT0000
[ 7798.648231] Lustre: MGS: Connection restored to bfee9086-166d-21e0-ae35-a479c8b23d83 (at 10.9.5.234@tcp)
[ 7798.650043] Lustre: Skipped 66 previous similar messages
[ 7799.066744] Lustre: Evicted from MGS (at 10.9.3.149@tcp) after server handle changed from 0x8eaf7145dd9f3b79 to 0x8eaf7145ddb23d36
[ 7799.411459] Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
[ 7799.801070] Lustre: 25291:0:(llog.c:606:llog_process_thread()) lustre-MDT0002-osp-MDT0000: invalid length 0 in llog [0x1:0x80000402:0x2]record for index 0/6
[ 7799.804286] LustreError: 25291:0:(lod_dev.c:434:lod_sub_recovery_thread()) lustre-MDT0002-osp-MDT0000 get update log failed: rc = -22
[ 7799.808197] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 7799.810304] Lustre: 25293:0:(ldlm_lib.c:2068:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 7799.812067] Lustre: 25293:0:(ldlm_lib.c:2068:target_recovery_overseer()) Skipped 1 previous similar message
[ 7800.264478] Lustre: lustre-MDT0000: disconnecting 5 stale clients
[ 7800.289476] Lustre: lustre-MDT0000: Denying connection for new client lustre-MDT0001-mdtlov_UUID (at 10.9.3.150@tcp), waiting for 5 known clients (0 recovered, 0 in progress, and 5 evicted) already passed deadline 130:00
[ 7800.596879] Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n health_check
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Although &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8466&quot; title=&quot;sanity test_208: @@@@@@ FAIL: lease broken over recovery&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8466&quot;&gt;&lt;del&gt;LU-8466&lt;/del&gt;&lt;/a&gt; is for the same test failure with the same error message, that issue is already marked resolved, so these new failures are tracked here.&lt;br/&gt;
Here are links to logs for recent failures:&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/19290212-5b04-11e9-8e92-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/19290212-5b04-11e9-8e92-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/d9ae92b0-5a76-11e9-8e92-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/d9ae92b0-5a76-11e9-8e92-52540065bddc&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://testing.whamcloud.com/test_sets/d84fe1d8-5a3c-11e9-8e92-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/d84fe1d8-5a3c-11e9-8e92-52540065bddc&lt;/a&gt;&lt;/p&gt;
</description>
                <environment>DNE</environment>
        <key id="55382">LU-12175</key>
            <summary>sanity test 208 fails with &apos;lease broken over recovery&apos;</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="4" iconUrl="https://jira.whamcloud.com/images/icons/statuses/reopened.png" description="This issue was once resolved, but the resolution was deemed incorrect. From here issues are either marked assigned or resolved.">Reopened</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Tue, 9 Apr 2019 22:22:26 +0000</created>
                <updated>Wed, 10 Feb 2021 00:10:39 +0000</updated>
                                            <version>Lustre 2.13.0</version>
                    <version>Lustre 2.14.0</version>
                    <version>Lustre 2.12.4</version>
                    <version>Lustre 2.12.5</version>
                    <version>Lustre 2.12.6</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="245495" author="pfarrell" created="Tue, 9 Apr 2019 22:40:35 +0000"  >&lt;p&gt;Huh.&#160; Well, this is the fatal bit: this llog issue and the evictions resulting from it are going to break this:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[ 7799.801070] Lustre: 25291:0:(llog.c:606:llog_process_thread()) lustre-MDT0002-osp-MDT0000: invalid length 0 in llog [0x1:0x80000402:0x2]record for index 0/6
[ 7799.804286] LustreError: 25291:0:(lod_dev.c:434:lod_sub_recovery_thread()) lustre-MDT0002-osp-MDT0000 get update log failed: rc = -22
[ 7799.808197] Lustre: lustre-MDT0000: in recovery but waiting for the first client to connect
[ 7799.810304] Lustre: 25293:0:(ldlm_lib.c:2068:target_recovery_overseer()) recovery is aborted, evict exports in recovery
[ 7799.812067] Lustre: 25293:0:(ldlm_lib.c:2068:target_recovery_overseer()) Skipped 1 previous similar message &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="245545" author="gerrit" created="Wed, 10 Apr 2019 21:42:46 +0000"  >&lt;p&gt;James Simmons (uja.ornl@yahoo.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34632&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34632&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12175&quot; title=&quot;sanity test 208 fails with &amp;#39;lease broken over recovery&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12175&quot;&gt;LU-12175&lt;/a&gt; ldlm: debug patch&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 23ab13bd556f81c940d6a78d7cf07856d45d4aae&lt;/p&gt;</comment>
                            <comment id="245546" author="simmonsja" created="Wed, 10 Apr 2019 21:47:43 +0000"  >&lt;p&gt;From what I can tell, this bug predates the hrtimer patch that landed, but the hrtimer patch really exposes it. The strange thing is that the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11771&quot; title=&quot;bad output in target_handle_reconnect: Recovery already passed deadline 71578:57&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11771&quot;&gt;&lt;del&gt;LU-11771&lt;/del&gt;&lt;/a&gt; patch on 2.12 LTS doesn&apos;t produce the same issues. I can&apos;t reproduce this locally at all, so I pushed a debug patch. I expect that the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11771&quot; title=&quot;bad output in target_handle_reconnect: Recovery already passed deadline 71578:57&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11771&quot;&gt;&lt;del&gt;LU-11771&lt;/del&gt;&lt;/a&gt; patch will be reverted, but that will only hide the real problem.&lt;/p&gt;</comment>
                            <comment id="245552" author="simmonsja" created="Thu, 11 Apr 2019 02:08:33 +0000"  >&lt;p&gt;I would not close this out. There is a real bug hidden in the code that has not been dealt with yet.&lt;/p&gt;</comment>
                            <comment id="245765" author="simmonsja" created="Mon, 15 Apr 2019 14:25:34 +0000"  >&lt;p&gt;Still seeing this, just not as often.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://testing.whamcloud.com/test_sessions/7599c2ef-25c8-4479-8e24-4b1fb966502e&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sessions/7599c2ef-25c8-4479-8e24-4b1fb966502e&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="245780" author="gerrit" created="Mon, 15 Apr 2019 17:49:40 +0000"  >&lt;p&gt;Patrick Farrell (pfarrell@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34664&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34664&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12175&quot; title=&quot;sanity test 208 fails with &amp;#39;lease broken over recovery&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12175&quot;&gt;LU-12175&lt;/a&gt; tests: Revert &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11636&quot; title=&quot;t-f test_mkdir() does not support interop with non DNEII servers&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11636&quot;&gt;&lt;del&gt;LU-11636&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d3bfa57c88c17492c3c985e8079a07ce6750b678&lt;/p&gt;</comment>
                            <comment id="245784" author="gerrit" created="Mon, 15 Apr 2019 18:13:35 +0000"  >&lt;p&gt;Patrick Farrell (pfarrell@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34666&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34666&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12175&quot; title=&quot;sanity test 208 fails with &amp;#39;lease broken over recovery&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12175&quot;&gt;LU-12175&lt;/a&gt; tests: Revert &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11566&quot; title=&quot;sanity test_60aa: llog_print_cb()) not enough space for print log records&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11566&quot;&gt;&lt;del&gt;LU-11566&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 41155e7f9395a098c368a1acbdda06d8334e49eb&lt;/p&gt;</comment>
                            <comment id="245931" author="gerrit" created="Wed, 17 Apr 2019 15:39:10 +0000"  >&lt;p&gt;Patrick Farrell (pfarrell@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34697&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34697&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12175&quot; title=&quot;sanity test 208 fails with &amp;#39;lease broken over recovery&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12175&quot;&gt;LU-12175&lt;/a&gt; tests: Change to test_mkdir&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 9409532dfe667d672a1bb002d9e4fc6555c308da&lt;/p&gt;</comment>
                            <comment id="245937" author="gerrit" created="Wed, 17 Apr 2019 17:01:43 +0000"  >&lt;p&gt;Patrick Farrell (pfarrell@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34699&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34699&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12175&quot; title=&quot;sanity test 208 fails with &amp;#39;lease broken over recovery&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12175&quot;&gt;LU-12175&lt;/a&gt; tests: Partial revert of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11636&quot; title=&quot;t-f test_mkdir() does not support interop with non DNEII servers&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11636&quot;&gt;&lt;del&gt;LU-11636&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d893f34824c5c5472653b19f557d079b66266a06&lt;/p&gt;</comment>
                            <comment id="245985" author="arshad512" created="Thu, 18 Apr 2019 12:29:27 +0000"  >&lt;p&gt;Seen under &lt;a href=&quot;https://testing.whamcloud.com/test_sessions/8c83753a-320b-42b6-aad1-59b44e9c2dde&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sessions/8c83753a-320b-42b6-aad1-59b44e9c2dde&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="246005" author="gerrit" created="Thu, 18 Apr 2019 16:43:16 +0000"  >&lt;p&gt;Patrick Farrell (pfarrell@whamcloud.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/34705&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34705&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12175&quot; title=&quot;sanity test 208 fails with &amp;#39;lease broken over recovery&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12175&quot;&gt;LU-12175&lt;/a&gt; tests: Partial revert of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11636&quot; title=&quot;t-f test_mkdir() does not support interop with non DNEII servers&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11636&quot;&gt;&lt;del&gt;LU-11636&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 379bd66d6849643e65bea6b688780e4a2e9980b8&lt;/p&gt;</comment>
                            <comment id="246037" author="gerrit" created="Thu, 18 Apr 2019 22:03:43 +0000"  >&lt;p&gt;Oleg Drokin (green@whamcloud.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/34705/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/34705/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12175&quot; title=&quot;sanity test 208 fails with &amp;#39;lease broken over recovery&amp;#39;&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12175&quot;&gt;LU-12175&lt;/a&gt; tests: Partial revert of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-11636&quot; title=&quot;t-f test_mkdir() does not support interop with non DNEII servers&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-11636&quot;&gt;&lt;del&gt;LU-11636&lt;/del&gt;&lt;/a&gt;&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 669bcc3fc9ddf12f4c7aee0b347deeb1dd269347&lt;/p&gt;</comment>
                            <comment id="246043" author="pjones" created="Thu, 18 Apr 2019 22:35:17 +0000"  >&lt;p&gt;Landed for 2.13&lt;/p&gt;</comment>
                            <comment id="258797" author="jamesanunez" created="Mon, 25 Nov 2019 23:42:59 +0000"  >&lt;p&gt;I think we&apos;re seeing the same issue on master (b2_14). Please see &lt;a href=&quot;https://testing.whamcloud.com/test_sets/5c065f78-0fd4-11ea-98f1-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/5c065f78-0fd4-11ea-98f1-52540065bddc&lt;/a&gt; for logs.&lt;/p&gt;</comment>
                            <comment id="263139" author="bzzz" created="Wed, 12 Feb 2020 09:04:05 +0000"  >&lt;p&gt;Test 208 times out very often in my testing:&lt;br/&gt;
Lustre: lustre-MDT0000 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 12. Is it stuck?&lt;br/&gt;
Lustre: lustre-MDT0000: UNLINKED 00000000ce791f59 lustre-MDT0001-mdtlov_UUID 192.168.122.146@tcp 1 (1 0 0) 1 0 0 0:           (null)  42949709990 stale:0&lt;br/&gt;
Lustre: lustre-MDT0000: UNLINKED 00000000daedb981 lustre-MDT0000-lwp-MDT0001_UUID 192.168.122.146@tcp 1 (1 0 0) 1 0 0 0:           (null)  0 stale:0&lt;br/&gt;
Lustre: lustre-MDT0000: UNLINKED 000000007e196a79 lustre-MDT0000-lwp-OST0000_UUID 192.168.122.146@tcp 1 (1 0 0) 1 0 0 0:           (null)  0 stale:0&lt;br/&gt;
Lustre: lustre-MDT0000: UNLINKED 00000000923c2bde e30080ae-dab6-4 192.168.122.146@tcp 1 (1 0 0) 1 0 0 0:           (null)  42949710030 stale:0&lt;br/&gt;
Lustre: lustre-MDT0000: UNLINKED 0000000022d6877c lustre-MDT0000-lwp-MDT0000_UUID 192.168.122.146@tcp 1 (1 0 0) 1 0 0 0:           (null)  0 stale:0&lt;/p&gt;
</comment>
                            <comment id="291558" author="jamesanunez" created="Tue, 9 Feb 2021 21:57:38 +0000"  >&lt;p&gt;I think we&apos;re seeing a similar issue in replay-dual test 23d. Looking at the failure at &lt;a href=&quot;https://testing.whamcloud.com/test_sets/17948bab-e647-4f32-874a-0fe07a464353&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/17948bab-e647-4f32-874a-0fe07a464353&lt;/a&gt; for DNE/ZFS testing, we see the following in the MDT2,4 (vm9) console log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[11816.199115] Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-66vm9.trevis.whamcloud.com: executing set_default_debug -1 all 4
[11816.837718] Lustre: 180077:0:(llog.c:620:llog_process_thread()) lustre-MDT0000-osp-MDT0001: invalid length 0 in llog [0x1:0x401:0x2]record for index 0/9
[11816.841991] LustreError: 180077:0:(lod_dev.c:425:lod_sub_recovery_thread()) lustre-MDT0000-osp-MDT0001 get update log failed: rc = -22
[11817.063950] Lustre: DEBUG MARKER: trevis-66vm9.trevis.whamcloud.com: executing set_default_debug -1 all 4
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="55373">LU-12171</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="55383">LU-12176</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="38566">LU-8466</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="53953">LU-11636</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="55455">LU-12210</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00ep3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>