<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:26:40 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
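<!--
A hedged example of the field restriction described above: assuming this export came from JIRA's
standard XML issue view (the usual si/jira.issueviews:issue-xml/<KEY>/<KEY>.xml path, which is not
shown in this document and is an assumption here), a request such as

    https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-16398/LU-16398.xml?field=key&field=summary

should return only the <key> and <summary> elements of this item.
-->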
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-16398] ost-pools: FAIL: remove sub-test dirs failed</title>
                <link>https://jira.whamcloud.com/browse/LU-16398</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for Cyril Bordage &amp;lt;cbordage@whamcloud.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.whamcloud.com/test_sets/6c25ddb2-8f54-4bfc-b517-7d8fa10a26e0&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.whamcloud.com/test_sets/6c25ddb2-8f54-4bfc-b517-7d8fa10a26e0&lt;/a&gt;&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[15276.445833] Lustre: lustre-MDT0000-mdc-ffff8e6f43d55000: Connection to lustre-MDT0000 (at 10.240.29.75@tcp) was lost; in progress operations using this service will wait for recovery to complete
[15282.525287] Lustre: 8050:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1670998319/real 1670998319]  req@00000000ed826699 x1752152989501440/t0(0) o400-&amp;gt;MGC10.240.29.75@tcp@10.240.29.75@tcp:26/25 lens 224/224 e 0 to 1 dl 1670998326 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:&apos;kworker/u4:2.0&apos;
[15282.529897] LustreError: 166-1: MGC10.240.29.75@tcp: Connection to MGS (at 10.240.29.75@tcp) was lost; in progress operations using this service will fail
[15282.535285] Lustre: Evicted from MGS (at 10.240.29.75@tcp) after server handle changed from 0xf1d807d0e20aa75a to 0xf1d807d0e20aeef7
[15282.537316] Lustre: MGC10.240.29.75@tcp: Connection restored to 10.240.29.75@tcp (at 10.240.29.75@tcp)
[15282.538776] Lustre: Skipped 1 previous similar message
[15282.540476] LustreError: 8047:0:(client.c:3253:ptlrpc_replay_interpret()) @@@ status 301, old was 0  req@000000000ae752ed x1752152938284544/t575525618194(575525618194) o101-&amp;gt;lustre-MDT0000-mdc-ffff8e6f43d55000@10.240.29.75@tcp:12/10 lens 784/608 e 0 to 0 dl 1670998333 ref 2 fl Interpret:RPQU/4/0 rc 301/301 job:&apos;lfs.0&apos;
[15282.544713] LustreError: 8047:0:(client.c:3253:ptlrpc_replay_interpret()) Skipped 4 previous similar messages
[15316.942119] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_30-1 2&amp;gt;/dev/null || echo foo
[15329.368031] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_30-2 2&amp;gt;/dev/null || echo foo
[15329.850726] Lustre: DEBUG MARKER: /usr/sbin/lctl mark == ost-pools test 31: OST pool spilling chained ========== 06:12:52 \(1670998372\)
[15330.287265] Lustre: DEBUG MARKER: == ost-pools test 31: OST pool spilling chained ========== 06:12:52 (1670998372)
[15336.189947] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-1 2&amp;gt;/dev/null || echo foo
[15341.427534] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-1 | grep -e lustre-OST0000_UUID | sort -u | tr &apos;\n&apos; &apos; &apos; 
[15346.697976] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-2 2&amp;gt;/dev/null || echo foo
[15351.904531] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-2 | grep -e lustre-OST0001_UUID | sort -u | tr &apos;\n&apos; &apos; &apos; 
[15357.089037] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-3 2&amp;gt;/dev/null || echo foo
[15362.271074] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-3 | grep -e lustre-OST0002_UUID | sort -u | tr &apos;\n&apos; &apos; &apos; 
[15367.429384] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-4 2&amp;gt;/dev/null || echo foo
[15372.625338] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-4 | grep -e lustre-OST0003_UUID | sort -u | tr &apos;\n&apos; &apos; &apos; 
[15396.253066] Lustre: lustre-OST0000-osc-ffff8e6f43d55000: disconnect after 23s idle
[15396.254390] Lustre: Skipped 7 previous similar messages
[15409.018878] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-1 2&amp;gt;/dev/null || echo foo
[15418.687254] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-2 2&amp;gt;/dev/null || echo foo
[15428.344557] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-3 2&amp;gt;/dev/null || echo foo
[15437.992415] Lustre: DEBUG MARKER: lctl get_param -n lov.lustre-*.pools.test_31-4 2&amp;gt;/dev/null || echo foo
[15438.453491] Lustre: DEBUG MARKER: /usr/sbin/lctl mark == ost-pools test complete, duration 3469 sec ============ 06:14:41 \(1670998481\)
[15438.911011] Lustre: DEBUG MARKER: == ost-pools test complete, duration 3469 sec ============ 06:14:41 (1670998481)
[15441.136862] LustreError: 668781:0:(file.c:242:ll_close_inode_openhandle()) lustre-clilmv-ffff8e6f43d55000: inode [0x280000407:0x1:0x0] mdc close failed: rc = -2
[15441.139233] LustreError: 668781:0:(file.c:242:ll_close_inode_openhandle()) Skipped 4 previous similar messages
[15441.703081] Lustre: DEBUG MARKER: /usr/sbin/lctl mark  ost-pools : @@@@@@ FAIL: remove sub-test dirs failed 
[15442.070656] Lustre: DEBUG MARKER: ost-pools : @@@@@@ FAIL: remove sub-test dirs failed
[15442.482095] Lustre: DEBUG MARKER: /usr/sbin/lctl dk &amp;gt; /autotest/autotest-2/2022-12-14/lustre-reviews_review-dne-part-6_91178_8_65e71c9b-ceca-42ee-bf2e-b293cbbbbbb5//ost-pools..debug_log.$(hostname -s).1670998485.log;
[15442.482095] 		dmesg &amp;gt; /autotest/autotest-2/2022-12-14/lustre-reviews_review-dne-part
[15452.573498] Lustre: lustre-MDT0000-mdc-ffff8e6f43d55000: Connection to lustre-MDT0000 (at 10.240.29.75@tcp) was lost; in progress operations using this service will wait for recovery to complete
[15452.583527] Lustre: Skipped 1 previous similar message
[15459.740896] Lustre: 8049:0:(client.c:2305:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1670998496/real 1670998496]  req@0000000075856cdc x1752152989699456/t0(0) o400-&amp;gt;MGC10.240.29.75@tcp@10.240.29.75@tcp:26/25 lens 224/224 e 0 to 1 dl 1670998503 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:&apos;kworker/u4:2.0&apos;
[15459.745591] LustreError: 166-1: MGC10.240.29.75@tcp: Connection to MGS (at 10.240.29.75@tcp) was lost; in progress operations using this service will fail
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
                <key id="73627">LU-16398</key>
                <summary>ost-pools: FAIL: remove sub-test dirs failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                <statusCategory id="3" key="done" colorName="success"/>
                <resolution id="3">Duplicate</resolution>
                <assignee username="laisiyao">Lai Siyao</assignee>
                <reporter username="maloo">Maloo</reporter>
                <labels>
                </labels>
                <created>Wed, 14 Dec 2022 10:24:19 +0000</created>
                <updated>Fri, 13 Jan 2023 02:02:05 +0000</updated>
                <resolved>Fri, 13 Jan 2023 02:02:05 +0000</resolved>
                <fixVersion>Lustre 2.16.0</fixVersion>
                <due></due>
                <votes>0</votes>
                <watches>5</watches>
                <comments>
                            <comment id="356407" author="bzzz" created="Wed, 14 Dec 2022 13:59:21 +0000"  >&lt;p&gt;I think this is caused by &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-16159&quot; title=&quot;remove update logs after recovery abort&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-16159&quot;&gt;LU-16159&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="356472" author="gerrit" created="Wed, 14 Dec 2022 19:31:52 +0000"  >&lt;p&gt;&lt;del&gt;&quot;Andreas Dilger &amp;lt;adilger@whamcloud.com&amp;gt;&quot; uploaded a new patch:&lt;/del&gt; &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/49412&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/49412&lt;/a&gt;&lt;br/&gt;
&lt;del&gt;Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-16398&quot; title=&quot;ost-pools: FAIL: remove sub-test dirs failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-16398&quot;&gt;&lt;del&gt;LU-16398&lt;/del&gt;&lt;/a&gt; tests: exclude replay-single/100c until fixed&lt;/del&gt;&lt;br/&gt;
&lt;del&gt;Project: fs/lustre-release&lt;/del&gt;&lt;br/&gt;
&lt;del&gt;Branch: master&lt;/del&gt;&lt;br/&gt;
&lt;del&gt;Current Patch Set: 1&lt;/del&gt;&lt;br/&gt;
&lt;del&gt;Commit: 75112019f87c5feb88d5e9cabed960dd6e04217e&lt;/del&gt;&lt;/p&gt;</comment>
                            <comment id="358880" author="adilger" created="Fri, 13 Jan 2023 02:01:53 +0000"  >&lt;p&gt;I think this cleanup issue is fixed by patch &lt;a href=&quot;https://review.whamcloud.com/49335&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/49335&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-16335&quot; title=&quot;&amp;quot;lfs rm_entry&amp;quot; failed to remove broken directories&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-16335&quot;&gt;&lt;del&gt;LU-16335&lt;/del&gt;&lt;/a&gt; test: add fail_abort_cleanup()&lt;/tt&gt;&quot;&lt;/p&gt;</comment>
                </comments>
                <issuelinks>
                    <issuelinktype id="10011">
                        <name>Related</name>
                        <outwardlinks description="is related to ">
                            <issuelink>
                                <issuekey id="72350">LU-16159</issuekey>
                            </issuelink>
                            <issuelink>
                                <issuekey id="63090">LU-14474</issuekey>
                            </issuelink>
                            <issuelink>
                                <issuekey id="65610">LU-14932</issuekey>
                            </issuelink>
                            <issuelink>
                                <issuekey id="73364">LU-16335</issuekey>
                            </issuelink>
                        </outwardlinks>
                        <inwardlinks description="is related to">
                            <issuelink>
                                <issuekey id="63086">LU-15139</issuekey>
                            </issuelink>
                        </inwardlinks>
                    </issuelinktype>
                </issuelinks>
                <attachments>
                </attachments>
                <subtasks>
                </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i037yf:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>