<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:08:05 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7344] sanity test_154g test30 fail on cleanup: FAIL: test_154g failed with 1</title>
                <link>https://jira.whamcloud.com/browse/LU-7344</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;sanity test 154g subtest 30 fails while removing the links the test created. Logs are at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/9608c94e-7c22-11e5-9851-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/9608c94e-7c22-11e5-9851-5254006e85c2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the test_log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Finishing test test30 at 1445869186
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0330&apos;: Input/output error
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0329&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0254&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0678&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0986&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0309&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0608&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0286&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0479&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0231&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0798&apos;: Cannot send after transport endpoint shutdown
rm: cannot remove `/mnt/lustre/d154g.sanity/llapi_fid_test_name_9585766/link0824&apos;: Cannot send after transport endpoint shutdown
llapi_fid_test: llapi_fid_test.c:98: cleanup: assertion &apos;WEXITSTATUS(rc) == 0&apos; failed: rm command returned 1
 sanity test_154g: @@@@@@ FAIL: test_154g failed with 1 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;From the client console logs, the client is having connection problems:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;14:19:55:LustreError: 11-0: lustre-MDT0000-mdc-ffff880077e11c00: operation ldlm_enqueue to node 10.1.5.239@tcp failed: rc = -107
14:19:55:Lustre: lustre-MDT0000-mdc-ffff880077e11c00: Connection to lustre-MDT0000 (at 10.1.5.239@tcp) was lost; in progress operations using this service will wait for recovery to complete
14:19:55:LustreError: 167-0: lustre-MDT0000-mdc-ffff880077e11c00: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
14:19:55:LustreError: 23082:0:(mdc_locks.c:1176:mdc_intent_getattr_async_interpret()) ldlm_cli_enqueue_fini: -5
14:19:55:LustreError: 23082:0:(mdc_locks.c:1176:mdc_intent_getattr_async_interpret()) Skipped 4 previous similar messages
14:19:55:Lustre: lustre-MDT0000-mdc-ffff880077e11c00: Connection restored to 10.1.5.239@tcp (at 10.1.5.239@tcp)
14:19:55:Lustre: DEBUG MARKER: /usr/sbin/lctl mark  sanity test_154g: @@@@@@ FAIL: test_154g failed with 1 
14:19:55:Lustre: DEBUG MARKER: sanity test_154g: @@@@@@ FAIL: test_154g failed with 1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We&#8217;ve seen this failure a couple of times this month. Logs are at &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/8b07cd46-70a2-11e5-9bcc-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/8b07cd46-70a2-11e5-9bcc-5254006e85c2&lt;/a&gt; and&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/957630d2-75a8-11e5-bac5-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/957630d2-75a8-11e5-bac5-5254006e85c2&lt;/a&gt;. In the last client console log, we see an additional error message about a nonzero refcount:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;09:23:05:LustreError: 11-0: lustre-MDT0000-mdc-ffff88007daeb800: operation ldlm_enqueue to node 10.1.4.105@tcp failed: rc = -107
09:23:05:Lustre: lustre-MDT0000-mdc-ffff88007daeb800: Connection to lustre-MDT0000 (at 10.1.4.105@tcp) was lost; in progress operations using this service will wait for recovery to complete
09:23:05:LustreError: 167-0: lustre-MDT0000-mdc-ffff88007daeb800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
09:23:05:LustreError: 23311:0:(mdc_locks.c:1176:mdc_intent_getattr_async_interpret()) ldlm_cli_enqueue_fini: -5
09:23:05:LustreError: 12432:0:(ldlm_resource.c:887:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff88007daeb800: namespace resource [0x200004282:0x82c:0x0].0x0 (ffff88007c0a72c0) refcount nonzero (1) after lock cleanup; forcing cleanup.
09:23:05:LustreError: 12432:0:(ldlm_resource.c:1502:ldlm_resource_dump()) --- Resource: [0x200004282:0x82c:0x0].0x0 (ffff88007c0a72c0) refcount = 2
09:23:05:Lustre: lustre-MDT0000-mdc-ffff88007daeb800: Connection restored to 10.1.4.105@tcp (at 10.1.4.105@tcp)
09:23:05:Lustre: DEBUG MARKER: /usr/sbin/lctl mark  sanity test_154g: @@@@@@ FAIL: test_154g failed with 1 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>autotest</environment>
        <key id="32846">LU-7344</key>
            <summary>sanity test_154g test30 fail on cleanup: FAIL: test_154g failed with 1</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="jamesanunez">James Nunez</reporter>
                        <labels>
                    </labels>
                <created>Tue, 27 Oct 2015 15:53:49 +0000</created>
                <updated>Tue, 11 Apr 2017 14:04:21 +0000</updated>
                                            <version>Lustre 2.8.0</version>
                    <version>Lustre 2.10.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                            <comments>
                            <comment id="137427" author="standan" created="Thu, 24 Dec 2015 19:31:05 +0000"  >&lt;p&gt;Another instance found for the following config:&lt;br/&gt;
Server: 2.7.1, b2_7_fe/34&lt;br/&gt;
Client: Master, build# 3276, RHEL 6.7&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/610eff92-a602-11e5-a14c-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/610eff92-a602-11e5-a14c-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxrjz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>