<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:31:34 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-10045] sanity-lfsck no sub tests failed</title>
                <link>https://jira.whamcloud.com/browse/LU-10045</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_sessions/ecf02eea-e0bb-48ce-a644-fed2036c2478&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sessions/ecf02eea-e0bb-48ce-a644-fed2036c2478&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From suite_log:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Starting ost1:   lustre-ost1/ost1 /mnt/lustre-ost1
CMD: onyx-35vm12 mkdir -p /mnt/lustre-ost1; mount -t lustre lustre-ost1/ost1 /mnt/lustre-ost1
onyx-35vm12: mount.lustre: mount lustre-ost1/ost1 at /mnt/lustre-ost1 failed: 
  Cannot send after transport endpoint shutdown
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The subsequent three test sets (sanityn, sanity-hsm, &amp;amp; sanity-lsnapshot) also run no subtests and have suite logs that end with this message:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;according to /etc/mtab lustre-mdt1/mdt1 is already mounted on /mnt/lustre-mds1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>onyx, full&lt;br/&gt;
servers: el7, zfs, branch master, v2.10.53.1, b3642&lt;br/&gt;
clients: el7, branch master, v2.10.53.1, b3642&lt;br/&gt;
</environment>
        <key id="48513">LU-10045</key>
            <summary>sanity-lfsck no sub tests failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="jcasper">James Casper</reporter>
                        <labels>
                    </labels>
                <created>Thu, 28 Sep 2017 22:24:45 +0000</created>
                <updated>Mon, 19 Mar 2018 21:02:31 +0000</updated>
                            <resolved>Wed, 31 Jan 2018 07:59:52 +0000</resolved>
                                    <version>Lustre 2.11.0</version>
                                    <fixVersion>Lustre 2.11.0</fixVersion>
                    <fixVersion>Lustre 2.10.4</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="210844" author="pjones" created="Wed, 11 Oct 2017 18:44:21 +0000"  >&lt;p&gt;Fan Yong&lt;/p&gt;

&lt;p&gt;Could you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="214603" author="yong.fan" created="Fri, 24 Nov 2017 15:36:24 +0000"  >&lt;p&gt;All of the available information is shown in the bug description; there are no more detailed logs. Generally, at the beginning of sanity-lfsck, the test scripts reformat and remount the whole system to clean up the test environment. But according to the logs, some trouble occurred during that process. One possible explanation is that after the MDT was reformatted, the scripts tried to mount it, but /etc/mtab wrongly recorded that the MDT was already mounted. So the MDT (and MGS) was never really mounted, and the subsequent mount failures occurred on the OSTs. As for why /etc/mtab recorded the wrong information, it is difficult to know; it may be a side effect of some earlier test cases (in sanity or before).&lt;/p&gt;

&lt;p&gt;So unless we can reproduce the trouble with more detailed logs, it is difficult to locate the root cause.&lt;/p&gt;</comment>
                            <comment id="214922" author="yong.fan" created="Wed, 29 Nov 2017 12:05:20 +0000"  >&lt;p&gt;+1 on master:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/25e467c6-d4fa-11e7-a066-52540065bddc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/25e467c6-d4fa-11e7-a066-52540065bddc&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="215635" author="yong.fan" created="Fri, 8 Dec 2017 03:03:23 +0000"  >&lt;p&gt;The OST hit trouble when unmounting after the former tests:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_logs/26247ed8-d4fa-11e7-a066-52540065bddc/show_text&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_logs/26247ed8-d4fa-11e7-a066-52540065bddc/show_text&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[11519.992843] Lustre: DEBUG MARKER: umount -d -f /mnt/lustre-ost1
[11521.139399] LustreError: 22266:0:(ldlm_resource.c:1094:ldlm_resource_complain()) lustre-MDT0000-lwp-OST0000: namespace resource [0x200000006:0x20000:0x0].0x0 (ffff88005598f6c0) refcount nonzero (1) after lock cleanup; forcing cleanup.
[11521.146796] LustreError: 22266:0:(ldlm_resource.c:1676:ldlm_resource_dump()) --- Resource: [0x200000006:0x20000:0x0].0x0 (ffff88005598f6c0) refcount = 2
[11521.153245] LustreError: 22266:0:(ldlm_resource.c:1679:ldlm_resource_dump()) Granted locks (in reverse order):
[11521.157006] LustreError: 22266:0:(ldlm_resource.c:1682:ldlm_resource_dump()) ### ### ns: lustre-MDT0000-lwp-OST0000 lock: ffff880056b8ca00/0xd8984bc587bddb59 lrc: 2/1,0 mode: CR/CR res: [0x200000006:0x20000:0x0].0x0 rrc: 3 type: PLN flags: 0x1106400000000 nid: local remote: 0x2a5bf4985ef188db expref: -99 pid: 21127 timeout: 0 lvb_type: 2
[11546.656082] Lustre: 15216:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1511948075/real 1511948075]  req@ffff880048344000 x1585380626416000/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.9.4.127@tcp:12/10 lens 520/544 e 0 to 1 dl 1511948100 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
[11546.663986] Lustre: 15216:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
[11560.656077] LustreError: 166-1: MGC10.9.4.127@tcp: Connection to MGS (at 10.9.4.127@tcp) was lost; in progress operations using this service will fail
[11591.445188] LustreError: 22271:0:(client.c:1166:ptlrpc_import_delay_req()) @@@ IMP_CLOSED   req@ffff880056642a00 x1585380626416192/t0(0) o101-&amp;gt;lustre-MDT0000-lwp-OST0000@10.9.4.127@tcp:23/10 lens 456/496 e 0 to 0 dl 0 ref 2 fl Rpc:/0/ffffffff rc 0/-1
[11591.451032] LustreError: 22271:0:(qsd_reint.c:56:qsd_reint_completion()) lustre-OST0000: failed to enqueue global quota lock, glb fid:[0x200000006:0x1020000:0x0], rc:-5
[11591.661039] Lustre: 15216:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1511948129/real 1511948129]  req@ffff880056640600 x1585380626416160/t0(0) o250-&amp;gt;MGC10.9.4.127@tcp@10.9.4.127@tcp:26/25 lens 520/544 e 0 to 1 dl 1511948145 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
[11591.669399] Lustre: 15216:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
[11665.661060] Lustre: 15216:0:(client.c:2113:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1511948194/real 1511948194]  req@ffff880056642700 x1585380626416256/t0(0) o38-&amp;gt;lustre-MDT0000-lwp-OST0001@10.9.4.127@tcp:12/10 lens 520/544 e 0 to 1 dl 1511948219 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
[11665.669739] Lustre: 15216:0:(client.c:2113:ptlrpc_expire_one_request()) Skipped 4 previous similar messages
[11711.461177] LustreError: 22280:0:(client.c:1166:ptlrpc_import_delay_req()) @@@ IMP_CLOSED   req@ffff880056642a00 x1585380626416352/t0(0) o101-&amp;gt;lustre-MDT0000-lwp-OST0000@10.9.4.127@tcp:23/10 lens 456/496 e 0 to 0 dl 0 ref 2 fl Rpc:/0/ffffffff rc 0/-1
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="217656" author="yong.fan" created="Sat, 6 Jan 2018 04:45:54 +0000"  >&lt;p&gt;The reason is described in this comment:&lt;br/&gt;
&lt;a href=&quot;https://jira.hpdd.intel.com/browse/LU-10406?focusedCommentId=217655&amp;amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-217655&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://jira.hpdd.intel.com/browse/LU-10406?focusedCommentId=217655&amp;amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-217655&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="217657" author="gerrit" created="Sat, 6 Jan 2018 06:39:54 +0000"  >&lt;p&gt;Fan Yong (fan.yong@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/30761&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/30761&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10045&quot; title=&quot;sanity-lfsck no sub tests failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10045&quot;&gt;&lt;del&gt;LU-10045&lt;/del&gt;&lt;/a&gt; mgc: multiple try when register target&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d62d4132162c15cacc260e5d27abc2522f59d72d&lt;/p&gt;</comment>
                            <comment id="219495" author="gerrit" created="Wed, 31 Jan 2018 05:51:43 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/30761/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/30761/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10045&quot; title=&quot;sanity-lfsck no sub tests failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10045&quot;&gt;&lt;del&gt;LU-10045&lt;/del&gt;&lt;/a&gt; obdclass: multiple try when register target&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 79bfc74869e3f7b052874f4585399c5ba7f599e9&lt;/p&gt;</comment>
                            <comment id="219547" author="mdiep" created="Wed, 31 Jan 2018 15:22:39 +0000"  >&lt;p&gt;Landed for 2.11&lt;/p&gt;</comment>
                            <comment id="220980" author="gerrit" created="Wed, 14 Feb 2018 15:40:27 +0000"  >&lt;p&gt;Minh Diep (minh.diep@intel.com) uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/31301&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31301&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10045&quot; title=&quot;sanity-lfsck no sub tests failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10045&quot;&gt;&lt;del&gt;LU-10045&lt;/del&gt;&lt;/a&gt; obdclass: multiple try when register target&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 44bb3ee8961d13618e6d670d6e3005c2729a723e&lt;/p&gt;</comment>
                            <comment id="223981" author="gerrit" created="Mon, 19 Mar 2018 20:09:32 +0000"  >&lt;p&gt;John L. Hammond (john.hammond@intel.com) merged in patch &lt;a href=&quot;https://review.whamcloud.com/31301/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/31301/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10045&quot; title=&quot;sanity-lfsck no sub tests failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10045&quot;&gt;&lt;del&gt;LU-10045&lt;/del&gt;&lt;/a&gt; obdclass: multiple try when register target&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_10&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: aa99a7bb77cce480ff5753238d857a0eb797e5fe&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="49326">LU-10242</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="34191">LU-7690</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzzkz3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>