<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:08:57 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7442] conf-sanity test_41c: @@@@@@ FAIL: unexpected concurent MDT mounts rc=17 rc2=0 </title>
                <link>https://jira.whamcloud.com/browse/LU-7442</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;modules unloaded.&lt;br/&gt;
error: set_param: /proc/{fs,sys}/{lnet,lustre}/fail_loc: Found no match&lt;br/&gt;
Starting mds1: -o rw,user_xattr  /dev/vdb /mnt/mds1&lt;br/&gt;
mount.lustre: set /sys/block/vdb/queue/max_sectors_kb to 2147483647&lt;br/&gt;
&lt;br/&gt;
error: set_param: /proc/{fs,sys}/{lnet,lustre}/fail_loc: Found no match&lt;br/&gt;
Starting mds1: -o rw,user_xattr  /dev/vdb /mnt/mds1&lt;br/&gt;
mount.lustre: set /sys/block/vdb/queue/max_sectors_kb to 2147483647&lt;/p&gt;

&lt;p&gt;mount.lustre: mount /dev/vdb at /mnt/mds1 failed: File exists&lt;br/&gt;
Start of /dev/vdb on mds1 failed 17&lt;br/&gt;
Started lustre-MDT0000&lt;br/&gt;
Stopping /mnt/mds1 (opts:-f) on fre819&lt;br/&gt;
 conf-sanity test_41c: @@@@@@ FAIL: unexpected concurent MDT mounts result, rc=17 rc2=0 &lt;/p&gt;</description>
                <environment>single node setup</environment>
        <key id="33171">LU-7442</key>
            <summary>conf-sanity test_41c: @@@@@@ FAIL: unexpected concurent MDT mounts rc=17 rc2=0 </summary>
        <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
        <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
        <statusCategory id="3" key="done" colorName="success"/>
        <resolution id="1">Fixed</resolution>
        <assignee username="bfaccini">Bruno Faccini</assignee>
        <reporter username="hemaharish">hemaharish</reporter>
        <labels>
        </labels>
        <created>Tue, 17 Nov 2015 10:52:18 +0000</created>
        <updated>Mon, 28 Nov 2016 20:09:04 +0000</updated>
        <resolved>Sat, 6 Aug 2016 12:58:37 +0000</resolved>
        <version>Lustre 2.8.0</version>
        <fixVersion>Lustre 2.9.0</fixVersion>
        <due></due>
        <votes>0</votes>
        <watches>6</watches>
        <comments>
                            <comment id="133687" author="bfaccini" created="Tue, 17 Nov 2015 11:58:22 +0000"  >&lt;p&gt;Looks like conf-sanity/test_41c needs some fixes/cleanup, as does &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5921&quot; title=&quot;conf-sanity test_41c: unexpected concurent OST mounts result, rc=0 rc2=1&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5921&quot;&gt;&lt;del&gt;LU-5921&lt;/del&gt;&lt;/a&gt;, which is already assigned to me.&lt;br/&gt;
In this particular ticket&apos;s case, it seems that the module unload prevented the fail_loc setting for the test ...&lt;br/&gt;
I will cook a patch soon.&lt;/p&gt;</comment>
                            <comment id="134048" author="hemaharish" created="Fri, 20 Nov 2015 05:21:29 +0000"  >&lt;p&gt;Hi,&lt;br/&gt;
We worked on a patch for this: adding a call to &quot;load_modules&quot; fixed the issue and the test case now passes. We will land the patch.&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
== conf-sanity test 41c: concurrent mounts of MDT/OST should all fail but one == 10:42:24 (1447996344)
umount lustre on /mnt/lustre.....
stop ost1 service on centos6.6-Upstream-landing
stop mds service on centos6.6-Upstream-landing
modules unloaded.
Loading modules from /home/hema/xyratex/code/lustre-wc-rel/lustre/tests/..
detected 1 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
debug=-1
subsystem_debug=all -lnet -lnd -pinger
gss/krb5 is not supported
quota/lquota options: &lt;span class=&quot;code-quote&quot;&gt;&apos;hash_lqs_cur_bits=3&apos;&lt;/span&gt;
fail_loc=0x703
Starting mds1:   -o loop /tmp/lustre-mdt1 /mnt/mds1
fail_loc=0x0
Starting mds1:   -o loop /tmp/lustre-mdt1 /mnt/mds1
mount.lustre: mount /dev/loop1 at /mnt/mds1 failed: Operation already in progress
The target service is already running. (/dev/loop1)
Start of /tmp/lustre-mdt1 on mds1 failed 114
Started lustre-MDT0000
1st MDT start succeed
2nd MDT start failed with EALREADY
fail_loc=0x703
Starting ost1:   -o loop /tmp/lustre-ost1 /mnt/ost1
fail_loc=0x0
Starting ost1:   -o loop /tmp/lustre-ost1 /mnt/ost1
mount.lustre: mount /dev/loop2 at /mnt/ost1 failed: Operation already in progress
The target service is already running. (/dev/loop2)
Start of /tmp/lustre-ost1 on ost1 failed 114
Started lustre-OST0000
1st OST start succeed
2nd OST start failed with EALREADY
stop mds service on centos6.6-Upstream-landing
Stopping /mnt/mds1 (opts:-f) on centos6.6-Upstream-landing
Stopping /mnt/ost1 (opts:-f) on centos6.6-Upstream-landing
start mds service on centos6.6-Upstream-landing
Starting mds1:   -o loop /tmp/lustre-mdt1 /mnt/mds1
Started lustre-MDT0000
start ost1 service on centos6.6-Upstream-landing
Starting ost1:   -o loop /tmp/lustre-ost1 /mnt/ost1
Started lustre-OST0000
mount lustre on /mnt/lustre.....
Starting client: centos6.6-Upstream-landing:  -o user_xattr,flock centos6.6-Upstream-landing@tcp:/lustre /mnt/lustre
setup single mount lustre success
umount lustre on /mnt/lustre.....
Stopping client centos6.6-Upstream-landing /mnt/lustre (opts:)
stop ost1 service on centos6.6-Upstream-landing
Stopping /mnt/ost1 (opts:-f) on centos6.6-Upstream-landing
stop mds service on centos6.6-Upstream-landing
Stopping /mnt/mds1 (opts:-f) on centos6.6-Upstream-landing
modules unloaded.
Resetting fail_loc on all nodes...done.
PASS 41c (78s)

&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="134053" author="gerrit" created="Fri, 20 Nov 2015 06:19:22 +0000"  >&lt;p&gt;HemaHarish (hema.yarramilli@seagate.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/17301&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/17301&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7442&quot; title=&quot;conf-sanity test_41c: @@@@@@ FAIL: unexpected concurent MDT mounts rc=17 rc2=0 &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7442&quot;&gt;&lt;del&gt;LU-7442&lt;/del&gt;&lt;/a&gt; test: Unexpected concurent MDT mounts in conf-sanity 41c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 3ac88c6a184ca0db1fda8368ec1e4590cd446ffe&lt;/p&gt;</comment>
                            <comment id="134057" author="bfaccini" created="Fri, 20 Nov 2015 09:42:21 +0000"  >&lt;p&gt;The reason for the failure (in fact, for its non-permanent nature!) is still a bit mysterious to me, but the patch&apos;s re-loading of modules after cleanup is harmless and will clear any special state...&lt;br/&gt;
Just for my information, was this failure permanent during your testing?&lt;/p&gt;
</comment>
                            <comment id="134181" author="hemaharish" created="Mon, 23 Nov 2015 03:23:18 +0000"  >&lt;p&gt;Yes, the failure was permanent on a single-node setup without the patch.&lt;/p&gt;</comment>
                            <comment id="158893" author="bfaccini" created="Thu, 14 Jul 2016 20:17:14 +0000"  >&lt;p&gt;hemaharish,&lt;br/&gt;
Sorry to be late on this, but am I right in thinking that you hit this problem reliably (I mean the missing load_modules) when you run conf-sanity/test_41c as a single test rather than as part of the full conf-sanity test suite?&lt;/p&gt;</comment>
                            <comment id="158919" author="yong.fan" created="Fri, 15 Jul 2016 01:38:04 +0000"  >&lt;p&gt;Thanks Bruno. I have rebased my patch against &lt;a href=&quot;http://review.whamcloud.com/#/c/17427&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#/c/17427&lt;/a&gt; to resolve the conf-sanity test_41c failure.&lt;/p&gt;</comment>
                            <comment id="161018" author="gerrit" created="Sat, 6 Aug 2016 06:24:07 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/17301/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/17301/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7442&quot; title=&quot;conf-sanity test_41c: @@@@@@ FAIL: unexpected concurent MDT mounts rc=17 rc2=0 &quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7442&quot;&gt;&lt;del&gt;LU-7442&lt;/del&gt;&lt;/a&gt; tests: Load modules on MDS/OSS in conf-sanity test_41c&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 3973c51b0ba246fb9904235206e6b9269d670a51&lt;/p&gt;</comment>
                            <comment id="161043" author="pjones" created="Sat, 6 Aug 2016 12:58:37 +0000"  >&lt;p&gt;Landed for 2.9&lt;/p&gt;</comment>
                    </comments>
        <issuelinks>
            <issuelinktype id="10011">
                <name>Related</name>
                <outwardlinks description="is related to ">
                    <issuelink>
                        <issuekey id="27594">LU-5921</issuekey>
                    </issuelink>
                </outwardlinks>
                <inwardlinks description="is related to">
                </inwardlinks>
            </issuelinktype>
        </issuelinks>
        <attachments>
        </attachments>
        <subtasks>
        </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxta7:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>