<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:32:23 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3264] recovery-*-scale tests failed with FSTYPE=zfs and FAILURE_MODE=HARD</title>
                <link>https://jira.whamcloud.com/browse/LU-3264</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While running recovery-*-scale tests with FSTYPE=zfs and FAILURE_MODE=HARD under failover configuration, the tests failed as follows:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Failing mds1 on wtm-9vm3
+ pm -h powerman --off wtm-9vm3
Command completed successfully
waiting ! ping -w 3 -c 1 wtm-9vm3, 4 secs left ...
waiting ! ping -w 3 -c 1 wtm-9vm3, 3 secs left ...
waiting ! ping -w 3 -c 1 wtm-9vm3, 2 secs left ...
waiting ! ping -w 3 -c 1 wtm-9vm3, 1 secs left ...
waiting for wtm-9vm3 to fail attempts=3
+ pm -h powerman --off wtm-9vm3
Command completed successfully
reboot facets: mds1
+ pm -h powerman --on wtm-9vm3
Command completed successfully
Failover mds1 to wtm-9vm7
04:28:49 (1367234929) waiting for wtm-9vm7 network 900 secs ...
04:28:49 (1367234929) network interface is UP
CMD: wtm-9vm7 hostname
mount facets: mds1
Starting mds1:   lustre-mdt1/mdt1 /mnt/mds1
CMD: wtm-9vm7 mkdir -p /mnt/mds1; mount -t lustre   		                   lustre-mdt1/mdt1 /mnt/mds1
wtm-9vm7: mount.lustre: lustre-mdt1/mdt1 has not been formatted with mkfs.lustre or the backend filesystem type is not supported by this tool
Start of lustre-mdt1/mdt1 on mds1 failed 19
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/ac7cbc10-b0e3-11e2-b2c4-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/ac7cbc10-b0e3-11e2-b2c4-52540035b04c&lt;/a&gt;&lt;/p&gt;</description>
                <environment>&lt;br/&gt;
FSTYPE=zfs&lt;br/&gt;
FAILURE_MODE=HARD&lt;br/&gt;
</environment>
        <key id="18686">LU-3264</key>
            <summary>recovery-*-scale tests failed with FSTYPE=zfs and FAILURE_MODE=HARD</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yujian">Jian Yu</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                            <label>zfs</label>
                    </labels>
                <created>Thu, 2 May 2013 10:20:10 +0000</created>
                <updated>Thu, 15 Aug 2013 06:30:11 +0000</updated>
                            <resolved>Tue, 23 Jul 2013 18:25:32 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.4.1</fixVersion>
                    <fixVersion>Lustre 2.5.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>12</watches>
                    <comments>
                            <comment id="57508" author="liwei" created="Thu, 2 May 2013 12:03:16 +0000"  >&lt;p&gt;(CC&apos;ed Brian.  How does LLNL implement failovers with ZFS?)&lt;/p&gt;

&lt;p&gt;The pool lustre-mdt1 needs to be imported via &quot;zpool import -f ...&quot; on wtm-9vm7.  The tricky part, however, is how to prevent wtm-9vm3 from playing with the pool after rebooting.  It might be doable by never caching Lustre pool configurations (&quot;-o cachefile=none&quot; at creation time), so that none of them will be automatically imported anywhere.  It would be great if two nodes and a shared device were available for experiments.&lt;/p&gt;</comment>
                            <comment id="57520" author="yujian" created="Thu, 2 May 2013 14:14:20 +0000"  >&lt;blockquote&gt;&lt;p&gt;It would be great if two nodes and a shared device are available to experiments.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Let me setup the test environment and do some experiments.&lt;/p&gt;</comment>
                            <comment id="57547" author="adilger" created="Thu, 2 May 2013 17:59:08 +0000"  >&lt;p&gt;This kind of problem is why we would want to have MMP for ZFS, but that hasn&apos;t been developed yet.  However, for the sake of this bug, we just need to fix the ZFS import problem so that our automated testing scripts work.  &lt;/p&gt;</comment>
                            <comment id="57591" author="behlendorf" created="Thu, 2 May 2013 22:24:41 +0000"  >&lt;p&gt;Until we have MMP for ZFS we&apos;ve resolved this issue by delegating full authority for starting/stopping servers to Heartbeat.  See the ZPOOL_IMPORT_ARGS=&apos;-f&apos; line in the lustre/scripts/Lustre.ha_v2 resource script, which is used to always force importing the pool.  We also boot all of our nodes diskless, so they never have a persistent cache file and thus the pools never get automatically imported.  I admit it&apos;s a stopgap until we have real MMP, but in practice it&apos;s been working thus far.&lt;/p&gt;</comment>
                            <comment id="57598" author="liwei" created="Fri, 3 May 2013 01:26:00 +0000"  >&lt;p&gt;Thanks, Brian.  There&apos;s little info like this on the web.  (Perhaps it would be worthwhile to add an FAQ entry on zfsonlinux.org sometime.)&lt;/p&gt;</comment>
                            <comment id="57655" author="yujian" created="Fri, 3 May 2013 17:06:34 +0000"  >&lt;p&gt;Patch for master branch is in &lt;a href=&quot;http://review.whamcloud.com/6258&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6258&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="58427" author="yujian" created="Tue, 14 May 2013 06:08:03 +0000"  >&lt;p&gt;Patch was landed on master branch.&lt;/p&gt;</comment>
                            <comment id="58646" author="bzzz" created="Thu, 16 May 2013 11:14:46 +0000"  >&lt;p&gt;can you confirm the patch does work on a local setup?&lt;/p&gt;</comment>
                            <comment id="58647" author="bzzz" created="Thu, 16 May 2013 11:16:44 +0000"  >&lt;p&gt;with REFORMAT=y FSTYPE=zfs sh llmount.sh -v I&apos;m getting:&lt;/p&gt;

&lt;p&gt;Format mds1: lustre-mdt1/mdt1&lt;br/&gt;
CMD: centos grep -c /mnt/mds1&apos; &apos; /proc/mounts&lt;br/&gt;
CMD: centos lsmod | grep lnet &amp;gt; /dev/null &amp;amp;&amp;amp; lctl dl | grep &apos; ST &apos;&lt;br/&gt;
CMD: centos ! zpool list -H lustre-mdt1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||&lt;br/&gt;
			zpool export  lustre-mdt1&lt;br/&gt;
CMD: centos /work/lustre/head1/lustre/tests/../utils/mkfs.lustre --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=lov.stripesize=1048576 --param=lov.stripecount=0 --param=mdt.identity_upcall=/work/lustre/head1/lustre/tests/../utils/l_getidentity --backfstype=zfs --device-size=200000 --reformat lustre-mdt1/mdt1 /tmp/lustre-mdt1&lt;/p&gt;

&lt;p&gt;   Permanent disk data:&lt;br/&gt;
Target:     lustre:MDT0000&lt;br/&gt;
Index:      0&lt;br/&gt;
Lustre FS:  lustre&lt;br/&gt;
Mount type: zfs&lt;br/&gt;
Flags:      0x65&lt;br/&gt;
              (MDT MGS first_time update )&lt;br/&gt;
Persistent mount opts: &lt;br/&gt;
Parameters: sys.timeout=20 lov.stripesize=1048576 lov.stripecount=0 mdt.identity_upcall=/work/lustre/head1/lustre/tests/../utils/l_getidentity&lt;/p&gt;

&lt;p&gt;mkfs_cmd = zpool create -f -O canmount=off lustre-mdt1 /tmp/lustre-mdt1&lt;br/&gt;
mkfs_cmd = zfs create -o canmount=off -o xattr=sa lustre-mdt1/mdt1&lt;br/&gt;
Writing lustre-mdt1/mdt1 properties&lt;br/&gt;
  lustre:version=1&lt;br/&gt;
  lustre:flags=101&lt;br/&gt;
  lustre:index=0&lt;br/&gt;
  lustre:fsname=lustre&lt;br/&gt;
  lustre:svname=lustre:MDT0000&lt;br/&gt;
  lustre:sys.timeout=20&lt;br/&gt;
  lustre:lov.stripesize=1048576&lt;br/&gt;
  lustre:lov.stripecount=0&lt;br/&gt;
  lustre:mdt.identity_upcall=/work/lustre/head1/lustre/tests/../utils/l_getidentity&lt;br/&gt;
CMD: centos zpool set cachefile=none lustre-mdt1&lt;br/&gt;
CMD: centos ! zpool list -H lustre-mdt1 &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 ||&lt;br/&gt;
			zpool export  lustre-mdt1&lt;br/&gt;
...&lt;br/&gt;
Loading modules from /work/lustre/head1/lustre/tests/..&lt;br/&gt;
detected 2 online CPUs by sysfs&lt;br/&gt;
Force libcfs to create 2 CPU partitions&lt;br/&gt;
debug=vfstrace rpctrace dlmtrace neterror ha config ioctl super&lt;br/&gt;
subsystem_debug=all -lnet -lnd -pinger&lt;br/&gt;
gss/krb5 is not supported&lt;br/&gt;
Setup mgs, mdt, osts&lt;br/&gt;
CMD: centos mkdir -p /mnt/mds1&lt;br/&gt;
CMD: centos zpool import -f -o cachefile=none lustre-mdt1&lt;br/&gt;
cannot import &apos;lustre-mdt1&apos;: no such pool available&lt;/p&gt;
</comment>
                            <comment id="58653" author="yujian" created="Thu, 16 May 2013 13:57:58 +0000"  >&lt;blockquote&gt;&lt;p&gt;cannot import &apos;lustre-mdt1&apos;: no such pool available&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;For the &quot;zpool import&quot; command, if the -d option is not specified, the command will only search for devices in &quot;/dev&quot;. However, for a ZFS storage pool that has a file-based virtual device, we need to explicitly specify the search directory; otherwise the import command will not find the device.&lt;/p&gt;

&lt;p&gt;The patch for master branch is in &lt;a href=&quot;http://review.whamcloud.com/6358&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6358&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="58956" author="yujian" created="Tue, 21 May 2013 05:53:28 +0000"  >&lt;blockquote&gt;&lt;p&gt;The patch for master branch is in &lt;a href=&quot;http://review.whamcloud.com/6358&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6358&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;The patch was landed on both Lustre b2_4 and master branches.&lt;/p&gt;</comment>
                            <comment id="59181" author="utopiabound" created="Thu, 23 May 2013 17:16:18 +0000"  >&lt;p&gt;Reworked patch with fixes merged in:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/6429&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6429&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="62835" author="utopiabound" created="Tue, 23 Jul 2013 18:25:32 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/6429&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6429&lt;/a&gt; merged&lt;/p&gt;</comment>
                            <comment id="64082" author="yujian" created="Mon, 12 Aug 2013 14:21:15 +0000"  >&lt;blockquote&gt;&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/6429&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6429&lt;/a&gt; merged&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;The patch needs to be back-ported to Lustre b2_4 branch.&lt;/p&gt;</comment>
                            <comment id="64316" author="yujian" created="Thu, 15 Aug 2013 06:29:55 +0000"  >&lt;p&gt;Patch was landed on Lustre b2_4 branch.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvq0v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8083</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>