<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:20:27 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-15686] loopdev: op 0x9:(WRITE_ZEROES) not supported on Lustre / ZFS</title>
                <link>https://jira.whamcloud.com/browse/LU-15686</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Dear WC Team,&lt;/p&gt;

&lt;p&gt;To mitigate the &lt;b&gt;high cost of calling ftruncate on ZFS-backed Lustre&lt;/b&gt; (a call used by Fortran programs to extend the file size by a certain chunk), we are using locally mounted loop devices backed by files on Lustre.&lt;/p&gt;

&lt;p&gt;The underlying file is created as a sparse file, initialized as journal-less ext4 with the lazy_itable_init=1,stride=32,stripe-width=256 parameters, and mounted as temporary job storage.&lt;/p&gt;
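&lt;p&gt;For reference, the setup described above can be sketched as follows (the paths and size are illustrative, not taken from our environment):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# create a sparse backing file on Lustre (size illustrative)
truncate -s 1T /lustre/scratch/job.img
# journal-less ext4 with deferred itable init and RAID layout hints
mkfs.ext4 -O ^has_journal -E lazy_itable_init=1,stride=32,stripe-width=256 /lustre/scratch/job.img
# loop-mount as temporary job storage
mount -o loop /lustre/scratch/job.img /mnt/jobtmp
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;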

&lt;p&gt;This solution works well with LDISKFS-based OSTs. In the ZFS environment we are seeing a lot of &quot;operation not supported&quot; errors:&lt;/p&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[1118332.841747] blk_update_request: operation not supported error, dev loop0, sector 457191680 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
[1118349.509300] blk_update_request: operation not supported error, dev loop0, sector 457195776 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Apart from the significant volume of logs, we haven&apos;t seen any application instability so far.&lt;/p&gt;

&lt;p&gt;This happens regardless of the discard/nodiscard mount flags.&lt;/p&gt;</description>
                <environment>Lustre 2.15.0RC2 + ZFS 2.0.7&lt;br/&gt;
</environment>
        <key id="69243">LU-15686</key>
            <summary>loopdev: op 0x9:(WRITE_ZEROES) not supported on Lustre / ZFS</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="lflis">Lukasz Flis</reporter>
                        <labels>
                            <label>e2fsprogs</label>
                    </labels>
                <created>Thu, 24 Mar 2022 10:30:27 +0000</created>
                <updated>Fri, 25 Mar 2022 14:08:34 +0000</updated>
                                                                                <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="330071" author="lflis" created="Thu, 24 Mar 2022 11:38:33 +0000"  >&lt;p&gt;One clarification: the problem didn&apos;t appear on LDISKFS with 2.12 (there was no ll_fallocate function in 2.12).&lt;/p&gt;

&lt;p&gt;The problem is likely present on LDISKFS with 2.15 as well.&lt;/p&gt;</comment>
                            <comment id="330142" author="adilger" created="Thu, 24 Mar 2022 17:26:04 +0000"  >&lt;p&gt;Using the &quot;discard&quot; mount option for ext4 is not recommended, as it causes significant performance overhead.&lt;/p&gt;

&lt;p&gt;As for the errors appearing even without &quot;-o discard&quot;, is it possible that the errors are generated during mkfs time?  Running newer mke2fs will try to trim the whole device to free flash erase blocks and thin-provisioned storage, and use &quot;write same&quot; to avoid explicitly zeroing the inode table.  However, if &quot;lazy_itable_init&quot; is used, then the itable zeroing is deferred to a kernel thread that also tries to zero the inode table blocks with &quot;write same&quot; after the filesystem is mounted.&lt;/p&gt;
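&lt;p&gt;One way to test this is to disable lazy itable init at mkfs time, so the inode tables are zeroed during mkfs rather than later by the kernel thread, and then check whether the errors still appear after mount (invocation sketched here, not verified):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# zero the inode tables at mkfs time instead of from the kernel thread
mkfs.ext4 -O ^has_journal -E lazy_itable_init=0,stride=32,stripe-width=256 /path/to/backing.img
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;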

&lt;p&gt;There is an experimental patch to mke2fs that will assume the whole filesystem is &quot;zeroed&quot;, which can be used for the case of loopback devices that are on &lt;b&gt;newly created&lt;/b&gt; sparse files that are known to contain only zeroes:&lt;br/&gt;
&lt;a href=&quot;https://patchwork.ozlabs.org/project/linux-ext4/patch/20210921034203.323950-1-sarthakkukreti@google.com/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://patchwork.ozlabs.org/project/linux-ext4/patch/20210921034203.323950-1-sarthakkukreti@google.com/&lt;/a&gt;&lt;/p&gt;
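&lt;p&gt;With that patch applied, the invocation would presumably look like the following (the extended-option name is taken from the patch and has not been verified here):&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# tell mke2fs the backing store is already all zeroes (newly created sparse file)
mkfs.ext4 -O ^has_journal -E assume_storage_prezeroed=1 /path/to/backing.img
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;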

&lt;p&gt;This patch is included in upstream e2fsprogs v1.46.5, but not yet in the Lustre e2fsprogs-1.46.2-wc4.  We have never tested the &lt;tt&gt;assume_storage_prezeroed&lt;/tt&gt; feature, but if this version is only being used on the client against a loopback file, and not on the server, it shouldn&apos;t cause any problems.&lt;/p&gt;</comment>
                            <comment id="330232" author="lflis" created="Fri, 25 Mar 2022 14:08:34 +0000"  >&lt;p&gt;Hi Andreas,&lt;/p&gt;

&lt;p&gt;During mount we are supplying the -o nodiscard flag, which seems to be effective, i.e.:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[Wed Mar 23 14:24:57 2022] EXT4-fs (loop0): mounted filesystem without journal. Opts: nodiscard
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I am sure, looking at the logs, that the &lt;b&gt;op not supported&lt;/b&gt; errors appear &lt;b&gt;after&lt;/b&gt; mkfs is completed and the fs is mounted. This seems to be an effect of using a loop-mounted ext4 fs.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="68901">LU-15607</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i02llr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>