<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:59:09 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13189] ASSERTION( obj-&gt;oo_with_projid ) failed with 2.12.3</title>
                <link>https://jira.whamcloud.com/browse/LU-13189</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Seeing a crash fairly frequently on one of our oss&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;Feb 1 19:45&amp;#93;&lt;/span&gt; Lustre: work2-OST0002: Recovery over after 0:56, of 20 clients 20 recovered and 0 were evicted.&lt;br/&gt;
[ +0.000279] Lustre: work2-OST0002: deleting orphan objects from 0x0:268076412 to 0x0:268081537&lt;br/&gt;
[ +0.198454] LustreError: 14123:0:(osd_object.c:1345:osd_attr_set()) ASSERTION( obj-&amp;gt;oo_with_projid ) failed: &lt;br/&gt;
[ +0.000046] LustreError: 14123:0:(osd_object.c:1345:osd_attr_set()) LBUG&lt;br/&gt;
[ +0.000064] Pid: 14123, comm: ll_ost_io01_013 3.10.0-1062.9.1.el7.x86_64 #1 SMP Mon Dec 2 08:31:54 EST 2019&lt;br/&gt;
[ +0.000035] Call Trace:&lt;br/&gt;
[ +0.000018] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc10e87cc&amp;gt;&amp;#93;&lt;/span&gt; libcfs_call_trace+0x8c/0xc0 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001388] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc10e887c&amp;gt;&amp;#93;&lt;/span&gt; lbug_with_loc+0x4c/0xa0 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001275] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc179b458&amp;gt;&amp;#93;&lt;/span&gt; osd_attr_set+0xdd8/0xe50 &lt;span class=&quot;error&quot;&gt;&amp;#91;osd_zfs&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Message from syslogd@rit-ost1.las.iastate.edu at Feb 1 19:45:04 ...&lt;br/&gt;
 kernel:LustreError: 14123:0:(osd_object.c:1345:osd_attr_set()) ASSERTION( obj-&amp;gt;oo_with_projid ) failed: &lt;br/&gt;
[ +0.001272] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc190e622&amp;gt;&amp;#93;&lt;/span&gt; ofd_commitrw_write+0x13c2/0x1d40 &lt;span class=&quot;error&quot;&gt;&amp;#91;ofd&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001274] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc191212c&amp;gt;&amp;#93;&lt;/span&gt; ofd_commitrw+0x48c/0x9e0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ofd&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001255] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc15ad0fa&amp;gt;&amp;#93;&lt;/span&gt; tgt_brw_write+0x10ba/0x1ce0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001586] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc15ab2ea&amp;gt;&amp;#93;&lt;/span&gt; tgt_request_handle+0xaea/0x1580 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001574] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc155029b&amp;gt;&amp;#93;&lt;/span&gt; ptlrpc_server_handle_request+0x24b/0xab0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001555] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffc1553bfc&amp;gt;&amp;#93;&lt;/span&gt; ptlrpc_main+0xb2c/0x1460 &lt;span class=&quot;error&quot;&gt;&amp;#91;ptlrpc&amp;#93;&lt;/span&gt;&lt;br/&gt;
[ +0.001560] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffb28c61f1&amp;gt;&amp;#93;&lt;/span&gt; kthread+0xd1/0xe0&lt;br/&gt;
[ +0.001500] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffb2f8dd1d&amp;gt;&amp;#93;&lt;/span&gt; ret_from_fork_nospec_begin+0x7/0x21&lt;br/&gt;
[ +0.001487] &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffffffffff&amp;gt;&amp;#93;&lt;/span&gt; 0xffffffffffffffff&lt;br/&gt;
[ +0.001498] Kernel panic - not syncing: LBUG&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;I&apos;m not sure what exactly is causing it. The stack trace is from after the server reboots; as soon as recovery finishes and I/O starts again, it happens. Originally I thought it was related to the recovery process and that aborting recovery would work around it, but it still occurs. I&apos;m not sure whether it&apos;s a particular file or an I/O pattern that&apos;s leading to it, and I haven&apos;t been able to narrow it down to a specific job in our environment.&lt;/p&gt;

</description>
                <environment>rhel 7.7 zfs-0.8.2 kernel 3.10.0-1062.9.1.el7.x86_64</environment>
        <key id="57966">LU-13189</key>
            <summary>ASSERTION( obj-&gt;oo_with_projid ) failed with 2.12.3</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="dongyang">Dongyang Li</assignee>
                                    <reporter username="snehring">Shane Nehring</reporter>
                        <labels>
                    </labels>
                <created>Sun, 2 Feb 2020 02:04:06 +0000</created>
                <updated>Wed, 5 Apr 2023 20:08:11 +0000</updated>
                            <resolved>Mon, 11 Jul 2022 12:59:39 +0000</resolved>
                                    <version>Lustre 2.12.3</version>
                    <version>Lustre 2.14.0</version>
                    <version>Lustre 2.15.0</version>
                                    <fixVersion>Lustre 2.16.0</fixVersion>
                    <fixVersion>Lustre 2.15.1</fixVersion>
                                        <due></due>
                            <votes>1</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="263189" author="snehring" created="Wed, 12 Feb 2020 21:10:59 +0000"  >&lt;p&gt;I ended up undefining ZFS_PROJINHERIT and recompiling so I could get the OSS to stay up. It doesn&apos;t look like this code was touched in 2.12.4 (I had tried RC1 when I was running into this issue, and it still occurred).&lt;/p&gt;</comment>
                            <comment id="263735" author="snehring" created="Thu, 20 Feb 2020 20:57:27 +0000"  >&lt;p&gt;Please let me know if you need any more information.&lt;/p&gt;</comment>
                            <comment id="277374" author="snehring" created="Wed, 12 Aug 2020 20:40:41 +0000"  >&lt;p&gt;This started showing up on another OSS/OST as well. I implemented the same workaround there.&lt;/p&gt;</comment>
                            <comment id="314391" author="dvicker" created="Thu, 30 Sep 2021 13:47:52 +0000"  >&lt;p&gt;Shane, we are running into this same issue with our Lustre file system - we are running 2.14.&#160; At what level did you do the #undef ZFS_PROJINHERIT?&#160; Just in that source file, or for the whole Lustre build?&lt;/p&gt;</comment>
                            <comment id="314404" author="snehring" created="Thu, 30 Sep 2021 14:47:14 +0000"  >&lt;p&gt;What I&apos;ve done is add #undef ZFS_PROJINHERIT to lustre/osd-zfs/osd_internal.h&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;diff --git a/lustre/osd-zfs/osd_internal.h b/lustre/osd-zfs/osd_internal.h
index ae21447..58ef131 100644
--- a/lustre/osd-zfs/osd_internal.h
+++ b/lustre/osd-zfs/osd_internal.h
@@ -55,6 +55,7 @@
 #include &amp;lt;sys/dbuf.h&amp;gt;
 #include &amp;lt;sys/dmu_objset.h&amp;gt;
 #include &amp;lt;lustre_scrub.h&amp;gt;
+#undef ZFS_PROJINHERIT
 
 /**
  * By design including kmem.h overrides the Linux slab interfaces to provide &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;This keeps things up. I believe it will, at the very least, make project quotas non-functional.&lt;/p&gt;</comment>
                            <comment id="328725" author="rredl" created="Thu, 10 Mar 2022 09:27:19 +0000"  >&lt;p&gt;We also ran into the same issue with 2.14:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
kernel:LustreError: 24469:0:(osd_object.c:1353:osd_attr_set()) ASSERTION( obj-&amp;gt;oo_with_projid ) failed: 
kernel:LustreError: 24469:0:(osd_object.c:1353:osd_attr_set()) LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Is there any known workaround without recompiling?&lt;/p&gt;</comment>
                            <comment id="328743" author="dvicker" created="Thu, 10 Mar 2022 15:23:36 +0000"  >&lt;p&gt;We never found another workaround besides recompiling.&#160; I tried reaching out on the mailing list but didn&apos;t get any responses.&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2021-September/017791.html&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2021-September/017791.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2021-October/017794.html&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2021-October/017794.html&lt;/a&gt;&lt;/p&gt;

</comment>
                            <comment id="328752" author="snehring" created="Thu, 10 Mar 2022 16:19:13 +0000"  >&lt;p&gt;Had either of you ever enabled project quotas?&lt;/p&gt;</comment>
                            <comment id="328756" author="rredl" created="Thu, 10 Mar 2022 16:36:59 +0000"  >&lt;p&gt;We recently migrated from 2.12.8 to 2.14.0. On that occasion we activated project quotas, but have not actually used them yet. My current solution is:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;project quotas disabled&lt;/li&gt;
	&lt;li&gt;all clients evicted&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;That broke the cycle of reboot, recovery, and kernel panic. Over the last few hours everything has worked fine. But we would actually like to use project quotas in the future.&lt;/p&gt;</comment>
                            <comment id="328758" author="snehring" created="Thu, 10 Mar 2022 16:54:29 +0000"  >&lt;p&gt;Do you have the Lustre kernel patches applied on the OSS?&lt;/p&gt;

&lt;p&gt;I believe, at least in my case, that this is the result of enabling project quotas on a kernel version &amp;lt; 4.5 without the Lustre kernel patches.&lt;/p&gt;</comment>
                            <comment id="328900" author="rredl" created="Fri, 11 Mar 2022 09:12:31 +0000"  >&lt;p&gt;We are using the ZFS backend for MDT and OST. All servers are installed with the packages from the official repository. Versions:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Kernel: 4.18.0-240.1.1.el8_lustre.x86_64&lt;/li&gt;
	&lt;li&gt;Lustre: 2.14.0-1.el8 (kmod)&lt;/li&gt;
	&lt;li&gt;ZFS: 2.0.0-1.el8 (kmod)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;On a second, identical system, project quotas have already been in use for a few weeks without any problems.&lt;/p&gt;</comment>
                            <comment id="328954" author="dvicker" created="Fri, 11 Mar 2022 18:40:39 +0000"  >&lt;p&gt;No, we are not patching the kernel.&#160; I realized after the fact that project quota won&apos;t work without the patched kernel.&#160; I&apos;m still a little concerned as to why this would panic the OSS.&#160; I would like to know how to clear the project IDs from our OSS so we could go back to the unmodified Lustre source.&lt;/p&gt;

&lt;p&gt;We are also using ZFS for our MDT and OSTs.&#160; Our servers are CentOS 7.9 with kernel 3.10.0-1160.31.1.el7.x86_64, lustre-2.14.0_1.el7, and zfs 2.0.5.&lt;/p&gt;</comment>
                            <comment id="328991" author="snehring" created="Fri, 11 Mar 2022 21:35:37 +0000"  >&lt;p&gt;Hmm, that you&apos;re seeing this on a RHEL 8 kernel rather shoots that idea down, unless you&apos;re hitting it by some other means.&lt;/p&gt;

&lt;p&gt;I&apos;ve recently explicitly disabled project quotas. I&apos;ll be doing an upgrade to 2.12.8 on Monday, where I plan to omit my workaround to see if that resolves it. It&apos;s difficult for us to tell, though, as it can take a day or two for someone to start hitting a file that has this problem.&lt;/p&gt;</comment>
                            <comment id="329190" author="snehring" created="Mon, 14 Mar 2022 20:14:32 +0000"  >&lt;p&gt;I just thought to look at the configuration logs for the OSTs; it doesn&apos;t look like I ever actually enabled project quotas. So this may be something waiting in the wings that a filesystem can hit regardless of whether they were ever enabled.&lt;/p&gt;</comment>
                            <comment id="329374" author="rredl" created="Wed, 16 Mar 2022 15:11:51 +0000"  >&lt;p&gt;Two new observations:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Project quotas work fine on the same server for a different OST.&lt;/li&gt;
	&lt;li&gt;Disabling project quotas does not help. The system with project quotas disabled worked fine for the last five days, but today the same issue occurred again.&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="336767" author="rredl" created="Sun, 5 Jun 2022 15:17:44 +0000"  >&lt;p&gt;The problem unfortunately persists with Lustre 2.15.0-RC5 and ZFS 2.0.7:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: 809221:0:(osd_object.c:1300:osd_attr_set()) ASSERTION( obj-&amp;gt;oo_with_projid ) failed: 
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: 809016:0:(osd_object.c:1300:osd_attr_set()) ASSERTION( obj-&amp;gt;oo_with_projid ) failed: 
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: 808404:0:(osd_object.c:1300:osd_attr_set()) ASSERTION( obj-&amp;gt;oo_with_projid ) failed: 
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: 808404:0:(osd_object.c:1300:osd_attr_set()) LBUG
Jun 05 17:03:04 z-ha-oss02b kernel: Pid: 808404, comm: ll_ost02_001 4.18.0-372.9.1.el8.x86_64 #1 SMP Tue May 10 08:57:35 EDT 2022
Jun 05 17:03:04 z-ha-oss02b kernel: Call Trace TBD:
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] libcfs_call_trace+0x6f/0x90 [libcfs]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] lbug_with_loc+0x3f/0x70 [libcfs]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] osd_attr_set+0xe3f/0xed0 [osd_zfs]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] ofd_attr_set+0x638/0x1080 [ofd]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] ofd_setattr_hdl+0x454/0x8d0 [ofd]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] tgt_request_handle+0xc93/0x1a40 [ptlrpc]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] ptlrpc_server_handle_request+0x323/0xbd0 [ptlrpc]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] ptlrpc_main+0xc06/0x1560 [ptlrpc]
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] kthread+0x10a/0x120
Jun 05 17:03:04 z-ha-oss02b kernel: [&amp;lt;0&amp;gt;] ret_from_fork+0x35/0x40
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: dumping log to /tmp/lustre-log.1654441384.808404
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: 809221:0:(osd_object.c:1300:osd_attr_set()) LBUG
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: 822248:0:(osd_object.c:1300:osd_attr_set()) ASSERTION( obj-&amp;gt;oo_with_projid ) failed: 
Jun 05 17:03:04 z-ha-oss02b kernel: LustreError: 822248:0:(osd_object.c:1300:osd_attr_set()) LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="338451" author="zino" created="Wed, 22 Jun 2022 23:42:19 +0000"  >&lt;p&gt;NSC hit this bug yesterday. We have been running Rocky 8 and Lustre 2.14 with ZFS 2.1.x on a few filesystems since March, because we wanted dRAID support on the OSSs. We are not using project quota.&lt;/p&gt;

&lt;p&gt;When upgrading the remaining filesystems to 2.14 with OpenZFS 2.1.4 yesterday, they ran for 6 hours before five of the newly upgraded OSSs panicked, and then panicked again pretty quickly after reboot. After applying Shane&apos;s fix, things remained stable overnight.&lt;/p&gt;

&lt;p&gt;Today one OSS for another filesystem panicked, so we decided to apply the fix to all remaining servers, including the MDSs.&lt;/p&gt;

&lt;p&gt;This is just to reaffirm that more people are seeing this, and to offer thanks to Shane for sharing the workaround!&lt;/p&gt;</comment>
                            <comment id="338472" author="gerrit" created="Thu, 23 Jun 2022 07:16:59 +0000"  >&lt;p&gt;&quot;Li Dongyang &amp;lt;dongyangli@ddn.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/47709&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/47709&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13189&quot; title=&quot;ASSERTION( obj-&amp;gt;oo_with_projid ) failed with 2.12.3&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13189&quot;&gt;&lt;del&gt;LU-13189&lt;/del&gt;&lt;/a&gt; osd-zfs: fix assert on oo_with_projid&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 7d631c2f7f45caa36fbdc73b9d83bd98b43edd42&lt;/p&gt;</comment>
                            <comment id="338485" author="rredl" created="Thu, 23 Jun 2022 09:33:50 +0000"  >&lt;p&gt;Thanks a lot for the patch!&lt;/p&gt;

&lt;p&gt;Does this only avoid hitting the LASSERT with an old object, or does it also update the old object on disk with a project ID?&lt;/p&gt;</comment>
                            <comment id="338507" author="nscfreny" created="Thu, 23 Jun 2022 14:23:41 +0000"  >&lt;p&gt;Thanks for the patch.&lt;/p&gt;

&lt;p&gt;Running tests on a non-production filesystem at NSC (Rocky 8.6 + ZFS 2.1.5 + Lustre 2.14.0).&lt;/p&gt;</comment>
                            <comment id="338653" author="dongyang" created="Fri, 24 Jun 2022 11:44:47 +0000"  >&lt;p&gt;I&apos;ve updated patch &lt;a href=&quot;https://review.whamcloud.com/47709&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/47709&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;to add the project ID for old objects as well.&lt;/p&gt;</comment>
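The mechanism discussed in the comments above can be sketched as a toy model: old objects created without ZFS_PROJID trip the `oo_with_projid` assertion, while the patched behavior stamps a project ID onto them lazily. This is an illustrative simplification only; the class and function names below are invented for the example and are not actual Lustre internals.

```python
# Toy model (illustrative only, not real Lustre code) of the failure mode in
# this ticket: objects created before project-quota support lack a project ID,
# and the old osd_attr_set() path asserted oo_with_projid instead of handling
# them.

class OsdObject:
    """Stand-in for a ZFS OSD object; oo_with_projid mirrors the flag in the ticket."""
    def __init__(self, with_projid: bool):
        self.oo_with_projid = with_projid
        self.projid = 0 if with_projid else None

def attr_set_pre_fix(obj: OsdObject, projid: int) -> None:
    # Old behavior: LASSERT(obj->oo_with_projid) -> LBUG -> kernel panic
    # whenever an old object (no ZFS_PROJID) has its attributes set.
    assert obj.oo_with_projid, "ASSERTION( obj->oo_with_projid ) failed"
    obj.projid = projid

def attr_set_post_fix(obj: OsdObject, projid: int) -> None:
    # Patched behavior, per the patch subject ("add project id for old
    # objects without ZFS_PROJID"): stamp the project ID onto the old
    # object on the fly instead of asserting.
    if not obj.oo_with_projid:
        obj.oo_with_projid = True
    obj.projid = projid

old_obj = OsdObject(with_projid=False)
try:
    attr_set_pre_fix(old_obj, 42)
except AssertionError as exc:
    print("pre-fix:", exc)  # the LBUG path seen in the logs above

attr_set_post_fix(old_obj, 42)
print("post-fix projid:", old_obj.projid)
```

The key point is only the control flow: the assertion path is replaced by lazily adding the project ID to pre-existing objects, which is why the patched code also makes `lfs project` usable on files that predate project-quota support.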
                            <comment id="338658" author="rredl" created="Fri, 24 Jun 2022 12:36:20 +0000"  >&lt;p&gt;Is the update of old objects also done for objects on a ZFS-based MDT? Would &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15640&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;https://jira.whamcloud.com/browse/LU-15640&lt;/a&gt; also be solved by this patch?&lt;/p&gt;</comment>
                            <comment id="338671" author="dongyang" created="Fri, 24 Jun 2022 13:37:51 +0000"  >&lt;p&gt;Yes, it&apos;s for both the MDT and OST.&lt;/p&gt;

&lt;p&gt;I think it should let you set the project ID on old dirs now. It would be great if you could test it and give some feedback.&lt;/p&gt;

&lt;p&gt;BTW, was zpool upgrade used during the ZFS upgrade? What does zpool status -v show?&lt;/p&gt;</comment>
                            <comment id="338680" author="rredl" created="Fri, 24 Jun 2022 14:31:52 +0000"  >&lt;p&gt;Thank you very much! I will report back after a test.&lt;/p&gt;

&lt;p&gt;About the zpool: it was created on new hardware on ZFS 2.0.0, so it did have project quotas enabled by default. But the datasets were copied over from old hardware with zfs send/recv and did not have project quotas before. I tried to upgrade the datasets with zfs upgrade, but that did not have any effect.&lt;/p&gt;</comment>
                            <comment id="338866" author="rredl" created="Mon, 27 Jun 2022 06:03:20 +0000"  >&lt;p&gt;After applying the patch to the MDTs and OSTs, project quotas work as expected on a system that was migrated with zfs send/recv. Setting the project ID on old directories and files that were there before the migration is not failing anymore.&lt;/p&gt;

&lt;p&gt;Thanks a lot, @dongyang!&lt;/p&gt;</comment>
                            <comment id="338869" author="dongyang" created="Mon, 27 Jun 2022 06:23:06 +0000"  >&lt;p&gt;Thanks for the feedback, Robert.&lt;/p&gt;

&lt;p&gt;Good to know that setting the project ID is not failing. If you get the project ID after setting it on old files and dirs, does it show the expected one?&lt;/p&gt;

&lt;p&gt;Could you also verify that, after setting the project ID on the old files/dirs, the project quota accounting shows the expected numbers? They should reflect the old dirs/files.&lt;/p&gt;

&lt;p&gt;Cheers&lt;/p&gt;

&lt;p&gt;Dongyang&lt;/p&gt;</comment>
                            <comment id="338878" author="rredl" created="Mon, 27 Jun 2022 09:44:52 +0000"  >&lt;p&gt;Yes, I can confirm that after setting the project ID it is correctly shown by both lfs project and lsattr -p. New files created in an old directory with the inherit flag set are also correctly inheriting the project ID.&lt;/p&gt;

&lt;p&gt;The project quota also shows the expected values.&lt;/p&gt;</comment>
                            <comment id="338947" author="dongyang" created="Tue, 28 Jun 2022 01:58:55 +0000"  >&lt;p&gt;Great, thanks for the update Robert.&lt;/p&gt;</comment>
                            <comment id="339333" author="gerrit" created="Thu, 30 Jun 2022 20:03:13 +0000"  >&lt;p&gt;&quot;Andreas Dilger &amp;lt;adilger@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/47846&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/47846&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13189&quot; title=&quot;ASSERTION( obj-&amp;gt;oo_with_projid ) failed with 2.12.3&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13189&quot;&gt;&lt;del&gt;LU-13189&lt;/del&gt;&lt;/a&gt; osd-zfs: add project id for old objects without ZFS_PROJID&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_15&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 26539cc3744155c6b6ad89fc0b5ef1413a8beb14&lt;/p&gt;</comment>
                            <comment id="340011" author="gerrit" created="Mon, 11 Jul 2022 06:49:58 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/47709/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/47709/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13189&quot; title=&quot;ASSERTION( obj-&amp;gt;oo_with_projid ) failed with 2.12.3&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13189&quot;&gt;&lt;del&gt;LU-13189&lt;/del&gt;&lt;/a&gt; osd-zfs: add project id for old objects without ZFS_PROJID&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: ec79791a7cda5b66649200b16a70167d86059e65&lt;/p&gt;</comment>
                            <comment id="340044" author="pjones" created="Mon, 11 Jul 2022 12:59:39 +0000"  >&lt;p&gt;Landed for 2.16&lt;/p&gt;</comment>
                            <comment id="340082" author="gerrit" created="Mon, 11 Jul 2022 17:35:15 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/47846/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/47846/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-13189&quot; title=&quot;ASSERTION( obj-&amp;gt;oo_with_projid ) failed with 2.12.3&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-13189&quot;&gt;&lt;del&gt;LU-13189&lt;/del&gt;&lt;/a&gt; osd-zfs: add project id for old objects without ZFS_PROJID&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_15&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 5a5dad1bc0147b63f377168dde3fe799156a5abd&lt;/p&gt;</comment>
                            <comment id="340118" author="kaizaad" created="Mon, 11 Jul 2022 22:32:41 +0000"  >&lt;p&gt;We just hit this today, and Shane&apos;s&lt;/p&gt;

&lt;p&gt;#undef ZFS_PROJINHERIT&lt;/p&gt;

&lt;p&gt;patch seemed to fix it (thanks so much, Shane!). Note that we don&apos;t have project quotas enabled.&lt;/p&gt;

&lt;p&gt;CentOS Linux release 7.9.2009 (Core)&lt;br/&gt;
Kernel 3.10.0-1160.49.1.el7_lustre.x86_64&lt;br/&gt;
Lustre&#160;2.12.9&lt;br/&gt;
MDT - ldiskfs&lt;br/&gt;
OSTs - zfs-0.8.6&lt;/p&gt;

&lt;p&gt;We have only been running with these versions for ~3 weeks. The OSTs were upgraded from zfs 0.7.13, and we did run &quot;zpool upgrade ostpool&quot;.&lt;/p&gt;

&lt;p&gt;-k&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="69076">LU-15640</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00t13:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>