<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:17:37 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-15357] hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left</title>
                <link>https://jira.whamcloud.com/browse/LU-15357</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Looks like the same issue as &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-10433&quot; title=&quot;cfs_hash_destroy()) ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 3(1) is not  empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-10433&quot;&gt;&lt;del&gt;LU-10433&lt;/del&gt;&lt;/a&gt; to me.  Occurred during shutdown of an MDT.  vmcore-dmesg.txt file from crash dump shows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[7949565.740966] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) header@ffffa03aa81b09c0[0x4, 1, [0x1:0x0:0x0] hash exist]{

[7949565.756016] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) ....local_storage@ffffa03aa81b0a10

[7949565.768738] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) ....osd-zfs@ffffa03aa6c80140osd-zfs-object@ffffa03aa6c80140

[7949565.783886] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) } header@ffffa03aa81b09c0

[7949565.795741] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) header@ffffa03aa81b00c0[0x4, 1, [0xa:0x0:0x0] hash exist]{

[7949565.810788] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) ....local_storage@ffffa03aa81b0110

[7949565.823509] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) ....osd-zfs@ffffa03aa6c803c0osd-zfs-object@ffffa03aa6c803c0

[7949565.838649] LustreError: 4224:0:(osd_handler.c:1351:osd_device_free()) } header@ffffa03aa81b00c0

[7949565.850558] LustreError: 4224:0:(hash.c:1111:cfs_hash_destroy()) ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(3) is not  empty: 1 items left
[7949565.868513] LustreError: 4224:0:(hash.c:1111:cfs_hash_destroy()) LBUG
[7949565.875900] Pid: 4224, comm: umount 3.10.0-1160.36.2.1chaos.ch6.x86_64 #1 SMP Wed Jul 21 15:34:23 PDT 2021
[7949565.886871] Call Trace:
[7949565.889807]  [&amp;lt;ffffffffc12407ec&amp;gt;] libcfs_call_trace+0x8c/0xd0 [libcfs]
[7949565.897316]  [&amp;lt;ffffffffc12408ac&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
[7949565.904423]  [&amp;lt;ffffffffc124f85c&amp;gt;] cfs_hash_putref+0x3cc/0x520 [libcfs]
[7949565.911928]  [&amp;lt;ffffffffc1529e54&amp;gt;] lu_site_fini+0x54/0xa0 [obdclass]
[7949565.919162]  [&amp;lt;ffffffffc133d0cb&amp;gt;] osd_device_free+0x9b/0x2e0 [osd_zfs]
[7949565.926665]  [&amp;lt;ffffffffc14fcf82&amp;gt;] class_free_dev+0x4c2/0x720 [obdclass]
[7949565.934267]  [&amp;lt;ffffffffc14fd3e0&amp;gt;] class_export_put+0x200/0x2d0 [obdclass]
[7949565.942059]  [&amp;lt;ffffffffc14fef05&amp;gt;] class_unlink_export+0x145/0x180 [obdclass]
[7949565.950159]  [&amp;lt;ffffffffc1514990&amp;gt;] class_decref+0x80/0x160 [obdclass]
[7949565.950169]  [&amp;lt;ffffffffc1514e13&amp;gt;] class_detach+0x1d3/0x300 [obdclass]
[7949565.950179]  [&amp;lt;ffffffffc151bae8&amp;gt;] class_process_config+0x1a38/0x2830 [obdclass]
[7949565.950189]  [&amp;lt;ffffffffc151cac0&amp;gt;] class_manual_cleanup+0x1e0/0x710 [obdclass]
[7949565.950197]  [&amp;lt;ffffffffc133cd15&amp;gt;] osd_obd_disconnect+0x165/0x1a0 [osd_zfs]
[7949565.950208]  [&amp;lt;ffffffffc1526cc6&amp;gt;] lustre_put_lsi+0x106/0x4d0 [obdclass]
[7949565.950217]  [&amp;lt;ffffffffc1527200&amp;gt;] lustre_common_put_super+0x170/0x270 [obdclass]
[7949565.950230]  [&amp;lt;ffffffffc154ea00&amp;gt;] server_put_super+0x120/0xd00 [obdclass]
[7949565.950235]  [&amp;lt;ffffffffbca61e5d&amp;gt;] generic_shutdown_super+0x6d/0x110
[7949565.950236]  [&amp;lt;ffffffffbca61f12&amp;gt;] kill_anon_super+0x12/0x20
[7949565.950246]  [&amp;lt;ffffffffc151f6b2&amp;gt;] lustre_kill_super+0x32/0x50 [obdclass]
[7949565.950247]  [&amp;lt;ffffffffbca6189e&amp;gt;] deactivate_locked_super+0x4e/0x70
[7949565.950248]  [&amp;lt;ffffffffbca61906&amp;gt;] deactivate_super+0x46/0x60
[7949565.950251]  [&amp;lt;ffffffffbca8372f&amp;gt;] cleanup_mnt+0x3f/0x80
[7949565.950253]  [&amp;lt;ffffffffbca837c2&amp;gt;] __cleanup_mnt+0x12/0x20
[7949565.950255]  [&amp;lt;ffffffffbc8c7cab&amp;gt;] task_work_run+0xbb/0xf0
[7949565.950258]  [&amp;lt;ffffffffbc82dd95&amp;gt;] do_notify_resume+0xa5/0xc0
[7949565.950262]  [&amp;lt;ffffffffbcfc44ef&amp;gt;] int_signal+0x12/0x17
[7949565.950293]  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We had never seen this before, but started seeing it frequently beginning 2021-11-10.  Before that, there were two changes that may be related:&lt;br/&gt;
1. We enabled changelogs on the filesystem (&quot;brass&quot;) and started consuming them&lt;br/&gt;
2. We updated the system to the above-mentioned kernel and lustre versions.  I&apos;ll find the prior versions and post them.&lt;/p&gt;

&lt;p&gt;We&apos;ve seen this crash 5 times so far (in about 1 month), when failing MDTs over for maintenance on the nodes.&lt;/p&gt;

&lt;p&gt;For Lustre patch stack, see &lt;a href=&quot;https://github.com/LLNL/lustre/releases/tag/2.12.7_2.llnl&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/LLNL/lustre/releases/tag/2.12.7_2.llnl&lt;/a&gt;&lt;br/&gt;
For ZFS patch stack, see &lt;a href=&quot;https://github.com/LLNL/zfs/releases/tag/zfs-0.7.11-8llnl&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/LLNL/zfs/releases/tag/zfs-0.7.11-8llnl&lt;/a&gt;&lt;br/&gt;
For SPL patch stack, see &lt;a href=&quot;https://github.com/LLNL/spl/releases/tag/spl-0.7.11-8llnl&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/LLNL/spl/releases/tag/spl-0.7.11-8llnl&lt;/a&gt;&lt;br/&gt;
(Note that the spl/zfs rpm version is misleading; it really is the spl- and zfs-0.7.11-8llnl tag that was used to build those rpms.)&lt;/p&gt;</description>
                <environment>lustre-2.12.7_2.llnl-2&lt;br/&gt;
zfs-0.7.11-9.8llnl.ch6.x86_64&lt;br/&gt;
3.10.0-1160.45.1.1chaos.ch6.x86_64&lt;br/&gt;
rhel7-based</environment>
        <key id="67546">LU-15357</key>
            <summary>hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="ofaaland">Olaf Faaland</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Fri, 10 Dec 2021 00:02:41 +0000</created>
                <updated>Wed, 19 Apr 2023 03:39:53 +0000</updated>
                            <resolved>Mon, 22 Aug 2022 17:57:50 +0000</resolved>
                                    <version>Lustre 2.12.7</version>
                                    <fixVersion>Lustre 2.15.3</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="320528" author="pjones" created="Fri, 10 Dec 2021 18:43:20 +0000"  >&lt;p&gt;Mike&lt;/p&gt;

&lt;p&gt;Could you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="320529" author="adilger" created="Fri, 10 Dec 2021 18:47:47 +0000"  >&lt;p&gt;Is this being hit during normal testing, or are you running &lt;tt&gt;mds-survey&lt;/tt&gt; (&lt;tt&gt;lcfg cfg_device&lt;/tt&gt;) sometime before the filesystem is unmounted?  We hit a problem similar to this in testing some time ago that was related to how &lt;tt&gt;mds-survey&lt;/tt&gt; was being run.&lt;/p&gt;</comment>
                            <comment id="320546" author="ofaaland" created="Fri, 10 Dec 2021 21:21:08 +0000"  >&lt;blockquote&gt;&lt;p&gt;Is this being hit during normal testing&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This is a file system that has been in production for about 4 years, and at Lustre 2.12.X for around a year.&#160; Leading up to the first occurrence, and since then, it&apos;s been under use by ordinary users, and not used in any other way.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;are you running mds-survey (lcfg cfg_device) sometime before the filesystem is unmounted?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;No, we have not run mds-survey, nor any other test tools to my knowledge, against this system nor on the clients that mount it. No one else uses tools like that, besides Cameron and myself, and he would have coordinated it with me.&lt;/p&gt;</comment>
                            <comment id="320576" author="tappro" created="Fri, 10 Dec 2021 23:32:19 +0000"  >&lt;p&gt;Olaf, thanks for the info. Andreas, I will port the debug patch first, then will update EX-1873 for the major branches.&lt;/p&gt;</comment>
                            <comment id="320583" author="tappro" created="Sat, 11 Dec 2021 12:31:22 +0000"  >&lt;p&gt;Just few notes about the issue. First of all, the same issue may occur with different stack trace in newer code:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
[23370.525027] LustreError: 29448:0:(lu_object.c:1027:lu_site_print()) header@ffff90577d346600[0x4, 1, [0x200000003:0x0:0x0] hash exist]{

[23370.527626] LustreError: 29448:0:(lu_object.c:1027:lu_site_print()) ....local_storage@ffff90577d346658

[23370.529614] LustreError: 29448:0:(lu_object.c:1027:lu_site_print()) ....osd-ldiskfs@ffff9057177f8000osd-ldiskfs-object@ffff9057177f8000(i:ffff90575898f9e8:117/1506646403)[plain]

[23370.532701] LustreError: 29448:0:(lu_object.c:1027:lu_site_print()) } header@ffff90577d346600

[23370.534628] LustreError: 29448:0:(lu_object.c:1297:lu_device_fini()) ASSERTION( atomic_read(&amp;amp;d-&amp;gt;ld_ref) == 0 ) failed: Refcount is 1
[23370.536881] LustreError: 29448:0:(lu_object.c:1297:lu_device_fini()) LBUG
[23370.538240] Pid: 29448, comm: umount 3.10.0-1160.25.1.el7_lustre.ddn13.x86_64 #1 SMP Wed May 19 03:51:33 UTC 2021
[23370.540143] Call Trace:
[23370.540789] [&amp;lt;0&amp;gt;] libcfs_call_trace+0x90/0xf0 [libcfs]
[23370.541758] [&amp;lt;0&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
[23370.542946] [&amp;lt;0&amp;gt;] lu_device_fini+0xbb/0xc0 [obdclass]
[23370.543961] [&amp;lt;0&amp;gt;] dt_device_fini+0xe/0x10 [obdclass]
[23370.545011] [&amp;lt;0&amp;gt;] osd_device_free+0xae/0x2c0 [osd_ldiskfs]
[23370.546099] [&amp;lt;0&amp;gt;] class_free_dev+0x4c2/0x720 [obdclass]&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The reason is the use of rhashtable instead of cfs_hash, so the ASSERTION is in &lt;tt&gt;lu_device_fini()&lt;/tt&gt;, but it is the same underlying problem.&lt;/p&gt;

&lt;p&gt;As for our case, this is most likely due to changelogs being enabled, because the local storage FIDs are &lt;tt&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;0x1:0x0:0x0&amp;#93;&lt;/span&gt;&lt;/tt&gt; and &lt;tt&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;0xa:0x0:0x0&amp;#93;&lt;/span&gt;&lt;/tt&gt;, which are &lt;tt&gt;FID_SEQ_LLOG&lt;/tt&gt; and &lt;tt&gt;FID_SEQ_LLOG_NAME&lt;/tt&gt; respectively. So it looks like a changelog llog object wasn&apos;t properly put, or the changelog subsystem is still in use. I am going to check related issues in that area first.&lt;/p&gt;

</comment>
                            <comment id="320584" author="gerrit" created="Sat, 11 Dec 2021 12:56:26 +0000"  >&lt;p&gt;&quot;Mike Pershin &amp;lt;mpershin@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/45831&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45831&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; mdd: fix changelog context leak&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 4e3f19eb13650098c1f3b459b2273b45523869fc&lt;/p&gt;</comment>
                            <comment id="320585" author="gerrit" created="Sat, 11 Dec 2021 12:59:58 +0000"  >&lt;p&gt;&quot;Mike Pershin &amp;lt;mpershin@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/45832&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45832&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; mdd: fix changelog context leak&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: eccb58ff0700dfea617880dc22cec8ab8ce883ea&lt;/p&gt;</comment>
                            <comment id="320586" author="tappro" created="Sat, 11 Dec 2021 13:02:23 +0000"  >&lt;p&gt;This is the most obvious source of the problem and is worth checking first.&lt;/p&gt;</comment>
                            <comment id="321030" author="gerrit" created="Thu, 16 Dec 2021 14:26:40 +0000"  >&lt;p&gt;&quot;Mike Pershin &amp;lt;mpershin@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/45872&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45872&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; iokit: fix the obsolete usage of cfg_device&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: be87f89f6a6c72b101f7fd6792d4ed679f90de0e&lt;/p&gt;</comment>
                            <comment id="321390" author="gerrit" created="Thu, 23 Dec 2021 07:17:07 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/45831/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45831/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; mdd: fix changelog context leak&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: d083c93c6fd9251d6637d33029049b1d27d2a20a&lt;/p&gt;</comment>
                            <comment id="324462" author="gerrit" created="Sun, 30 Jan 2022 03:42:05 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/45832/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45832/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; mdd: fix changelog context leak&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_12&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 55fa745234ef0e1a01e0fb0622f1c00ecce047ea&lt;/p&gt;</comment>
                            <comment id="326317" author="gerrit" created="Tue, 15 Feb 2022 01:52:16 +0000"  >&lt;p&gt;&quot;Gian-Carlo DeFazio &amp;lt;defazio1@llnl.gov&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/46529&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/46529&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; mdd: fix changelog context leak&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_14&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: 8fa010f2202b97e0e4b76e295386ec438dd341b6&lt;/p&gt;</comment>
                            <comment id="329803" author="ofaaland" created="Mon, 21 Mar 2022 22:21:26 +0000"  >&lt;p&gt;Fixing the changelog context leak appears to have resolved the issue at our site.&lt;/p&gt;</comment>
                            <comment id="344289" author="ofaaland" created="Mon, 22 Aug 2022 17:57:39 +0000"  >&lt;p&gt;Since the backport was landed to b2_12, and the patch was already in 2.15.0, resolving this ticket.  I think the 2.14 backport can be abandoned.&lt;/p&gt;</comment>
                            <comment id="345291" author="gerrit" created="Thu, 1 Sep 2022 05:52:43 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/45872/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45872/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; iokit: fix the obsolete usage of cfg_device&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: a20b78a81d091cebd6b9e6c87537b2c955084cd5&lt;/p&gt;</comment>
                            <comment id="358102" author="gerrit" created="Fri, 6 Jan 2023 07:18:26 +0000"  >&lt;p&gt;&quot;Jian Yu &amp;lt;yujian@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/49566&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/49566&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; iokit: fix the obsolete usage of cfg_device&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_15&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: b02fcbd3e8763173fc997e2bb49bc06d8855fb67&lt;/p&gt;</comment>
                            <comment id="369841" author="gerrit" created="Wed, 19 Apr 2023 03:32:09 +0000"  >&lt;p&gt;&quot;Oleg Drokin &amp;lt;green@whamcloud.com&amp;gt;&quot; merged in patch &lt;a href=&quot;https://review.whamcloud.com/c/fs/lustre-release/+/49566/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/c/fs/lustre-release/+/49566/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15357&quot; title=&quot;hash.c:1111:cfs_hash_destroy() ASSERTION( !cfs_hash_with_assert_empty(hs) ) failed: hash lu_site_osd-zfs bucket 1(8) is not empty: 1 items left&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15357&quot;&gt;&lt;del&gt;LU-15357&lt;/del&gt;&lt;/a&gt; iokit: fix the obsolete usage of cfg_device&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: b2_15&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: 1e7624b711e173baf893cec0bfc687ddee000fc7&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="63520">LU-14553</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="67674">LU-15382</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i02c6n:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>