<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:51:04 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-12265] LustreError: 141027:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5</title>
                <link>https://jira.whamcloud.com/browse/LU-12265</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Hello,&lt;br/&gt;
I have recently been running the IO500 benchmark suite across our all-flash NVMe-based filesystem, and I have now twice come across the following errors, which cause client IO errors and run failure. I was hoping to find out more about what they indicate.&lt;/p&gt;

&lt;p&gt;The following are errors on one of the servers, which is a combined OSS &amp;amp; MDS, and is one of 24 such servers:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;May 06 09:29:43 dac-e-3 kernel: LustreError: 141015:0:(osd_iam_lfix.c:188:iam_lfix_init()) Skipped 11 previous similar messages
May 06 09:29:43 dac-e-3 kernel: LustreError: 141015:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5
May 06 08:49:09 dac-e-3 kernel: LustreError: 140855:0:(osd_iam_lfix.c:188:iam_lfix_init()) Skipped 9 previous similar messages
May 06 08:49:09 dac-e-3 kernel: LustreError: 140855:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5
May 06 08:47:25 dac-e-3 kernel: LustreError: 141027:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I see no other lustre errors on any other servers, or on any of the clients, but the client application sees an error.&lt;/p&gt;

&lt;p&gt;These errors are also only rarely seen, so I&apos;m not sure I can easily reproduce them. I have been running this benchmark suite very intensively over the past few days, and we fairly frequently re-format the drives and rebuild the filesystems, as this is a pool of hardware that we use in a filesystem-on-demand style of operation.&lt;/p&gt;

&lt;p&gt;At the time of the errors I was running an mdtest benchmark from the &apos;md easy&apos; portion of the suite, with 128 clients and 32 ranks, so a very large number of files were being created:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdtest-1.9.3 was launched with 2048 total task(s) on 128 node(s)
Command line used: /home/mjr208/projects/benchmarking/io-500-src-stonewall-fix/bin/mdtest &quot;-C&quot; &quot;-n&quot; &quot;140000&quot; &quot;-u&quot; &quot;-L&quot; &quot;-F&quot; &quot;-d&quot; &quot;/dac/fs1/mjr208/job11312297-2019-05-05-2356/mdt_easy&quot;
Path: /dac/fs1/mjr208/job11312297-2019-05-05-2356
FS: 412.6 TiB   Used FS: 24.2%   Inodes: 960.0 Mi   Used Inodes: 0.0%

2048 tasks, 286720000 files
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 480 (rank 480 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 480
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 486 (rank 486 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 486
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 488 (rank 488 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 488
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 491 (rank 491 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 491
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 492 (rank 492 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 492
ior ERROR: open64() failed, errno 5, Input/output error (aiori-POSIX.c:376)
Abort(-1) on node 493 (rank 493 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 493
Abort(-1) on node 482 (rank 482 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 482
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The filesystem itself is configured using DNE, and specifically we are using DNE2 striped directories for all mdtest runs. We are using a large number of MDTs, 24 at the moment, one per server (which, other than this problem, is otherwise working excellently), and the directory stripe count is &apos;-1&apos;, so we are striping all directories over all 24 MDTs. Each server contains 12 NVMe drives, and we partition one of the drives so it has both an OST and an MDT partition.&lt;/p&gt;

&lt;p&gt;Lustre and Kernel versions are as follows:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Server: kernel-3.10.0-957.el7_lustre.x86_64
Server: lustre-2.12.0-1.el7.x86_64

Clients: kernel-3.10.0-957.10.1.el7.x86_64
Clients: lustre-client-2.10.7-1.el7.x86_64
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt; 

&lt;p&gt;Could I get some advice on what this error indicates here?&lt;/p&gt;</description>
                <environment></environment>
        <key id="55574">LU-12265</key>
            <summary>LustreError: 141027:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="4" iconUrl="https://jira.whamcloud.com/images/icons/statuses/reopened.png" description="This issue was once resolved, but the resolution was deemed incorrect. From here issues are either marked assigned or resolved.">Reopened</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="mrb">Matt R&#225;s&#243;-Barnett</reporter>
                        <labels>
                    </labels>
                <created>Mon, 6 May 2019 09:05:28 +0000</created>
                <updated>Fri, 28 Jan 2022 01:42:05 +0000</updated>
                                            <version>Lustre 2.12.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>13</watches>
                                                                            <comments>
                            <comment id="246748" author="pjones" created="Mon, 6 May 2019 17:18:10 +0000"  >&lt;p&gt;Hongchao&lt;/p&gt;

&lt;p&gt;Could you please investigate?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="246881" author="hongchao.zhang" created="Thu, 9 May 2019 11:57:31 +0000"  >&lt;p&gt;This error shows the index files (OI, scrub, etc.) are corrupted; it could be caused by a hardware issue or a disk driver issue.&lt;br/&gt;
Are there any logs related to this kind of failure (such as errors containing &quot;LDISKFS-fs error&quot; or &quot;JBD2&quot;)?&lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;</comment>
                            <comment id="246958" author="mrb" created="Fri, 10 May 2019 08:22:39 +0000"  >&lt;p&gt;Hi Hongchao, I have another ticket open &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12268&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;https://jira.whamcloud.com/browse/LU-12268&lt;/a&gt; where I do see &apos;LDISKFS-fs error&apos; messages; however, these usually do not occur at the same time as the above errors.&lt;/p&gt;

&lt;p&gt;Both are seen, though, when I&apos;m running the same benchmark, so perhaps they are related. My normal mode of operation is to re-create the entire filesystem after I get these errors, but I have seen them relatively frequently when running these large-scale mdtest benchmarks.&lt;/p&gt;

&lt;p&gt;I also &lt;b&gt;only&lt;/b&gt; get these errors when I&apos;m striping the mdtest directories over all 24 MDTs in the filesystem, e.g.:&lt;/p&gt;

&lt;p&gt;lfs setdirstripe -c -1 -D mdt_easy&lt;/p&gt;

&lt;p&gt;I don&apos;t see any problems when I&apos;m either not striping every directory (just using DNE remote directories), e.g.:&lt;/p&gt;

&lt;p&gt;lfs setdirstripe -c -1 mdt_easy  (so all the subdirectories are on a different MDT, but are not themselves striped) &lt;/p&gt;

&lt;p&gt;or when the directory stripe_count is &amp;lt;= 12, e.g.:&lt;/p&gt;

&lt;p&gt;lfs setdirstripe -c 12 -D mdt_easy&lt;/p&gt;

&lt;p&gt;I haven&apos;t probed stripe_count values in (12, 24] yet. So perhaps this is DNE-related?&lt;/p&gt;

&lt;p&gt;I can&apos;t rule out a hardware or driver issue; however, I&apos;ve seen this error move around and occur on different MDTs in the filesystem, and as mentioned above it appears to go away if I don&apos;t use a large stripe_count.&lt;/p&gt;</comment>
                            <comment id="247325" author="hongchao.zhang" created="Fri, 17 May 2019 11:12:16 +0000"  >&lt;p&gt;I have tried running mdtest with 24 MDTs in one VM and did not encounter this problem.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@zhanghc tests]# ../utils/lfs getstripe -D /mnt/lustre/mdt_easy
(Default)/mnt/lustre/mdt_easy
lmv_stripe_count: 4294967295 lmv_stripe_offset: -1 lmv_hash_type: fnv_1a_64

[root@zhanghc mdtest]# ./mdtest -C -n 70000 -u -L -F -d /mnt/lustre/mdt_easy
-- started at 05/15/2019 10:29:42 --

mdtest-1.9.3 was launched with 1 total task(s) on 1 node(s)
Command line used: ./mdtest -C -n 70000 -u -L -F -d /mnt/lustre/mdt_easy
Path: /mnt/lustre
FS: 0.6 GiB   Used FS: 5.1%   Inodes: 0.2 Mi   Used Inodes: 3.4%

1 tasks, 70000 files

SUMMARY: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :       2066.418       2066.418       2066.418          0.000
   File stat         :          0.000          0.000          0.000          0.000
   File read         :          0.000          0.000          0.000          0.000
   File removal      :          0.000          0.000          0.000          0.000
   Tree creation     :         11.047         11.047         11.047          0.000
   Tree removal      :          0.000          0.000          0.000          0.000

-- finished at 05/15/2019 10:30:15 --
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="247330" author="mrb" created="Fri, 17 May 2019 14:19:51 +0000"  >&lt;p&gt;Hi Hongchao, unfortunately I&apos;ve started seeing this error again over the past couple of days, and as mentioned in my last comment, this is where I am using a striped directory only on the parent directory, to ensure that all the subdirectories are placed on different MDTs.&lt;/p&gt;

&lt;p&gt;The application sees an I/O error when it fails to create the directory:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;error could not create directory &quot;/dac/fs1/mjr208/job11535231-2019-05-17-1421/mdt_easy/#test-dir.0-0/mdtest_tree.408.0/&quot;
ior ERROR: open64() failed, errno 2, No such file or directory (aiori-POSIX.c:407)
Abort(-1) on node 408 (rank 408 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 408
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and I get the error on the MDS as mentioned before.&lt;/p&gt;

&lt;p&gt;I&apos;m not sure what else would be useful to look for here. I don&apos;t see any other syslog messages indicating hardware/driver issues with the device, and I&apos;ve seen this error on different MDS/MDTs.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Matt&lt;/p&gt;</comment>
                            <comment id="247523" author="hongchao.zhang" created="Wed, 22 May 2019 13:13:59 +0000"  >&lt;p&gt;Hi, I have tried to reproduce this issue in local VMs, but could not reproduce it.&lt;br/&gt;
Would it be possible to log in to your test site to check it? Thanks!&lt;/p&gt;</comment>
                            <comment id="247649" author="mrb" created="Fri, 24 May 2019 15:03:27 +0000"  >&lt;p&gt;Hi Hongchao,&lt;br/&gt;
This might be possible to do, but this has been a relatively rare bug for us, so I&apos;m tempted to leave this as a &apos;Can&apos;t Reproduce&apos; until we are in a position to run into it again. It was only showing during the most strenuous IO500 runs, and I didn&apos;t investigate other factors first - I wanted to raise the ticket in case the errors indicated anything obvious.&lt;/p&gt;

&lt;p&gt;So perhaps we close this and my other ticket for now, and if I run into this issue again I can reopen and get you onto the platform to investigate?&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Matt&lt;/p&gt;</comment>
                            <comment id="248759" author="adilger" created="Fri, 7 Jun 2019 22:16:22 +0000"  >&lt;p&gt;This looks like it is still being hit.&lt;/p&gt;</comment>
                            <comment id="248972" author="hongchao.zhang" created="Tue, 11 Jun 2019 12:04:31 +0000"  >&lt;p&gt;Hi Andreas,&lt;br/&gt;
Where did you encounter this problem? Is it reproducible? Thanks!&lt;/p&gt;</comment>
                            <comment id="249060" author="adilger" created="Tue, 11 Jun 2019 22:41:43 +0000"  >&lt;p&gt;Matt has been hitting it regularly in his large-scale IO-500 runs in CAM-79.  &lt;/p&gt;</comment>
                            <comment id="255252" author="zam" created="Mon, 23 Sep 2019 10:43:41 +0000"  >&lt;p&gt;r/w semaphores are broken in RH kernels up to RH7.7, see &lt;a href=&quot;https://access.redhat.com/solutions/3393611&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://access.redhat.com/solutions/3393611&lt;/a&gt;&lt;br/&gt;
It would be good to check whether the problem still exists with kernel-3.10.0-1062.el7:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Red Hat Enterprise Linux 7.7&lt;/p&gt;

&lt;p&gt;The issue was fixed in kernel-3.10.0-1062.el7 from Errata RHSA-2019:2029&lt;/p&gt;&lt;/blockquote&gt;

</comment>
                            <comment id="286866" author="artem_blagodarenko" created="Mon, 7 Dec 2020 13:04:03 +0000"  >&lt;p&gt;We faced this problem on one of our clusters. While researching the IAM code, I found some dead code. I created &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-14188&quot; title=&quot;rw_semaphore in the iam_container structure that never been used&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-14188&quot;&gt;&lt;del&gt;LU-14188&lt;/del&gt;&lt;/a&gt; and &lt;a href=&quot;https://review.whamcloud.com/#/c/40890/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/#/c/40890/&lt;/a&gt;, which removes this unused code.&lt;/p&gt;

&lt;p&gt;I wonder if this semaphore was used somewhere in the past and could be useful now.&lt;/p&gt;</comment>
                            <comment id="287189" author="adilger" created="Thu, 10 Dec 2020 11:26:35 +0000"  >&lt;p&gt;Artem, I think this problem was fixed in the RHEL7 kernel.  It was seen by a number of sites that had this same kernel, but upgrading to the later RHEL7 kernels fixed the problem.&lt;/p&gt;</comment>
                            <comment id="287191" author="artem_blagodarenko" created="Thu, 10 Dec 2020 11:53:09 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/ViewProfile.jspa?name=adilger&quot; class=&quot;user-hover&quot; rel=&quot;adilger&quot;&gt;adilger&lt;/a&gt;, do you know the exact root cause of the problem? I am asking so I know which patches we need to prevent this bug from happening again. Thanks.&lt;/p&gt;</comment>
                            <comment id="287192" author="adilger" created="Thu, 10 Dec 2020 12:04:01 +0000"  >&lt;p&gt;See the earlier comment in this ticket:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;r/w semaphores are broken in RH kernels up to RH7.7, see &lt;a href=&quot;https://access.redhat.com/solutions/3393611&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://access.redhat.com/solutions/3393611&lt;/a&gt;&lt;br/&gt;
It would be good to check whether the problem still exists with kernel-3.10.0-1062.el7:&lt;/p&gt;

&lt;p&gt;    Red Hat Enterprise Linux 7.7&lt;/p&gt;

&lt;p&gt;    The issue was fixed in kernel-3.10.0-1062.el7 from Errata RHSA-2019:2029&lt;/p&gt;&lt;/blockquote&gt;</comment>
                            <comment id="287287" author="artem_blagodarenko" created="Fri, 11 Dec 2020 06:35:27 +0000"  >&lt;p&gt;We faced the problem with these patches already applied:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;
2291-kernel-locking-rwsem-Fix-possible-missed-wakeup.patch
2290-kernel-futex-Fix-possible-missed-wakeup.patch
2289-kernel-futex-Use-smp_store_release-in-mark_wake_fute.patch
2288-kernel-sched-wake_q-Fix-wakeup-ordering-for-wake_q.patch &lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;There are no other rwsem-related patches to apply and the problem still exists.&lt;/p&gt;</comment>
                            <comment id="313280" author="adilger" created="Fri, 17 Sep 2021 19:40:11 +0000"  >&lt;p&gt;Aside from determining and fixing the root cause of this IAM corruption, it makes sense for the IAM/OSD code to handle this in a more robust manner. If the IAM block is corrupted, the current remedy is only to delete and rebuild all the OI files. It would be useful (and not &lt;em&gt;more&lt;/em&gt; disruptive) to just reset the corrupt IAM block and then trigger a full OI Scrub to verify/reinsert any missing FIDs. This makes the OI file at least somewhat self-healing.&lt;/p&gt;

&lt;p&gt;As part of this process, it &lt;em&gt;might&lt;/em&gt; make sense to try and scan/repair the IAM file itself.  However, since we need a full OI Scrub to find any FIDs affected by the corruption, it probably makes more sense to build a new &quot;shadow OI&quot; file (for the corrupted OI file only, because they can grow to tens of GB in size for a large MDT). That is what &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15016&quot; title=&quot;OI Scrub backup and rebuild&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15016&quot;&gt;LU-15016&lt;/a&gt; is about. &lt;/p&gt;

&lt;p&gt;Since there are other benefits to rebuilding the OI file (compacting/freeing old entries, improving insertion speed), I don&apos;t think it is worthwhile to spend too much time on repairing the existing OI file - just enough to keep the system usable.&lt;/p&gt;</comment>
                            <comment id="314119" author="gerrit" created="Tue, 28 Sep 2021 07:51:50 +0000"  >&lt;p&gt;&quot;Hongchao Zhang &amp;lt;hongchao@whamcloud.com&amp;gt;&quot; uploaded a new patch: &lt;a href=&quot;https://review.whamcloud.com/45071&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45071&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12265&quot; title=&quot;LustreError: 141027:0:(osd_iam_lfix.c:188:iam_lfix_init()) Bad magic in node 1861726 #34: 0xcc != 0x1976 or bad cnt: 0 170: rc = -5&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12265&quot;&gt;LU-12265&lt;/a&gt; osd: fix corrupted OI file online&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: c0b2d11c325e042f724447ee45bc1ca1d2ff5379&lt;/p&gt;</comment>
                            <comment id="314142" author="aboyko" created="Tue, 28 Sep 2021 13:36:25 +0000"  >&lt;p&gt;FYI I pushed patch &lt;a href=&quot;https://review.whamcloud.com/45072&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://review.whamcloud.com/45072&lt;/a&gt; &quot;&lt;tt&gt;&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-12268&quot; title=&quot;LDISKFS-fs error: ldiskfs_find_dest_de:2066: bad entry in directory: rec_len is smaller than minimal - offset=0( 0), inode=201, rec_len=0, name_len=0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-12268&quot;&gt;&lt;del&gt;LU-12268&lt;/del&gt;&lt;/a&gt; osd: BUG_ON for IAM corruption&lt;/tt&gt;&quot;. It detects IAM bh overflow early and fails the node. This prevents on-disk FS corruption and gathers more data for analysis.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="61883">LU-14188</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="66119">LU-15016</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="55581">LU-12268</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="61883">LU-14188</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00fvr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>