<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 03:00:43 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-13372] fir-MDD0001: there are no more free slots in catalog changelog_catalog</title>
                <link>https://jira.whamcloud.com/browse/LU-13372</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;We hit an LBUG on 2.12.3 last night:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[8325133.774837] LustreError: 59728:0:(llog_osd.c:617:llog_osd_write_rec()) fir-MDT0001-osd: index 14075 already set in log bitmap
[8325133.786320] LustreError: 59728:0:(llog_osd.c:619:llog_osd_write_rec()) LBUG
[8325133.793471] Pid: 59728, comm: mdt_rdpg02_021 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Also, after a restart:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Mar 18 08:35:18 fir-md1-s2 kernel: LustreError: 34211:0:(llog_osd.c:617:llog_osd_write_rec()) fir-MDT0001-osd: index 14096 already set in log bitmap
Mar 18 08:35:18 fir-md1-s2 kernel: LustreError: 34211:0:(llog_osd.c:619:llog_osd_write_rec()) LBUG
Mar 18 08:35:18 fir-md1-s2 kernel: Pid: 34211, comm: mdt_rdpg01_053 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
Message from syslogd@fir-md1-s2 at Mar 18 08:35:18 ...
 kernel:LustreError: 34211:0:(llog_osd.c:619:llog_osd_write_rec()) LBUG
Mar 18 08:35:18 fir-md1-s2 kernel: Call Trace:
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0ccd7cc&amp;gt;] libcfs_call_trace+0x8c/0xc0 [libcfs]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0ccd87c&amp;gt;] lbug_with_loc+0x4c/0xa0 [libcfs]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0df72ba&amp;gt;] llog_osd_write_rec+0x16ca/0x1730 [obdclass]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc18fa06f&amp;gt;] mdd_changelog_write_rec+0x2f/0x120 [mdd]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0de81ab&amp;gt;] llog_write_rec+0xcb/0x520 [obdclass]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0dece0c&amp;gt;] llog_cat_new_log+0x62c/0xce0 [obdclass]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0ded6bb&amp;gt;] llog_cat_add_rec+0x1fb/0x880 [obdclass]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0de5180&amp;gt;] llog_add+0x80/0x1a0 [obdclass]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc18fa2dd&amp;gt;] mdd_changelog_store+0x17d/0x520 [mdd]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc1907374&amp;gt;] mdd_changelog_data_store_by_fid+0x1d4/0x350 [mdd]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc19096c2&amp;gt;] mdd_changelog_data_store+0x142/0x200 [mdd]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc190ab38&amp;gt;] mdd_close+0xae8/0xf30 [mdd]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc179597e&amp;gt;] mdt_mfd_close+0x3fe/0x860 [mdt]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc179b291&amp;gt;] mdt_close_internal+0x121/0x220 [mdt]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc179b5b0&amp;gt;] mdt_close+0x220/0x780 [mdt]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc117636a&amp;gt;] tgt_request_handle+0xaea/0x1580 [ptlrpc]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc111d24b&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffffc1120bac&amp;gt;] ptlrpc_main+0xb2c/0x1460 [ptlrpc]
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffff890c2e81&amp;gt;] kthread+0xd1/0xe0
Mar 18 08:35:18 fir-md1-s2 kernel:  [&amp;lt;ffffffff89777c24&amp;gt;] ret_from_fork_nospec_begin+0xe/0x21
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;



&lt;p&gt;We stopped the changelog reader (Robinhood) for this MDT and were then able to start it, but we then hit the following issues:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[Wed Mar 18 10:28:44 2020][ 520.331132] Lustre: 21084:0:(llog_cat.c:98:llog_cat_new_log()) fir-MDD0001: there are no more free slots in catalog changelog_catalog
[Wed Mar 18 10:28:44 2020][ 520.343129] Lustre: 21084:0:(llog_cat.c:98:llog_cat_new_log()) Skipped 159923 previous similar messages
[Wed Mar 18 10:28:44 2020][ 520.353111] LustreError: 21179:0:(llog_cat.c:530:llog_cat_current_log()) fir-MDD0001: next log does not exist!
[Wed Mar 18 10:28:44 2020][ 520.363117] LustreError: 21179:0:(llog_cat.c:530:llog_cat_current_log()) Skipped 137757 previous similar messages
[Wed Mar 18 10:28:48 2020][ 524.957681] LustreError: 21095:0:(mdd_dir.c:1065:mdd_changelog_ns_store()) fir-MDD0001: cannot store changelog record: type = 1, name = &apos;alignment.eigen.indiv&apos;, t = [0x240049459:0x9f67:0x0], p = [0x2400478b1:0x1e9d6:0x0]: rc = -28
[Wed Mar 18 10:28:54 2020][ 530.656413] LustreError: 20878:0:(mdd_dir.c:1065:mdd_changelog_ns_store()) fir-MDD0001: cannot store changelog record: type = 6, name = &apos;alignment.eigen.indiv&apos;, t = [0x240049419:0xea19:0x0], p = [0x24004ac39:0x22e6:0x0]: rc = -5
[Wed Mar 18 10:28:54 2020][ 530.676655] LustreError: 20878:0:(mdd_dir.c:1065:mdd_changelog_ns_store()) Skipped 2 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Full console log attached as  &lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.whamcloud.com/secure/attachment/34476/34476_fir-md1-s2_console-changelogs_no_more_free_slots_2_12_4.log&quot; title=&quot;fir-md1-s2_console-changelogs_no_more_free_slots_2_12_4.log attached to LU-13372&quot;&gt;fir-md1-s2_console-changelogs_no_more_free_slots_2_12_4.log&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.whamcloud.com/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt; &lt;/p&gt;

&lt;p&gt;Then, &lt;b&gt;we upgraded the MDS from 2.12.3 to 2.12.4&lt;/b&gt; and the issue was still there.&lt;/p&gt;

&lt;p&gt;Users also reported the following errors when creating new files:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[zhengyuh@sh02-ln01 login /scratch/users/zhengyuh/$ touch aa

touch: cannot touch &#8216;aa&#8217;: Bad address
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Because this was kind of an emergency, I applied the following procedure to recreate the changelog files:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-md1-s2 1]# llog_reader changelog_catalog &amp;gt;/tmp/mdt1_changelog_catalog
[root@fir-md1-s2 1]# llog_reader changelog_users
Bit 1 of 1 not set
rec #2 type=1064553b len=64 offset 8256
Header size : 8192
Time : Thu Jan 24 14:04:30 2019
Number of records: 1
Target uuid : 
-----------------------
#02 (064)id=[0x410fe:0x1:0x0]:0 path=O/1/d30/266494
[root@fir-md1-s2 1]# llog_reader changelog_users &amp;gt;/tmp/mdt1_changelog_users
[root@fir-md1-s2 1]# mv changelog_catalog chagnelog_catalog.bak
[root@fir-md1-s2 1]# mv changelog_users changelog_users.bak
[root@fir-md1-s2 1]# llog_reader O/1/d29/680157 &amp;gt;/tmp/mdt1_680157
[root@fir-md1-s2 1]# ls O/1/d5/5
O/1/d5/5
[root@fir-md1-s2 1]# mv O/1/d5/5 O/1/d5/5.bak
[root@fir-md1-s2 1]# mv O/1/d29/680157 O/1/d29/680157.bak
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I&apos;m attaching the corresponding files for investigation.&lt;/p&gt;


&lt;p&gt;This MDT is far from being full:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fir-md1-s2 ~]# df -h -t lustre
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/md1-rbod1-mdt1   18T  826G   16T   5% /mnt/fir/mdt/1
[root@fir-md1-s2 ~]# df -i -t lustre
Filesystem                    Inodes     IUsed     IFree IUse% Mounted on
/dev/mapper/md1-rbod1-mdt1 288005760 111873725 176132035   39% /mnt/fir/mdt/1
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Do you have an idea of what happened? How can we avoid this issue in the future? Thanks!&lt;/p&gt;</description>
                <environment>CentOS 7.6</environment>
        <key id="58421">LU-13372</key>
            <summary>fir-MDD0001: there are no more free slots in catalog changelog_catalog</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="sthiell">Stephane Thiell</reporter>
                        <labels>
                    </labels>
                <created>Wed, 18 Mar 2020 18:44:47 +0000</created>
                <updated>Fri, 25 Mar 2022 14:24:33 +0000</updated>
                                            <version>Lustre 2.12.3</version>
                    <version>Lustre 2.12.4</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
<comment id="265593" author="sthiell" created="Wed, 18 Mar 2020 18:47:56 +0000"  >&lt;p&gt;Additional information: Robinhood was running a bit behind on this changelog reader, but by less than 48 hours:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2020/03/18 08:37:25 [9994/1] STATS | ChangeLog reader #1:
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; fs_name&#160; &#160; = &#160; fir
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; mdt_name &#160; = &#160; MDT0001
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; reader_id&#160; = &#160; cl3
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; records read&#160; &#160; &#160; &#160; = 22713006
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; interesting records = 36470
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; suppressed records&#160; = 20170655
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; records pending &#160; &#160; = 15325
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; status&#160; &#160; &#160; &#160; &#160; &#160; &#160; = busy
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; last received: rec_id=33853864223, rec_time=2020/03/16 15:18:15.110496, received at 2020/03/17 21:04:32.001942
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; &#160; &#160; receive speed: 0.00 rec/sec, log/real time ratio: 0.00
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; last pushed: rec_id=33844744542, rec_time=2020/03/16 15:13:27.045815, pushed at 2020/03/18 08:35:18.442094
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; &#160; &#160; push speed: 3781.59 rec/sec, log/real time ratio: 0.12
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; last committed: rec_id=33844741446, rec_time=2020/03/16 15:13:26.962887, committed at 2020/03/18 08:35:18.462873
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; &#160; &#160; commit speed: 4479.34 rec/sec, log/real time ratio: 0.14
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; last cleared: rec_id=33844716263, rec_time=2020/03/16 15:13:26.236501, cleared at 2020/03/18 08:35:17.940335
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; &#160; &#160; clear speed: 4844.73 rec/sec, log/real time ratio: 0.15
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; ChangeLog stats:
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; MARK: 0, CREAT: 2520877, MKDIR: 296, HLINK: 0, SLINK: 83, MKNOD: 0, UNLNK: 2521000
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; RMDIR: 33, RENME: 543, RNMTO: 0, OPEN: 0, CLOSE: 10253618, LYOUT: 82, TRUNC: 7416466
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; SATTR: 7, XATTR: 0, HSM: 0, MTIME: 1, CTIME: 0, ATIME: 0, MIGRT: 0, FLRW: 0, RESYNC: 0
2020/03/18 08:37:25 [9994/1] STATS |&#160; &#160; GXATR: 0, NOPEN: 0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;BTW, we still have a lot of problems with the changelog readers in 2.12, leading to stuck readers... We have several workarounds in place (like restarting Robinhood), but that isn&apos;t always enough to keep up.&lt;/p&gt;</comment>
                            <comment id="265658" author="pjones" created="Thu, 19 Mar 2020 14:41:24 +0000"  >&lt;p&gt;Mike&lt;/p&gt;

&lt;p&gt;Could you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
<comment id="265862" author="tappro" created="Mon, 23 Mar 2020 13:55:59 +0000"  >&lt;p&gt;That could be related to an index miscalculation when the llog catalog wraps, but that is just a quick thought. I am analyzing the supplied files right now.&lt;/p&gt;</comment>
<comment id="266013" author="sthiell" created="Tue, 24 Mar 2020 16:28:37 +0000"  >&lt;p&gt;Thanks Mike for taking a look! Now that we are using 2.12.4 on all servers for this system (Fir), we see occasional backtraces like the following one on the MDS, and Robinhood still has problems processing changelogs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Mar 24 06:42:49 fir-md1-s2 kernel: LNet: Service thread pid 22333 was inactive for 202.34s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Mar 24 06:42:49 fir-md1-s2 kernel: Pid: 22333, comm: mdt00_007 3.10.0-957.27.2.el7_lustre.pl2.x86_64 #1 SMP Thu Nov 7 15:26:16 PST 2019
Mar 24 06:42:49 fir-md1-s2 kernel: Call Trace:
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffb0788c47&amp;gt;] call_rwsem_down_write_failed+0x17/0x30
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e58649&amp;gt;] llog_cat_id2handle+0x69/0x5b0 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e59670&amp;gt;] llog_cat_cancel_records+0x120/0x3c0 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc1891264&amp;gt;] llog_changelog_cancel_cb+0x104/0x2a0 [mdd]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e536df&amp;gt;] llog_process_thread+0x82f/0x18e0 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e5484c&amp;gt;] llog_process_or_fork+0xbc/0x450 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e59b49&amp;gt;] llog_cat_process_cb+0x239/0x250 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e536df&amp;gt;] llog_process_thread+0x82f/0x18e0 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e5484c&amp;gt;] llog_process_or_fork+0xbc/0x450 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e565b1&amp;gt;] llog_cat_process_or_fork+0x1e1/0x360 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc0e5675e&amp;gt;] llog_cat_process+0x2e/0x30 [obdclass]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc188da34&amp;gt;] llog_changelog_cancel.isra.16+0x54/0x1c0 [mdd]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc188fb60&amp;gt;] mdd_changelog_llog_cancel+0xd0/0x270 [mdd]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc1892c13&amp;gt;] mdd_changelog_clear+0x503/0x690 [mdd]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc1895d03&amp;gt;] mdd_iocontrol+0x163/0x540 [mdd]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc17179dc&amp;gt;] mdt_iocontrol+0x5ec/0xb00 [mdt]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc1718374&amp;gt;] mdt_set_info+0x484/0x490 [mdt]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc11e464a&amp;gt;] tgt_request_handle+0xada/0x1570 [ptlrpc]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc118743b&amp;gt;] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffc118ada4&amp;gt;] ptlrpc_main+0xb34/0x1470 [ptlrpc]
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffb04c2e81&amp;gt;] kthread+0xd1/0xe0
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffb0b77c24&amp;gt;] ret_from_fork_nospec_begin+0xe/0x21
Mar 24 06:42:49 fir-md1-s2 kernel:  [&amp;lt;ffffffffffffffff&amp;gt;] 0xffffffffffffffff
Mar 24 06:42:49 fir-md1-s2 kernel: LustreError: dumping log to /tmp/lustre-log.1585057369.22333
Mar 24 06:42:59 fir-md1-s2 kernel: Lustre: fir-MDT0001: Connection restored to eceee209-ec05-4 (at 10.50.6.54@o2ib2)
Mar 24 06:43:20 fir-md1-s2 kernel: LNet: Service thread pid 22333 completed after 233.43s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="322453" author="sergey" created="Wed, 12 Jan 2022 15:49:29 +0000"  >&lt;p&gt;I faced similar LBUG, see details in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-15444&quot; title=&quot;replay-single test_70c: LBUG in llog_osd_write_rec&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-15444&quot;&gt;LU-15444&lt;/a&gt;.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                                        </outwardlinks>
                                                                <inwardlinks description="is related to">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="34476" name="fir-md1-s2_console-changelogs_no_more_free_slots_2_12_4.log" size="25088" author="sthiell" created="Wed, 18 Mar 2020 18:40:48 +0000"/>
                            <attachment id="34473" name="mdt1_680157" size="16919609" author="sthiell" created="Wed, 18 Mar 2020 18:44:14 +0000"/>
                            <attachment id="34475" name="mdt1_changelog_catalog" size="6546931" author="sthiell" created="Wed, 18 Mar 2020 18:43:35 +0000"/>
                            <attachment id="34474" name="mdt1_changelog_users" size="222" author="sthiell" created="Wed, 18 Mar 2020 18:43:38 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00vsn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>