<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:48:37 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-11980] When attempting to enable changelogs, the system locks up with &quot;no more free slots in catalog&quot;</title>
                <link>https://jira.whamcloud.com/browse/LU-11980</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We have been working with changelogs recently, and on this particular file system, when changelogs were enabled, retrieving changelog information was very slow when specifying a short range greater than the index. Because of that slow response, changelogs have been registered and deregistered a number of times.&lt;/p&gt;

&lt;p&gt;Today we attempted to re-register changelogs; the current index never changed, and after just a couple of minutes the system froze.&lt;/p&gt;

&lt;p&gt;One thing just occurred to me: Should we clear the changelogs before re-registering? We haven&apos;t done that.&lt;/p&gt;

&lt;p&gt;dmesg after a reboot (without deregistering changelogs) produced the following output prior to the hang:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[Wed Feb 20 10:42:41 2019] LDISKFS-fs (dm-0): recovery complete
[Wed Feb 20 10:42:41 2019] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache
[Wed Feb 20 10:42:43 2019] Lustre: MGS: Connection restored to f81759a0-aecf-98f8-d1e0-a6cf70343cdd (at 0@lo)
[Wed Feb 20 10:42:43 2019] LustreError: 137-5: gscratch-MDT0000_UUID: not available for connect from 192.168.163.154@o2ib8 (no target). If you are running an HA pair check that the target is mounted on the oth.
[Wed Feb 20 10:42:43 2019] Lustre: gscratch-MDT0000: Not available for connect from 192.168.146.62@o2ib7 (not set up)
[Wed Feb 20 10:42:43 2019] Lustre: MGS: Connection restored to (at 192.168.129.20@o2ib4)
[Wed Feb 20 10:42:43 2019] Lustre: Skipped 21 previous similar messages
[Wed Feb 20 10:42:43 2019] sd 7:0:0:0: Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatical
[Wed Feb 20 10:42:44 2019] Lustre: gscratch-MDT0000: Not available for connect from 192.168.133.66@o2ib4 (not set up)
[Wed Feb 20 10:42:44 2019] Lustre: Skipped 24 previous similar messages
[Wed Feb 20 10:42:44 2019] Lustre: MGS: Connection restored to (at 192.168.163.242@o2ib8)
[Wed Feb 20 10:42:44 2019] Lustre: Skipped 71 previous similar messages
[Wed Feb 20 10:42:46 2019] Lustre: MGS: Connection restored to (at 192.168.128.111@o2ib4)
[Wed Feb 20 10:42:46 2019] Lustre: Skipped 144 previous similar messages
[Wed Feb 20 10:42:50 2019] Lustre: MGS: Connection restored to 670522c1-3eaa-288f-b21f-556fb85bf7bb (at 192.168.81.28@tcp2)
[Wed Feb 20 10:42:50 2019] Lustre: Skipped 336 previous similar messages
[Wed Feb 20 10:42:51 2019] Lustre: 5685:0:(llog_cat.c:956:llog_cat_reverse_process()) catalog 0x7:10 crosses index zero
[Wed Feb 20 10:42:51 2019] Lustre: gscratch-MDD0000: changelog on
[Wed Feb 20 10:42:52 2019] Lustre: gscratch-MDT0000: Will be in recovery for at least 5:00, or until 6653 clients reconnect
[Wed Feb 20 10:42:58 2019] Lustre: gscratch-MDT0000: Connection restored to (at 192.168.146.61@o2ib7)
[Wed Feb 20 10:42:58 2019] Lustre: Skipped 1892 previous similar messages
[Wed Feb 20 10:43:14 2019] Lustre: gscratch-MDT0000: Connection restored to 41d8e069-ae86-1be2-dc41-7835d4a0c892 (at 192.168.133.43@o2ib4)
[Wed Feb 20 10:43:14 2019] Lustre: Skipped 1630 previous similar messages
[Wed Feb 20 10:43:46 2019] Lustre: MGS: Connection restored to c858508f-9415-a50e-e377-4c75ecc97e6a (at 192.168.160.40@o2ib8)
[Wed Feb 20 10:43:46 2019] Lustre: Skipped 7165 previous similar messages
[Wed Feb 20 10:43:53 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 3:59. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:43:53 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 3:58. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:43:53 2019] Lustre: Skipped 68 previous similar messages
[Wed Feb 20 10:43:54 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 3:57. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:43:54 2019] Lustre: Skipped 127 previous similar messages
[Wed Feb 20 10:43:56 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 3:55. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:43:56 2019] Lustre: Skipped 261 previous similar messages
[Wed Feb 20 10:43:58 2019] LustreError: 5851:0:(ldlm_lib.c:2778:target_queue_recovery_request()) @@@ dropping resent queued req req@ffff880f9ad07500 x1623562148962904/t0(737400040385) o35-&amp;gt;c1db237f-2447-dcf4-1
[Wed Feb 20 10:44:00 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 3:51. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:44:00 2019] Lustre: Skipped 656 previous similar messages
[Wed Feb 20 10:44:00 2019] LustreError: 5851:0:(ldlm_lib.c:2778:target_queue_recovery_request()) @@@ dropping resent queued req req@ffff880fe010ac50 x1623924373420932/t0(737400041101) o35-&amp;gt;28ad855d-72b3-7388-1
[Wed Feb 20 10:44:02 2019] LustreError: 6378:0:(ldlm_lib.c:2778:target_queue_recovery_request()) @@@ dropping resent queued req req@ffff880f9a40ce00 x1623286549665444/t0(737400045945) o101-&amp;gt;ed31116f-3909-bacf1
[Wed Feb 20 10:44:08 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 3:43. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:44:08 2019] Lustre: Skipped 591 previous similar messages
[Wed Feb 20 10:44:24 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 3:27. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:44:24 2019] Lustre: Skipped 1055 previous similar messages
[Wed Feb 20 10:44:32 2019] LustreError: 6261:0:(ldlm_lib.c:2778:target_queue_recovery_request()) @@@ dropping resent queued req req@ffff8807c7e6a700 x1624190622706812/t0(737400044160) o101-&amp;gt;48ebd6c1-d5c4-15b01
[Wed Feb 20 10:44:32 2019] LustreError: 6261:0:(ldlm_lib.c:2778:target_queue_recovery_request()) Skipped 1 previous similar message
[Wed Feb 20 10:44:37 2019] LustreError: 5851:0:(ldlm_lib.c:2778:target_queue_recovery_request()) @@@ dropping resent queued req req@ffff880f99ce9800 x1623562475389600/t0(737400021311) o35-&amp;gt;f9a9f0dd-fe45-980e-1
[Wed Feb 20 10:44:50 2019] Lustre: gscratch-MDT0000: Connection restored to 4f538c64-8ac4-c14d-4e39-050d775a27fb (at 192.168.145.204@o2ib7)
[Wed Feb 20 10:44:50 2019] Lustre: Skipped 7950 previous similar messages
[Wed Feb 20 10:44:56 2019] Lustre: gscratch-MDT0000: Recovery already passed deadline 2:55. If you do not want to wait more, please abort the recovery by force.
[Wed Feb 20 10:44:56 2019] Lustre: Skipped 3929 previous similar messages
[Wed Feb 20 10:44:59 2019] LustreError: 5851:0:(ldlm_lib.c:2778:target_queue_recovery_request()) @@@ dropping resent queued req req@ffff8800af8b2700 x1623562148962904/t0(737400040385) o35-&amp;gt;c1db237f-2447-dcf4-1
[Wed Feb 20 10:45:33 2019] LustreError: 6313:0:(ldlm_lib.c:2778:target_queue_recovery_request()) @@@ dropping resent queued req req@ffff880f99236900 x1624190622706812/t0(737400044160) o101-&amp;gt;48ebd6c1-d5c4-15b01
[Wed Feb 20 10:45:33 2019] LustreError: 6313:0:(ldlm_lib.c:2778:target_queue_recovery_request()) Skipped 3 previous similar messages&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Snippet from syslog:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Feb 20 10:18:09 gmds1 dbus[3789]: [system] Successfully activated service &apos;org.freedesktop.problems&apos;
Feb 20 10:18:49 gmds1 kernel: [2070231.187993] Lustre: gscratch-MDD0000: changelog on
Feb 20 10:18:49 gmds1 kernel: [2070231.196897] Lustre: 7104:0:(llog_cat.c:93:llog_cat_new_log()) gscratch-MDD0000: there are no more free slots in catalog
Feb 20 10:18:49 gmds1 kernel: [2070231.210447] LustreError: 6714:0:(mdd_dir.c:887:mdd_changelog_ns_store()) gscratch-MDD0000: cannot store changelog record: type = 8, name = &apos;rest.colvars.state.old&apos;, t = [0x2001755d7:0xdc8:0x0], p = [0x200171e8a:0x12d2d:0x0]: rc = -28
Feb 20 10:18:49 gmds1 kernel: [2070231.221771] LustreError: 7227:0:(llog_cat.c:385:llog_cat_current_log()) gscratch-MDD0000: next log does not exist!
Feb 20 10:18:49 gmds1 kernel: [2070231.249353] LustreError: 6714:0:(mdd_dir.c:887:mdd_changelog_ns_store()) Skipped 3 previous similar messages
Feb 20 10:18:49 gmds1 kernel: [2070231.697900] Lustre: 7237:0:(llog_cat.c:93:llog_cat_new_log()) gscratch-MDD0000: there are no more free slots in catalog
Feb 20 10:18:49 gmds1 kernel: [2070231.711550] Lustre: 7237:0:(llog_cat.c:93:llog_cat_new_log()) Skipped 278 previous similar messages
Feb 20 10:18:49 gmds1 kernel: [2070231.723222] LustreError: 6711:0:(mdd_dir.c:887:mdd_changelog_ns_store()) gscratch-MDD0000: cannot store changelog record: type = 8, name = &apos;rest.colvars.state.old&apos;, t = [0x20017748a:0x3684:0x0], p = [0x200171e8a:0x12c65:0x0]: rc = -28
Feb 20 10:18:49 gmds1 kernel: [2070231.749406] LustreError: 6711:0:(mdd_dir.c:887:mdd_changelog_ns_store()) Skipped 62 previous similar messages
Feb 20 10:18:49 gmds1 kernel: [2070231.814879] LustreError: 6909:0:(llog_cat.c:385:llog_cat_current_log()) gscratch-MDD0000: next log does not exist!
Feb 20 10:18:49 gmds1 kernel: [2070231.828077] LustreError: 6909:0:(llog_cat.c:385:llog_cat_current_log()) Skipped 43 previous similar messages
Feb 20 10:18:50 gmds1 kernel: [2070232.707459] Lustre: 6628:0:(llog_cat.c:93:llog_cat_new_log()) gscratch-MDD0000: there are no more free slots in catalog
Feb 20 10:18:50 gmds1 kernel: [2070232.721229] Lustre: 6628:0:(llog_cat.c:93:llog_cat_new_log()) Skipped 273 previous similar messages
Feb 20 10:18:51 gmds1 kernel: [2070232.998096] LustreError: 6711:0:(mdd_dir.c:887:mdd_changelog_ns_store()) gscratch-MDD0000: cannot store changelog recor&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>Dell R720 servers running TOSS 3.1-4.1 (RHEL 7.3) with Lustre 2.8.0.9, IB-attached to a DDN 7700.</environment>
        <key id="54930">LU-11980</key>
            <summary>When attempting to enable changelogs, the system locks up with &quot;no more free slots in catalog&quot;</summary>
                <type id="9" iconUrl="https://jira.whamcloud.com/images/icons/issuetypes/undefined.png">Question/Request</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="10000">Done</resolution>
                                        <assignee username="tappro">Mikhail Pershin</assignee>
                                    <reporter username="jamervi">Joe Mervini</reporter>
                        <labels>
                    </labels>
                <created>Wed, 20 Feb 2019 18:05:39 +0000</created>
                <updated>Sun, 16 Jan 2022 08:35:33 +0000</updated>
                            <resolved>Sun, 16 Jan 2022 08:35:33 +0000</resolved>
                                    <version>Lustre 2.8.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="242493" author="pjones" created="Fri, 22 Feb 2019 01:39:28 +0000"  >&lt;p&gt;Mike&lt;/p&gt;

&lt;p&gt;What do you advise here?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="242782" author="tappro" created="Tue, 26 Feb 2019 07:34:44 +0000"  >&lt;p&gt;Yes, you have to clear changelog manually on re-registration if you want such behavior, or it can be managed by policy engine, e.g. Robinhood. At the moment you need to purge changelog records. That can be done with &lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;lfs changelog_clear &amp;lt;mdtname&amp;gt; &amp;lt;user ID&amp;gt; 0&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;command, where &apos;user ID&apos; is the registration ID, so register first if needed.&lt;/p&gt;
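&lt;p&gt;For illustration only, a typical register-then-clear sequence on the MDS might look like the following (the MDT name is taken from this ticket; the user ID &apos;cl1&apos; is just an example of what changelog_register assigns):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# register a changelog user; prints the assigned user ID, e.g. &apos;cl1&apos;
lctl --device gscratch-MDT0000 changelog_register
# confirm the user and the current index
lctl get_param mdd.gscratch-MDT0000.changelog_users
# clear records for that user; an endrec of 0 means up to the current last record
lfs changelog_clear gscratch-MDT0000 cl1 0&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>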
                            <comment id="243864" author="jamervi" created="Wed, 13 Mar 2019 20:00:48 +0000"  >&lt;p&gt;Just reviewing the manual, lctl changelog_deregister is supposed to clear the changelogs. That aside, right now when we attempt to reregister a changelog user the system will freeze before we can issue the lfs changelog_clear command.&#160; How do you suggest we proceed?&lt;/p&gt;</comment>
                            <comment id="243985" author="tappro" created="Fri, 15 Mar 2019 11:04:32 +0000"  >&lt;p&gt;Joe, are there client/server logs available at the moment when system is freezing? Also could you get the output of &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;lctl get_param &apos;mdd.*.changelog*&apos;&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;on the MDS?&lt;/p&gt;
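&lt;p&gt;For reference, with one registered user the output typically looks something like this (values are illustrative only; &apos;cl1&apos; is a hypothetical user ID):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;mdd.gscratch-MDT0000.changelog_mask=
MARK CREAT MKDIR HLINK SLINK MKNOD UNLNK RMDIR RENME RNMTO OPEN CLOSE LYOUT TRUNC SATTR XATTR HSM MTIME CTIME MIGRT
mdd.gscratch-MDT0000.changelog_users=current index: 8
ID    index
cl1   0&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>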
                            <comment id="244130" author="jamervi" created="Mon, 18 Mar 2019 17:21:58 +0000"  >&lt;p&gt;Mikhail,&lt;/p&gt;

&lt;p&gt;We haven&apos;t made any attempts to re-register the changelog user since the last time it froze, because it is a production system. I won&apos;t have any client-side logs, but I&apos;ll look at what might be available on the server side.&lt;/p&gt;

&lt;p&gt;Here is the output you requested:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;MARK CREAT MKDIR HLINK SLINK MKNOD UNLNK RMDIR RENME RNMTO OPEN CLOSE LYOUT TRUNC SATTR XATTR HSM MTIME CTIME MIGRT
mdd.gscratch-MDT0000.changelog_users=current index: 11390996174&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="244872" author="tappro" created="Fri, 29 Mar 2019 05:32:56 +0000"  >&lt;p&gt;Joe, that means you have no current changelog users and changelogs are supposed to be cleared when last user de-registered but that didn&apos;t happen. If there will be opportunity to try registration again, maybe at maintenance time, could you try that again and collect lustre logs on server? Set maximum debug level before that with &lt;tt&gt;lctl set_param debug=-1&lt;/tt&gt;&lt;/p&gt;

&lt;p&gt;If you are unable to register a user and clear the logs manually, it is possible to do that during downtime/maintenance via the following steps (a rough sketch follows the list):&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;remount the MDT using the backing (ldiskfs) filesystem.&lt;/li&gt;
	&lt;li&gt;use &lt;tt&gt;llog_reader &amp;lt;mountpoint&amp;gt;/changelog_catalog&lt;/tt&gt; to parse the catalog and extract the &lt;tt&gt;path=PATH&lt;/tt&gt; components of the sublog records, then remove all of those files from the filesystem.&lt;/li&gt;
	&lt;li&gt;remove changelog_catalog.&lt;/li&gt;
	&lt;li&gt;do the same with changelog_users.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;This will remove all changelogs; the catalog will be recreated when the server starts.&lt;/p&gt;
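&lt;p&gt;A rough sketch of those steps (the device name and mount point are hypothetical; verify every path before removing anything):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# with the MDT stopped, mount the backing filesystem directly
mount -t ldiskfs /dev/dm-0 /mnt/mdt
# list the sublog files referenced by the catalog
llog_reader /mnt/mdt/changelog_catalog | grep &apos;path=&apos;
# remove each file named in a path= line (relative to /mnt/mdt), e.g.
#   rm /mnt/mdt/&amp;lt;PATH&amp;gt;
# then remove the catalog and the users file
rm /mnt/mdt/changelog_catalog /mnt/mdt/changelog_users
umount /mnt/mdt&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>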
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>Lustre-2.8.0</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|i00bxb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>