<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:39:15 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
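For instance, assuming the standard JIRA XML issue view URL for this issue, the full request would be:
https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-4054/LU-4054.xml?field=key&field=summary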
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4054] HSM llog_handle leak</title>
                <link>https://jira.whamcloud.com/browse/LU-4054</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Running sanity-hsm I see that we&apos;re leaking an llog_handle. You can see it by mounting, running the full sanity-hsm suite, then unmounting, or by mounting, running only tests 107 222a 222b 224 226 250, then unmounting.&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# export OSTSIZE=1000000
# export agt1_HOST=t
# export PTLDEBUG=&quot;vfstrace rpctrace dlmtrace neterror ha config ioctl super hsm other malloc info&quot;
# export DEBUG_SIZE=2048

# dir=/tmp/hsm-leak.$$
# mkdir $dir || exit 1
# echo $dir
# rm -rf /tmp/debug-leak.* /tmp/lustre-log.* /tmp/sanity-hsm.log /tmp/test_logs/
# export ONLY=&quot;107 222a 222b 224 226 250&quot;
# dmesg -c &amp;gt; /dev/null
# llmount.sh
# sh ~/lustre-release/lustre/tests/sanity-hsm.sh
# llmountcleanup.sh
# dmesg &amp;gt; $dir/dmesg
# mv /tmp/debug-leak.* /tmp/lustre-log.* /tmp/sanity-hsm.log /tmp/test_logs/ $dir/
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Stopping clients: t /mnt/lustre (opts:)
Stopping clients: t /mnt/lustre2 (opts:)

...
only running test 107 222a 222b 224 226 250
excepting tests: 31a 34 35 36 200 201 221 223a 223b 225
Killing existing copytools on t
Set HSM on and start
Changed after 0s: from &apos;&apos; to &apos;stopped&apos;
Waiting 20 secs for update
Updated after 7s: wanted &apos;enabled&apos; got &apos;enabled&apos;
Start copytool
Purging archive on t
Starting copytool agt1 on t
Set sanity-hsm HSM policy

== sanity-hsm test 107: Copytool re-register after MDS restart == 16:48:19 (1380750499)
Wakeup copytool agt1 on t
Changed after 0s: from &apos;&apos; to &apos;STARTED&apos;
Waiting 100 secs for update
Failing mds1 on t
Stopping /mnt/mds1 (opts:) on t
reboot facets: mds1
Failover mds1 to t
16:48:31 (1380750511) waiting for t network 900 secs ...
16:48:31 (1380750511) network interface is UP
mount facets: mds1
Starting mds1:   -o loop /tmp/lustre-mdt1 /mnt/mds1
Started lustre-MDT0000
t: mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
Changed after 0s: from &apos;&apos; to &apos;STARTED&apos;
Waiting 100 secs for update
Copytool is stopped on t
Resetting fail_loc on all nodes...done.
PASS 107 (16s)


== sanity-hsm test 222a: Changelog for explicit restore == 16:48:35 (1380750515)
Purging archive on t
Starting copytool agt1 on t
lhsmtool_posix[18491]: action=1 src=d0.sanity-hsm/d222/f.sanity-hsm.222a dst=/mnt/lustre/d0.sanity-hsm/d222/f.sanity-hsm.222a mount_point=/mnt/lustre
lhsmtool_posix[18491]: importing &apos;/mnt/lustre/d0.sanity-hsm/d222/f.sanity-hsm.222a&apos; from &apos;/tmp/arc1/d0.sanity-hsm/d222/f.sanity-hsm.222a&apos;
lhsmtool_posix[18491]: imported &apos;/mnt/lustre/d0.sanity-hsm/d222/f.sanity-hsm.222a&apos; from &apos;/tmp/arc1/0006/0000/0400/0000/0002/0000/0x200000400:0x6:0x0&apos;==&apos;/tmp/arc1/d0.sanity-hsm/d222/f.sanity-hsm.222a&apos;
lhsmtool_posix[18491]: process finished, errs: 0 major, 0 minor, rc=0 (Success)
mdd.lustre-MDT0000.changelog_mask=+hsm
Changed after 0s: from &apos;&apos; to &apos;STARTED&apos;
Waiting 100 secs for update
Copytool is stopped on t
lustre-MDT0000: Deregistered changelog user &apos;cl1&apos;
Resetting fail_loc on all nodes...done.
PASS 222a (3s)

== sanity-hsm test 222b: Changelog for implicit restore == 16:48:38 (1380750518)
Purging archive on t
Starting copytool agt1 on t
mdd.lustre-MDT0000.changelog_mask=+hsm
Changed after 0s: from &apos;&apos; to &apos;STARTED&apos;
Waiting 100 secs for update
8e700881220db1269deb14847f7ffd4d  /mnt/lustre/d0.sanity-hsm/d222/f.sanity-hsm.222b
Copytool is stopped on t
lustre-MDT0000: Deregistered changelog user &apos;cl2&apos;
Resetting fail_loc on all nodes...done.
PASS 222b (2s)


== sanity-hsm test 224: Changelog for remove == 16:48:40 (1380750520)
Purging archive on t
Starting copytool agt1 on t
mdd.lustre-MDT0000.changelog_mask=+hsm
Changed after 0s: from &apos;&apos; to &apos;STARTED&apos;
Waiting 100 secs for update
Copytool is stopped on t
lustre-MDT0000: Deregistered changelog user &apos;cl3&apos;
Resetting fail_loc on all nodes...done.
PASS 224 (3s)


== sanity-hsm test 226: changelog for last rm/mv with exiting archive == 16:48:43 (1380750523)
Purging archive on t
Starting copytool agt1 on t
0x200000401:0x7:0x0
mdd.lustre-MDT0000.changelog_mask=+hsm
Changed after 0s: from &apos;&apos; to &apos;STARTED&apos;
Waiting 100 secs for update
Changed after 0s: from &apos;&apos; to &apos;STARTED&apos;
Waiting 100 secs for update
Copytool is stopped on t
lustre-MDT0000: Deregistered changelog user &apos;cl4&apos;
Resetting fail_loc on all nodes...done.
PASS 226 (4s)


== sanity-hsm test 250: Coordinator max request == 16:48:47 (1380750527)
Purging archive on t
Starting copytool agt1 on t
mdt.lustre-MDT0000.hsm_control=disabled
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.87602 s, 5.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.69116 s, 6.2 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.75049 s, 6.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.83329 s, 5.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.66484 s, 6.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.65743 s, 6.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.65997 s, 6.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.65964 s, 6.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.65768 s, 6.3 MB/s
mdt.lustre-MDT0000.hsm_control=enabled
max=3 started=2 waiting=6
...
max=3 started=0 waiting=0
Copytool is stopped on t
Resetting fail_loc on all nodes...done.
PASS 250 (107s)
== sanity-hsm test complete, duration 145 sec == 16:50:34 (1380750634)
Stopping clients: t /mnt/lustre2 (opts:)
Stopping client t /mnt/lustre2 opts:
Stopping clients: t /mnt/lustre (opts:-f)
Stopping client t /mnt/lustre opts:-f
Stopping clients: t /mnt/lustre2 (opts:-f)
Stopping /mnt/mds1 (opts:-f) on t
Stopping /mnt/ost1 (opts:-f) on t
Stopping /mnt/ost2 (opts:-f) on t
LustreError: 21879:0:(class_obd.c:730:cleanup_obdclass()) obd_memory max: 70547704, leaked: 8376
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# cd /tmp/hsm-leak.*
# perl /root/lustre-release/lustre/tests/leak_finder.pl debug-leak.* 2&amp;gt;&amp;amp;1 | grep Leak
*** Leak: 184 bytes allocated at ffff88019d691840 (llog.c:llog_alloc_handle:66, debug file line 77331)
*** Leak: 8192 bytes allocated at ffff88018e8c2000 (llog.c:llog_init_handle:215, debug file line 77334)
&lt;/pre&gt;
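&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;For reference, the two leaked allocations pair up with one llog handle: llog_alloc_handle() provides the 184-byte struct llog_handle and llog_init_handle() attaches the 8192-byte log header, and both are normally freed together by llog_close(). Below is a minimal sketch of that expected pairing, with simplified signatures; it is illustrative only, not the actual HSM code path:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/* Illustrative sketch only: simplified Lustre llog API, not the actual
 * HSM/coordinator path.  llog_open() allocates the handle via
 * llog_alloc_handle() (the 184-byte allocation above), and
 * llog_init_handle() allocates the log header (the 8192-byte one). */
static int llog_user_example(const struct lu_env *env, struct llog_ctxt *ctxt)
{
        struct llog_handle *llh;
        int rc;

        rc = llog_open(env, ctxt, &amp;amp;llh, NULL, NULL, LLOG_OPEN_EXISTS);
        if (rc != 0)
                return rc;      /* no handle was allocated on failure */

        rc = llog_init_handle(env, llh, LLOG_F_IS_PLAIN, NULL);
        if (rc != 0)
                goto out_close;

        /* ... process log records ... */

out_close:
        /* Missing this llog_close() on some exit path leaks both the
         * handle and its header, exactly as leak_finder.pl reports. */
        llog_close(env, llh);
        return rc;
}&lt;/pre&gt;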
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="21246">LU-4054</key>
            <summary>HSM llog_handle leak</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="jhammond">John Hammond</assignee>
                                    <reporter username="jhammond">John Hammond</reporter>
                        <labels>
                            <label>HSM</label>
                    </labels>
                <created>Thu, 3 Oct 2013 01:32:48 +0000</created>
                <updated>Fri, 4 Oct 2013 15:33:05 +0000</updated>
                            <resolved>Fri, 4 Oct 2013 15:32:52 +0000</resolved>
                                    <version>Lustre 2.5.0</version>
                                    <fixVersion>Lustre 2.5.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>1</watches>
                    <comments>
                            <comment id="68322" author="jhammond" created="Thu, 3 Oct 2013 21:10:15 +0000"  >&lt;p&gt;Please see &lt;a href=&quot;http://review.whamcloud.com/7847&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/7847&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="68373" author="jhammond" created="Fri, 4 Oct 2013 15:32:52 +0000"  >&lt;p&gt;Patch landed to master.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw4p3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>10871</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>