<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:28:16 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2795] WRF runs causing Lustre clients to lose memory</title>
                <link>https://jira.whamcloud.com/browse/LU-2795</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;At our center, we are running a Lustre 2.1.2 file system with Lustre 2.1.2 clients on all of the compute nodes of our Penguin cluster. Recently, a user has been performing WRF runs where he uses a special feature of WRF to offload all of the I/O onto a single node, which improves his I/O performance dramatically, but results in the node losing ~1 GB of memory to &quot;Inactive&quot; after each run. In our epilogue, we have a script checking for available free memory above a specified percentage, and every job that this user runs results in the node being set to offline due to this 1 GB of Inactive memory. &lt;/p&gt;

&lt;p&gt;Here is an example of the output from drop_caches showing before and after the epilogue starts on one of these nodes: &lt;/p&gt;

&lt;p&gt;Before:&lt;br/&gt;
MemTotal: 15.681 GB&lt;br/&gt;
MemFree: 6.495 GB&lt;br/&gt;
Cached: 6.206 GB&lt;br/&gt;
Active: 1.395 GB&lt;br/&gt;
Inactive: 6.247 GB&lt;br/&gt;
Dirty: 0.000 GB&lt;br/&gt;
Mapped: 0.003 GB&lt;br/&gt;
Slab: 1.391 GB&lt;/p&gt;

&lt;p&gt;After:&lt;br/&gt;
MemTotal: 15.681 GB&lt;br/&gt;
MemFree: 14.003 GB&lt;br/&gt;
Cached: 0.007 GB&lt;br/&gt;
Active: 0.134 GB&lt;br/&gt;
Inactive: 1.309 GB&lt;br/&gt;
Dirty: 0.000 GB&lt;br/&gt;
Mapped: 0.003 GB&lt;br/&gt;
Slab: 0.082 GB&lt;/p&gt;


&lt;p&gt;While looking for possible solutions to this problem, I stumbled upon a recent HPDD-Discuss question that was entitled &quot;Possible file page leak in Lustre 2.1.2&quot; which was very similar to our problem. It was suggested that the issue had already been discovered and resolved in &lt;a href=&quot;http://jira.whamcloud.com/browse/LU-1576&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;http://jira.whamcloud.com/browse/LU-1576&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This ticket suggests that the resolution was included as part of Lustre 2.1.3, so we tested this by installing the Lustre 2.1.3 client packages on some of our compute nodes and allowing the WRF job to run on these nodes. However, even after the upgrade to Lustre 2.1.3, we still saw the inactive memory at the end of the job. Do we need to upgrade our Lustre installation on the OSSes and MDS to Lustre 2.1.3 to fix this problem, or do you have any other suggestions? &lt;/p&gt;

&lt;p&gt;Any help that you could provide us with would be appreciated! &lt;/p&gt;</description>
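The epilogue check described in the report (offline the node unless free memory is above a specified percentage of total) can be sketched as a small script. This is a hypothetical sketch, not the site's actual epilogue: the 80% threshold and the `meminfo.txt` file name are illustrative, and the sample numbers are the post-run MemTotal/MemFree values quoted in this ticket; on a real node the script would read `/proc/meminfo` directly.

```shell
# Hypothetical epilogue-style memory check (illustrative, not the
# site's actual script). Sample data uses the post-run values from
# this ticket; a real check would read /proc/meminfo instead.
cat > meminfo.txt <<'EOF'
MemTotal:       16442916 kB
MemFree:        14180204 kB
EOF

mem_kb() {
    # Extract the kB value for a given meminfo field name.
    awk -v f="$1:" '$1 == f { print $2 }' meminfo.txt
}

total=$(mem_kb MemTotal)
free=$(mem_kb MemFree)
pct=$(( free * 100 / total ))
threshold=80   # illustrative cutoff, not from the ticket

if [ "$pct" -ge "$threshold" ]; then
    echo "node ok: ${pct}% free"
else
    echo "node offline: only ${pct}% free"
fi
```

With the sample values the node passes (about 86% free); after a WRF run that strands ~1 GB in Inactive, the same check is what sets the node offline.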
                <environment></environment>
        <key id="17529">LU-2795</key>
            <summary>WRF runs causing Lustre clients to lose memory</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="6">Not a Bug</resolution>
                                        <assignee username="green">Oleg Drokin</assignee>
                                    <reporter username="adizon">Archie Dizon</reporter>
                        <labels>
                    </labels>
                <created>Mon, 11 Feb 2013 15:59:24 +0000</created>
                <updated>Tue, 10 Apr 2018 17:11:07 +0000</updated>
                            <resolved>Tue, 10 Apr 2018 17:11:07 +0000</resolved>
                                                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="52179" author="cliffw" created="Mon, 11 Feb 2013 18:18:49 +0000"  >&lt;p&gt; It is possible you are seeing uncommitted writes to the OSTs - if you wait 5 - 10  minutes, can the cache be cleaned? It is possible you are seeing a leak, however this does not appear to match &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1576&quot; title=&quot;client sluggish after running lpurge&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1576&quot;&gt;&lt;del&gt;LU-1576&lt;/del&gt;&lt;/a&gt;. The &apos;slabtop&apos; tool may provide some additional data on memory consumption.&lt;br/&gt;
If this is a leak, is there any way you can run this on a single node, and/or provide us with the workload? We may need to reproduce this one locally to fix the issue. &lt;/p&gt;</comment>
                            <comment id="52490" author="cliffw" created="Fri, 15 Feb 2013 17:30:08 +0000"  >&lt;p&gt;Can you update us on your status?&lt;/p&gt;</comment>
                            <comment id="52496" author="adizon" created="Fri, 15 Feb 2013 18:03:19 +0000"  >&lt;p&gt;In regards to the question of waiting for a few minutes, the answer is no.&lt;br/&gt;
Even if we wait for hours, the inactive memory is never given back to the&lt;br/&gt;
system, we are forced to reboot these nodes to return them with their full&lt;br/&gt;
memory again. However, as you can see from the output in my last message,&lt;br/&gt;
we start off with &amp;gt; 6 GB of inactive memory at the beginning of the&lt;br/&gt;
epilogue and ~ 1 GB of inactive memory after the epilogue has waited&lt;br/&gt;
approximately 30 seconds. Although, no matter how long we wait, that 1 GB&lt;br/&gt;
of  memory is never returned to the system&lt;/p&gt;

&lt;p&gt;We had planned to set up a run of WRF to test the memory usage on our test&lt;br/&gt;
cluster, but this has gotten delayed as all of us were busy during the&lt;br/&gt;
week. We will have to wait until next week to get you some data on memory&lt;br/&gt;
usage.&lt;/p&gt;

&lt;p&gt;Having talked with someone much more familiar with WRF and its dependencies&lt;br/&gt;
than myself, it sounds like running the WRF software the way it is being&lt;br/&gt;
run here may be a fairly big hassle. In other words, getting it&lt;br/&gt;
running for you locally may be fairly difficult. We will have to see if&lt;br/&gt;
going down that road is necessary once we give you some more data.&lt;/p&gt;

&lt;p&gt;In the meantime, I&apos;m curious as to how WhamCloud has determined that our&lt;br/&gt;
problem does not match up with &lt;a href=&quot;http://jira.whamcloud.com/browse/LU-1576&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;http://jira.whamcloud.com/browse/LU-1576&lt;/a&gt;.&lt;br/&gt;
The symptoms are identical, and it was suggested in the HPDD discussion&lt;br/&gt;
list that this was an occurrence in Lustre 2.1.2 for some irregular I/O&lt;br/&gt;
patterns. What do they see as different between our problem and the one&lt;br/&gt;
described by LLNL on the list? For my future reference, I would be&lt;br/&gt;
interested to know how they determined that so I could use their methods&lt;br/&gt;
for better diagnosing Lustre problems in the future.&lt;/p&gt;

&lt;p&gt;I&apos;ll have more to share with you next week.&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;</comment>
                            <comment id="52503" author="cliffw" created="Fri, 15 Feb 2013 18:45:10 +0000"  >&lt;p&gt;You indicated that you had installed 2.1.3, which contains the fix for &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1576&quot; title=&quot;client sluggish after running lpurge&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1576&quot;&gt;&lt;del&gt;LU-1576&lt;/del&gt;&lt;/a&gt;, this was our main indication. The &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1576&quot; title=&quot;client sluggish after running lpurge&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1576&quot;&gt;&lt;del&gt;LU-1576&lt;/del&gt;&lt;/a&gt; fix mostly deals with readdir pages, so unless your workload includes a lot of readdirs your have likely a different problem. &lt;/p&gt;

&lt;p&gt;Are you saying that dropping cache does not free the 1GB of memory? &lt;/p&gt;</comment>
                            <comment id="52698" author="adizon" created="Tue, 19 Feb 2013 13:28:53 +0000"  >&lt;p&gt;Yes, we had tested installing 2.1.3 on a couple of our client systems to&lt;br/&gt;
see if that would fix the problem, but we were still seeing the issue on&lt;br/&gt;
those nodes with the Lustre 2.1.3 client installed on them. Thanks for&lt;br/&gt;
clarifying that, and it doesn&apos;t appear that this code would&lt;br/&gt;
be performing a great deal of readdirs, probably not the same memory leak.&lt;/p&gt;

&lt;p&gt;Correct, dropping cache does not free the 1 GB of memory. Our epilogue&lt;br/&gt;
script attempts to drop cache twice, and after the second time it compares&lt;br/&gt;
the amount of free memory before determining if it can return the compute&lt;br/&gt;
node to service.&lt;/p&gt;

&lt;p&gt;We are going to run the WRF job with Lustre at a higher logging level and&lt;br/&gt;
using the leak_finder.pl script provided by WhamCloud. We will send&lt;br/&gt;
whatever we find along to you.&lt;/p&gt;</comment>
                            <comment id="52704" author="cliffw" created="Tue, 19 Feb 2013 14:22:41 +0000"  >&lt;p&gt;Thanks, let us know how it goes.&lt;/p&gt;</comment>
                            <comment id="52765" author="adizon" created="Wed, 20 Feb 2013 15:50:45 +0000"  >&lt;p&gt;Customer ran there WRF job with the Lustre debugging set to gather malloc information, and it does appear that we have found a leak in Lustre. Here were the steps we followed:&lt;br/&gt;
1) sudo lctl set_param debug=+malloc&lt;br/&gt;
2) sudo lctl set_param debug_mb=512&lt;br/&gt;
3) * let the WRF job run *&lt;br/&gt;
    Epilogue sets the node offline (1.62 GB of memory set to inactive)&lt;br/&gt;
4) sudo lctl dk /tmp/lustre_debug&lt;br/&gt;
5) perl leak_finder.pl /tmp/lustre_debug 2&amp;gt;&amp;amp;1 | grep &quot;Leak&quot;&lt;/p&gt;

&lt;p&gt;From that last command, here is what we found:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Leak: 1080 bytes allocated at ffff8101d5eae140 (super25.c:ll_alloc_inode:56, debug file line 1745506)&lt;/li&gt;
	&lt;li&gt;Leak: 104 bytes allocated at ffff8101cecdbdc0 (dcache.c:ll_set_dd:192, debug file line 1745508)&lt;/li&gt;
	&lt;li&gt;Leak: 1080 bytes allocated at ffff810214eb3ac0 (super25.c:ll_alloc_inode:56, debug file line 1745551)&lt;/li&gt;
	&lt;li&gt;Leak: 104 bytes allocated at ffff8101d7523840 (dcache.c:ll_set_dd:192, debug file line 1745553)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;The Lustre documentation states that this is a circular log, so if a small leak shows up here, small amounts of memory were likely being lost throughout the run, adding up to our overall large loss.&lt;/p&gt;

&lt;p&gt;We will attach the lustre_debug log to this case for you to analyze as well. It does look as though we may be closing in on the problem now.&lt;/p&gt;


&lt;p&gt;Additionally, I am going to attach the /proc/slabinfo for the end of the&lt;br/&gt;
WRF run as you had previously requested, along with the /proc/meminfo&lt;br/&gt;
before, during, and after the WRF run.&lt;/p&gt;

&lt;p&gt;cat /proc/slabinfo&lt;br/&gt;
slabinfo - version: 2.1&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;name &amp;lt;active_objs&amp;gt; &amp;lt;num_objs&amp;gt; &amp;lt;objsize&amp;gt; &amp;lt;objperslab&amp;gt;&lt;br/&gt;
&amp;lt;pagesperslab&amp;gt; : tunables &amp;lt;limit&amp;gt; &amp;lt;batchcount&amp;gt; &amp;lt;sharedfactor&amp;gt; : slabdata&lt;br/&gt;
&amp;lt;active_slabs&amp;gt; &amp;lt;num_slabs&amp;gt; &amp;lt;sharedavail&amp;gt;&lt;br/&gt;
ll_qunit_cache 0 0 112 34 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lmv_objects 0 0 96 40 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ccc_req_kmem 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ccc_session_kmem 105 132 176 22 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 6 6 0&lt;br/&gt;
ccc_thread_kmem 112 121 336 11 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 11 11 0&lt;br/&gt;
ccc_object_kmem 0 0 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ccc_lock_kmem 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
vvp_session_kmem 105 148 104 37 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
vvp_thread_kmem 112 126 440 9 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 14 14 0&lt;br/&gt;
vvp_page_kmem 0 0 80 48 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ll_rmtperm_hash_cache 0 0 256 15 1 : tunables 120 60&lt;br/&gt;
8 : slabdata 0 0 0&lt;br/&gt;
ll_remote_perm_cache 0 0 40 92 1 : tunables 120 60&lt;br/&gt;
8 : slabdata 0 0 0&lt;br/&gt;
ll_file_data 0 0 192 20 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lustre_inode_cache 3 21 1088 7 2 : tunables 24 12 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
lov_oinfo 0 0 320 12 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lov_lock_link_kmem 0 0 32 112 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lovsub_req_kmem 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lovsub_object_kmem 0 0 240 16 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lovsub_lock_kmem 0 0 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lovsub_page_kmem 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lov_req_kmem 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lov_session_kmem 105 120 384 10 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 12 12 0&lt;br/&gt;
lov_thread_kmem 112 121 336 11 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 11 11 0&lt;br/&gt;
lov_object_kmem 0 0 200 19 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lov_lock_kmem 0 0 104 37 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
lov_page_kmem 0 0 48 77 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
osc_req_kmem 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
osc_session_kmem 105 130 296 13 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 10 10 0&lt;br/&gt;
osc_thread_kmem 112 126 216 18 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 7 7 0&lt;br/&gt;
osc_object_kmem 0 0 136 28 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
osc_lock_kmem 0 0 184 21 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
osc_page_kmem 0 0 264 15 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
llcd_cache 0 0 3952 1 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
interval_node 22 90 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
ldlm_locks 43 63 576 7 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 9 9 0&lt;br/&gt;
ldlm_resources 41 72 320 12 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 6 6 0&lt;br/&gt;
cl_page_kmem 0 0 184 21 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
cl_lock_kmem 0 0 216 18 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
cl_env_kmem 105 132 176 22 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 6 6 0&lt;br/&gt;
capa_cache 0 0 184 21 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ll_import_cache 0 0 1424 5 2 : tunables 24 12 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ll_obdo_cache 0 0 208 19 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ll_obd_dev_cache 17 17 7048 1 2 : tunables 8 4 0&lt;br/&gt;
: slabdata 17 17 0&lt;br/&gt;
SDP 0 0 1792 2 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
fib6_nodes 7 118 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
ip6_dst_cache 7 36 320 12 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
ndisc_cache 1 15 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
RAWv6 11 12 960 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
UDPv6 0 0 896 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
tw_sock_TCPv6 0 0 192 20 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
request_sock_TCPv6 0 0 192 20 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
TCPv6 0 0 1728 4 2 : tunables 24 12 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
nfs_direct_cache 0 0 136 28 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
nfs_write_data 36 36 832 9 2 : tunables 54 27 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
nfs_read_data 32 36 832 9 2 : tunables 54 27 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
nfs_inode_cache 123 195 1032 3 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 65 65 0&lt;br/&gt;
nfs_page 0 0 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
rpc_buffers 8 8 2048 2 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
rpc_tasks 20 20 384 10 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
rpc_inode_cache 30 30 768 5 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 6 6 0&lt;br/&gt;
scsi_cmd_cache 5 10 384 10 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 1 1 2&lt;br/&gt;
sgpool-128 32 32 4096 1 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 32 32 0&lt;br/&gt;
sgpool-64 32 32 2048 2 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 16 16 0&lt;br/&gt;
sgpool-32 32 32 1024 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 8 8 0&lt;br/&gt;
sgpool-16 32 32 512 8 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
sgpool-8 32 60 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 3 4 0&lt;br/&gt;
scsi_io_context 0 0 112 34 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ib_mad 2048 2296 448 8 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 287 287 0&lt;br/&gt;
ip_fib_alias 14 59 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
ip_fib_hash 14 59 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
UNIX 9 33 704 11 2 : tunables 54 27 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
flow_cache 0 0 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
msi_cache 9 59 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
cfq_ioc_pool 13 60 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
cfq_pool 11 54 216 18 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
crq_pool 4 96 80 48 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 2 0&lt;br/&gt;
deadline_drq 0 0 80 48 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
as_arq 0 0 96 40 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
mqueue_inode_cache 1 4 896 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
isofs_inode_cache 0 0 608 6 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
hugetlbfs_inode_cache 1 7 576 7 1 : tunables 54 27&lt;br/&gt;
8 : slabdata 1 1 0&lt;br/&gt;
ext2_inode_cache 91 145 720 5 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 29 29 0&lt;br/&gt;
ext2_xattr 0 0 88 44 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
dnotify_cache 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
dquot 0 0 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
eventpoll_pwq 5 106 72 53 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
eventpoll_epi 5 40 192 20 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
inotify_event_cache 0 0 40 92 1 : tunables 120 60&lt;br/&gt;
8 : slabdata 0 0 0&lt;br/&gt;
inotify_watch_cache 0 0 72 53 1 : tunables 120 60&lt;br/&gt;
8 : slabdata 0 0 0&lt;br/&gt;
kioctx 0 0 320 12 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
kiocb 0 0 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
fasync_cache 0 0 24 144 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
shmem_inode_cache 1360 1370 768 5 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 274 274 0&lt;br/&gt;
posix_timers_cache 0 0 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
uid_cache 2 30 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
tcp_bind_bucket 28 448 32 112 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
inet_peer_cache 0 0 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
secpath_cache 0 0 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
xfrm_dst_cache 0 0 384 10 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
ip_dst_cache 107 180 384 10 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 18 18 0&lt;br/&gt;
arp_cache 53 75 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 5 5 0&lt;br/&gt;
RAW 9 10 768 5 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
UDP 10 15 768 5 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
tw_sock_TCP 20 40 192 20 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 2 0&lt;br/&gt;
request_sock_TCP 0 0 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
TCP 32 35 1600 5 2 : tunables 24 12 8&lt;br/&gt;
: slabdata 7 7 0&lt;br/&gt;
blkdev_ioc 13 118 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
blkdev_queue 17 20 1576 5 2 : tunables 24 12 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
blkdev_requests 7 14 272 14 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 1 1 2&lt;br/&gt;
biovec-256 7 7 4096 1 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 7 7 0&lt;br/&gt;
biovec-128 7 8 2048 2 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
biovec-64 7 8 1024 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
biovec-16 7 30 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
biovec-4 7 118 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
biovec-1 7 404 16 202 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
bio 262 300 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 10 10 2&lt;br/&gt;
utrace_engine_cache 0 0 64 59 1 : tunables 120 60&lt;br/&gt;
8 : slabdata 0 0 0&lt;br/&gt;
utrace_cache 0 0 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
sock_inode_cache 90 108 640 6 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 18 18 0&lt;br/&gt;
skbuff_fclone_cache 14 14 512 7 1 : tunables 54 27&lt;br/&gt;
8 : slabdata 2 2 0&lt;br/&gt;
skbuff_head_cache 2847 3060 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 204 204 0&lt;br/&gt;
file_lock_cache 1 22 176 22 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
Acpi-Operand 1848 2360 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 40 40 0&lt;br/&gt;
Acpi-ParseExt 0 0 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
Acpi-Parse 0 0 40 92 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
Acpi-State 0 0 80 48 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
Acpi-Namespace 839 896 32 112 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 8 8 0&lt;br/&gt;
delayacct_cache 379 531 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 9 9 0&lt;br/&gt;
taskstats_cache 19 53 72 53 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
proc_inode_cache 146 180 592 6 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 30 30 0&lt;br/&gt;
sigqueue 53 96 160 24 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
radix_tree_node 9320 15316 536 7 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 2188 2188 0&lt;br/&gt;
bdev_cache 6 12 832 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
sysfs_dir_cache 5366 5412 88 44 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 123 123 0&lt;br/&gt;
mnt_cache 42 60 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 4 4 0&lt;br/&gt;
inode_cache 1231 1274 560 7 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 182 182 0&lt;br/&gt;
dentry_cache 3139 4140 216 18 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 230 230 0&lt;br/&gt;
filp 200 570 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 38 38 0&lt;br/&gt;
names_cache 9 9 4096 1 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 9 9 0&lt;br/&gt;
avc_node 30 106 72 53 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
selinux_inode_security 3124 4032 80 48 1 : tunables 120 60&lt;br/&gt;
8 : slabdata 84 84 0&lt;br/&gt;
key_jar 4 20 192 20 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 1 1 0&lt;br/&gt;
idr_layer_cache 199 238 528 7 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 34 34 0&lt;br/&gt;
buffer_head 148 320 96 40 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 8 8 0&lt;br/&gt;
mm_struct 24 32 896 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 8 8 0&lt;br/&gt;
vm_area_struct 428 1430 176 22 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 65 65 1&lt;br/&gt;
fs_cache 50 177 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
files_cache 35 60 768 5 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 12 12 0&lt;br/&gt;
signal_cache 367 378 832 9 2 : tunables 54 27 8&lt;br/&gt;
: slabdata 42 42 0&lt;br/&gt;
sighand_cache 357 360 2112 3 2 : tunables 24 12 8&lt;br/&gt;
: slabdata 120 120 0&lt;br/&gt;
task_struct 368 370 1920 2 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 185 185 0&lt;br/&gt;
anon_vma 294 1008 24 144 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 7 7 0&lt;br/&gt;
pid 393 531 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 9 9 0&lt;br/&gt;
shared_policy_node 0 0 48 77 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
numa_policy 72 432 24 144 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 3 3 0&lt;br/&gt;
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-131072 2 2 131072 1 32 : tunables 8 4 0&lt;br/&gt;
: slabdata 2 2 0&lt;br/&gt;
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-65536 6 6 65536 1 16 : tunables 8 4 0&lt;br/&gt;
: slabdata 6 6 0&lt;br/&gt;
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-32768 7 7 32768 1 8 : tunables 8 4 0&lt;br/&gt;
: slabdata 7 7 0&lt;br/&gt;
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-16384 2070 2070 16384 1 4 : tunables 8 4 0&lt;br/&gt;
: slabdata 2070 2070 0&lt;br/&gt;
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-8192 2026 2026 8192 1 2 : tunables 8 4 0&lt;br/&gt;
: slabdata 2026 2026 0&lt;br/&gt;
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-4096 911 911 4096 1 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 911 911 0&lt;br/&gt;
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-2048 1080 1120 2048 2 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 560 560 83&lt;br/&gt;
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-1024 1429 1756 1024 4 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 439 439 83&lt;br/&gt;
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-512 1607 2024 512 8 1 : tunables 54 27 8&lt;br/&gt;
: slabdata 253 253 2&lt;br/&gt;
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-256 3144 3495 256 15 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 233 233 0&lt;br/&gt;
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-64 8683 22243 64 59 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 377 377 0&lt;br/&gt;
size-32(DMA) 0 0 32 112 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 0 0 0&lt;br/&gt;
size-128 3423 7410 128 30 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 247 247 1&lt;br/&gt;
size-32 54883 59024 32 112 1 : tunables 120 60 8&lt;br/&gt;
: slabdata 527 527 0&lt;br/&gt;
kmem_cache 182 182 2688 1 1 : tunables 24 12 8&lt;br/&gt;
: slabdata 182 182 0&lt;/li&gt;
&lt;/ol&gt;



&lt;p&gt;cat /proc/meminfo &amp;lt;before job begins&amp;gt;&lt;/p&gt;

&lt;p&gt;MemTotal: 16442916 kB&lt;br/&gt;
MemFree: 15650204 kB&lt;br/&gt;
Buffers: 200 kB&lt;br/&gt;
Cached: 303428 kB&lt;br/&gt;
SwapCached: 0 kB&lt;br/&gt;
Active: 331180 kB&lt;br/&gt;
Inactive: 206008 kB&lt;br/&gt;
HighTotal: 0 kB&lt;br/&gt;
HighFree: 0 kB&lt;br/&gt;
LowTotal: 16442916 kB&lt;br/&gt;
LowFree: 15650204 kB&lt;br/&gt;
SwapTotal: 4225084 kB&lt;br/&gt;
SwapFree: 4225084 kB&lt;br/&gt;
Dirty: 0 kB&lt;br/&gt;
Writeback: 0 kB&lt;br/&gt;
AnonPages: 235168 kB&lt;br/&gt;
Mapped: 8452 kB&lt;br/&gt;
Slab: 77148 kB&lt;br/&gt;
PageTables: 2740 kB&lt;br/&gt;
NFS_Unstable: 0 kB&lt;br/&gt;
Bounce: 0 kB&lt;br/&gt;
CommitLimit: 12446540 kB&lt;br/&gt;
Committed_AS: 857620 kB&lt;br/&gt;
VmallocTotal: 34359738367 kB&lt;br/&gt;
VmallocUsed: 80412 kB&lt;br/&gt;
VmallocChunk: 34359657895 kB&lt;br/&gt;
HugePages_Total: 0&lt;br/&gt;
HugePages_Free: 0&lt;br/&gt;
HugePages_Rsvd: 0&lt;br/&gt;
Hugepagesize: 2048 kB&lt;/p&gt;


&lt;p&gt;cat /proc/meminfo &amp;lt;during the WRF run&amp;gt;&lt;br/&gt;
MemTotal: 16442916 kB&lt;br/&gt;
MemFree: 360168 kB&lt;br/&gt;
Buffers: 160 kB&lt;br/&gt;
Cached: 5678292 kB&lt;br/&gt;
SwapCached: 2230028 kB&lt;br/&gt;
Active: 8199640 kB&lt;br/&gt;
Inactive: 6118380 kB&lt;br/&gt;
HighTotal: 0 kB&lt;br/&gt;
HighFree: 0 kB&lt;br/&gt;
LowTotal: 16442916 kB&lt;br/&gt;
LowFree: 360168 kB&lt;br/&gt;
SwapTotal: 4225084 kB&lt;br/&gt;
SwapFree: 1557472 kB&lt;br/&gt;
Dirty: 9940 kB&lt;br/&gt;
Writeback: 7072 kB&lt;br/&gt;
AnonPages: 6545728 kB&lt;br/&gt;
Mapped: 9612 kB&lt;br/&gt;
Slab: 1349160 kB&lt;br/&gt;
PageTables: 19704 kB&lt;br/&gt;
NFS_Unstable: 0 kB&lt;br/&gt;
Bounce: 0 kB&lt;br/&gt;
CommitLimit: 12446540 kB&lt;br/&gt;
Committed_AS: 9478760 kB&lt;br/&gt;
VmallocTotal: 34359738367 kB&lt;br/&gt;
VmallocUsed: 80412 kB&lt;br/&gt;
VmallocChunk: 34359657895 kB&lt;br/&gt;
HugePages_Total: 0&lt;br/&gt;
HugePages_Free: 0&lt;br/&gt;
HugePages_Rsvd: 0&lt;br/&gt;
Hugepagesize: 2048 kB&lt;/p&gt;


&lt;p&gt;cat /proc/meminfo &amp;lt;after the WRF run&amp;gt;&lt;br/&gt;
MemTotal: 16442916 kB&lt;br/&gt;
MemFree: 14180204 kB&lt;br/&gt;
Buffers: 208 kB&lt;br/&gt;
Cached: 14928 kB&lt;br/&gt;
SwapCached: 1788636 kB&lt;br/&gt;
Active: 122928 kB&lt;br/&gt;
Inactive: 1682064 kB&lt;br/&gt;
HighTotal: 0 kB&lt;br/&gt;
HighFree: 0 kB&lt;br/&gt;
LowTotal: 16442916 kB&lt;br/&gt;
LowFree: 14180204 kB&lt;br/&gt;
SwapTotal: 4225084 kB&lt;br/&gt;
SwapFree: 2213112 kB&lt;br/&gt;
Dirty: 68 kB&lt;br/&gt;
Writeback: 0 kB&lt;br/&gt;
AnonPages: 26628 kB&lt;br/&gt;
Mapped: 2668 kB&lt;br/&gt;
Slab: 86572 kB&lt;br/&gt;
PageTables: 672 kB&lt;br/&gt;
NFS_Unstable: 0 kB&lt;br/&gt;
Bounce: 0 kB&lt;br/&gt;
CommitLimit: 12446540 kB&lt;br/&gt;
Committed_AS: 279580 kB&lt;br/&gt;
VmallocTotal: 34359738367 kB&lt;br/&gt;
VmallocUsed: 80412 kB&lt;br/&gt;
VmallocChunk: 34359657895 kB&lt;br/&gt;
HugePages_Total: 0&lt;br/&gt;
HugePages_Free: 0&lt;br/&gt;
HugePages_Rsvd: 0&lt;br/&gt;
Hugepagesize: 2048 kB&lt;/p&gt;</comment>
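The before/during/after snapshots above can be compared mechanically. A minimal sketch, assuming two saved /proc/meminfo captures; the file names are illustrative, and the sample values are the MemFree/Inactive/Slab numbers from the before and after captures quoted in this comment:

```shell
# Hypothetical sketch: diff two /proc/meminfo snapshots to see which
# fields did not return to their pre-job values. File names are
# illustrative; capture with e.g. `cat /proc/meminfo > before.txt`.
cat > before.txt <<'EOF'
MemFree:        15650204 kB
Inactive:         206008 kB
Slab:              77148 kB
EOF
cat > after.txt <<'EOF'
MemFree:        14180204 kB
Inactive:        1682064 kB
Slab:              86572 kB
EOF

# First pass stores the "before" values; second pass prints the
# delta (after - before) in kB for each shared field.
delta=$(awk 'NR==FNR { b[$1] = $2; next }
     $1 in b { printf "%s %d kB\n", $1, $2 - b[$1] }' before.txt after.txt)
echo "$delta"
```

Fields whose delta stays large even after dropping caches (here Inactive, up by roughly 1.4 GB) are the ones worth chasing.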
                            <comment id="52767" author="adizon" created="Wed, 20 Feb 2013 16:15:18 +0000"  >&lt;p&gt;NOTE: The debug log is to large to attach to this case. Here is a link instead.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.dropbox.com/s/vwuklioioytcl7e/lustre_debug&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://www.dropbox.com/s/vwuklioioytcl7e/lustre_debug&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;</comment>
                            <comment id="52768" author="cliffw" created="Wed, 20 Feb 2013 16:59:10 +0000"  >&lt;p&gt;Thank you, I am engaging further engineering resources now.&lt;/p&gt;</comment>
                            <comment id="52857" author="green" created="Fri, 22 Feb 2013 01:20:49 +0000"  >&lt;p&gt;So, just to reconfirm: when you run this app several times on the same client, it adds another 1 GB of inactive data each time, so eventually the node will die from OOM, right?&lt;br/&gt;
If you unmount the Lustre fs on this client after the run and then mount it back, instead of rebooting, is the memory reclaimed?&lt;/p&gt;

&lt;p&gt;I would not put too much stock in the leaks you see reported; those readings are useless unless taken after unmount, since every bit of memory that is allocated but not yet freed (because it is still in use) will show up as leaked.&lt;/p&gt;</comment>
                            <comment id="225647" author="adilger" created="Tue, 10 Apr 2018 17:11:07 +0000"  >&lt;p&gt;Closing this old ticket.&lt;/p&gt;

&lt;p&gt;Just because memory is not &quot;Free&quot; doesn&apos;t mean that it is &quot;leaked&quot;.  The kernel caches pages even after they are no longer in use, until free memory is exhausted, at which point old cached data is reclaimed.&lt;/p&gt;

&lt;p&gt;The main concern would be if the node actually runs out of memory and applications start failing (OOM killer, or &lt;tt&gt;-ENOMEM&lt;/tt&gt; (-12) memory allocation errors).&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10030" key="com.atlassian.jira.plugin.system.customfieldtypes:labels">
                        <customfieldname>Epic/Theme</customfieldname>
                        <customfieldvalues>
                                        <label>Performance</label>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzviyv:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>6768</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>