<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:52:45 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-5585] MDS became unresponsive, clients hanging until MDS fail over</title>
                <link>https://jira.whamcloud.com/browse/LU-5585</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This morning some of our clients were hanging (others had not been checked at that time), the active MDS was unresponsive and flooding the console with stack traces. We had to fail over to the second MDS to get the file system back.&lt;/p&gt;

&lt;p&gt;Looking at the system logs, we see a large number of these messages: &lt;br/&gt;
&lt;tt&gt;kernel: socknal_sd00_02: page allocation failure. order:2, mode:0x20&lt;/tt&gt; all followed by many stack traces; full log attached. Our monitoring shows that the memory was mainly used by buffers, but this had been the case for all of last week already and was stable, only slowly increasing. After the restart the memory used by buffers quickly increased to about 60% and currently seems to be stable around there...&lt;/p&gt;

&lt;p&gt;Just before these page allocation failure messages we noticed a few client reconnect messages, but have not been able to find any network problems so far. Since the restart of the MDT, no unexpected client reconnects have been seen.&lt;/p&gt;

&lt;p&gt;We are running lustre 2.5.2 + 4 patches as recommended in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5529&quot; title=&quot;LBUG when unmounting MDT&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5529&quot;&gt;&lt;del&gt;LU-5529&lt;/del&gt;&lt;/a&gt; and &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5514&quot; title=&quot;After upgrade from 1.8.7 to 2.5.2 stack trace cfs_hash_bd_lookup_intent&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5514&quot;&gt;&lt;del&gt;LU-5514&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We&apos;ve been hammering the MDS a bit since the upgrade: creating files, stat&apos;ing many files/directories from many clients, and removing many files, but I would still expect the MDS not to fall over like this.&lt;/p&gt;

&lt;p&gt;Is this a problem/memory leak in Lustre or something else? Could it be related to different compile options when compiling Lustre? We did compile the version on the MDS in-house with these patches, and there is always a chance we didn&apos;t quite use the same compile-time options that the automatic build process would use...&lt;/p&gt;

&lt;p&gt;What can we do to debug this further and avoid it in the future?&lt;/p&gt;
</description>
                <environment></environment>
        <key id="26311">LU-5585</key>
            <summary>MDS became unresponsive, clients hanging until MDS fail over</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="bobijam">Zhenyu Xu</assignee>
                                    <reporter username="ferner">Frederik Ferner</reporter>
                        <labels>
                    </labels>
                <created>Thu, 4 Sep 2014 16:50:38 +0000</created>
                <updated>Fri, 12 Aug 2022 21:54:10 +0000</updated>
                            <resolved>Fri, 12 Aug 2022 21:54:10 +0000</resolved>
                                    <version>Lustre 2.5.2</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="93299" author="pjones" created="Fri, 5 Sep 2014 04:59:45 +0000"  >&lt;p&gt;Bobijam&lt;/p&gt;

&lt;p&gt;This ticket is perhaps related to the other one just assigned to you. Could you please advise?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="93301" author="adilger" created="Fri, 5 Sep 2014 07:10:24 +0000"  >&lt;p&gt;If you run into this case again, please try to log into the MDS and collect /proc/slabinfo and /proc/meminfo to see where all of the memory is allocated.&lt;/p&gt;

&lt;p&gt;Ideally, you could also enable the allocation debugging (&lt;tt&gt;lctl set_param debug=+alloc&lt;/tt&gt;), increase the maximum debug log size (&lt;tt&gt;lctl set_param debug_mb=200&lt;/tt&gt;), and then unmount the MDS to see where it is freeing memory, and dump the debug log.  Unfortunately this may not capture as much debug logging as one might want because it doesn&apos;t have enough memory to store the log itself.&lt;/p&gt;</comment>
                            <comment id="93329" author="ferner" created="Fri, 5 Sep 2014 16:06:07 +0000"  >&lt;p&gt;In this case we were not able to log into the MDS anymore once we noticed the problem, so we couldn&apos;t collect these. Equally, the serial console was unusable due to the large number of stack traces printed there.&lt;/p&gt;

&lt;p&gt;Would it be worth setting the additional debugging and debug log size now and collect the information in /proc/slabinfo and /proc/meminfo just before unmounting during a maintenance window, i.e. before it re-occurs? Then unmount and collect the debug log? &lt;/p&gt;</comment>
                            <comment id="93344" author="adilger" created="Fri, 5 Sep 2014 17:20:41 +0000"  >&lt;p&gt;Frederik, it would definitely be useful to see what is in /proc/slabinfo and /proc/meminfo when the MDS is running low on memory.  It may be best to just dump this periodically to another system so that it is captured as close to running out of memory as possible if you don&apos;t notice this in advance.&lt;/p&gt;</comment>
                            <comment id="93676" author="ferner" created="Wed, 10 Sep 2014 10:19:19 +0000"  >&lt;p&gt;A quick update. Monitoring the memory usage on the MDS over the last week, we&apos;ve not seen this issue again. See the attached memory usage graph: the original issue happened early on Thursday 3; before that, buffer memory usage seems to have only gone up, while since then the memory/buffer usage has also decreased frequently.&lt;/p&gt;

&lt;p&gt;Even though the memory usage wasn&apos;t that bad, I took the opportunity of a scheduled maintenance yesterday to collect a debug log  just after unmounting the MDT, with malloc added to the debug and debug_mb increased. I&apos;ve also collected meminfo/slabinfo just before unmounting the MDT, these are attached as well in case there is anything useful in there.&lt;/p&gt;</comment>
                            <comment id="93781" author="ferner" created="Thu, 11 Sep 2014 14:08:11 +0000"  >&lt;p&gt;Andreas, All,&lt;/p&gt;

&lt;p&gt;the original issue, where all/some clients appear to be hanging on most metadata operations, is back; this time the memory on the MDS doesn&apos;t look bad, so that might have been something else.&lt;/p&gt;

&lt;p&gt;Now that we&apos;re taking more time to debug it, the symptoms appear to be that many clients hang, for example when doing &apos;ls -l&apos; on some directories (not the top-level directory of the file system). The ls will eventually complete, but it takes long enough for users to phone us, have a detailed conversation about what they&apos;re doing and where, and for us to start looking into the machine, and it still hasn&apos;t completed... This is even for directories with only 2 subdirectories and no files.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;time ls -l /mnt/lustre03/i04
total 8
drwxrwxr-x+ 11 root       dls_sysadmin 4096 Aug 14 10:52 data
drwxrwsr-x+ 10 epics_user i04_data     4096 Aug  2  2011 epics

real	4m48.799s
user	0m0.001s
sys	0m0.001s
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The MDS is reporting a number of these messages: &lt;tt&gt;Lustre: lock timed out (enqueued at 1410413746, 300s ago)&lt;/tt&gt;, along with a few threads completing after 200+s. System load is currently around 70; in top there are three processes at the top, usually in state &apos;D&apos;: flush-253:6, kcopyd, jbd2/dm-6-8 (see top output below).&lt;/p&gt;

&lt;p&gt;It all seems to have started sometime last night with this: &lt;tt&gt;Sep 11 02:26:13 cs04r-sc-mds03-01 kernel: INFO: task mdt_rdpg00_000:12042 blocked for more than 120 seconds.&lt;/tt&gt; Nothing in the logs after that until 04:00, but it seems to have gotten worse after that.&lt;/p&gt;

&lt;p&gt;I can&apos;t rule out hardware issues on the disk backend but so far have not found any error messages that confirm that.&lt;/p&gt;

&lt;p&gt;Syslog from the MDS for the relevant time will be attached; here&apos;s an extract. There&apos;s nothing Lustre- or network-related in syslog on the clients or the OSSes.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Sep 11 04:00:10 cs04r-sc-mds03-01 kernel: LNet: Service thread pid 22232 completed after 217.24s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
Sep 11 04:00:10 cs04r-sc-mds03-01 kernel: LNet: Skipped 7 previous similar messages
Sep 11 04:47:43 cs04r-sc-mds03-01 kernel: LustreError: 0:0:(ldlm_lockd.c:344:waiting_locks_callback()) ### lock callback timer expired after 151s: evicting client at 10.144.140.46@o2ib  ns: mdt-lustre03-MDT0000_UUID lock: ffff880d7abef3c0/0x4a9a61dbe320f47a lrc: 3/0,0 mode: PR/PR res: [0x4a40692:0xb304ffb3:0x0].0 bits 0x13 rrc: 3 type: IBT flags: 0x60200000000020 nid: 10.144.140.46@o2ib remote: 0xc6d2a2809bd5a9f1 expref: 84365 pid: 20014 timeout: 4398798140 lvb_type: 0
Sep 11 04:47:43 cs04r-sc-mds03-01 kernel: LustreError: 28875:0:(client.c:1079:ptlrpc_import_delay_req()) @@@ IMP_CLOSED   req@ffff8809f7420400 x1478844569897188/t0(0) o104-&amp;gt;lustre03-MDT0000@10.144.140.46@o2ib:15/16 lens 296/224 e 0 to 0 dl 0 ref 1 fl Rpc:N/0/ffffffff rc 0/-1
Sep 11 04:47:43 cs04r-sc-mds03-01 kernel: LustreError: 28875:0:(ldlm_lockd.c:662:ldlm_handle_ast_error()) ### client (nid 10.144.140.46@o2ib) returned 0 from blocking AST ns: mdt-lustre03-MDT0000_UUID lock: ffff880168665880/0x4a9a61dbe320f9dd lrc: 1/0,0 mode: --/CR res: [0x4a40695:0xb304ffb6:0x0].0 bits 0x5 rrc: 2 type: IBT flags: 0x64a01000000020 nid: 10.144.140.46@o2ib remote: 0xc6d2a2809bd5aa06 expref: 60513 pid: 12032 timeout: 4398949080 lvb_type: 0
Sep 11 04:49:10 cs04r-sc-mds03-01 kernel: Lustre: lustre03-MDT0000: Client b4d423ad-3219-f806-0fd2-5a2845b5faad (at 10.144.140.46@o2ib) reconnecting
Sep 11 04:49:10 cs04r-sc-mds03-01 kernel: Lustre: Skipped 67 previous similar messages
Sep 11 05:08:41 cs04r-sc-mds03-01 kernel: LNet: Service thread pid 12048 completed after 321.32s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
Sep 11 05:08:41 cs04r-sc-mds03-01 kernel: LNet: Skipped 22 previous similar messages
Sep 11 06:09:12 cs04r-sc-mds03-01 kernel: LNet: Service thread pid 12036 completed after 329.22s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).
Sep 11 06:09:12 cs04r-sc-mds03-01 kernel: LNet: Skipped 1 previous similar message
Sep 11 06:40:46 cs04r-sc-mds03-01 kernel: Lustre: lock timed out (enqueued at 1410413746, 300s ago)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We don&apos;t know what is triggering this, but at the moment we&apos;re still running a few jobs scanning the file systems, copying data away, and deleting files in addition to our normal user jobs, so I would expect the MDT to be a bit busier, but not that bad.&lt;/p&gt;</comment>
                            <comment id="93782" author="ferner" created="Thu, 11 Sep 2014 14:10:58 +0000"  >&lt;p&gt;MDT log from the time after failing over the MDT to this machine until now.&lt;/p&gt;</comment>
                            <comment id="93784" author="ferner" created="Thu, 11 Sep 2014 14:12:51 +0000"  >&lt;p&gt;two dump files collected while I suspect the problem was ongoing. manual_dump.txt has been collected while I was experiencing the problem.&lt;/p&gt;</comment>
                            <comment id="93789" author="ferner" created="Thu, 11 Sep 2014 14:53:26 +0000"  >&lt;p&gt;Ah, I think I may now have fixed this immediate problem.&lt;/p&gt;

&lt;p&gt;We created an LVM snapshot just before extending the file system earlier this week. We had kept this snapshot around and wanted to keep it a little longer while we were performing tests on the extended file system. However, searching for information on the kcopyd process, I came across a post to dm-devel about the performance impact of kcopyd. Even though the post was from May 2007, we decided to remove the snapshot; load average immediately started to drop and is now down to around 8, and client metadata performance has also recovered, nicely noticeable as a jump in file open rates on the MDT at the time of disabling the snapshot...&lt;/p&gt;

&lt;p&gt;I guess the lesson is that snapshots can still have a very large performance impact.&lt;/p&gt;</comment>
                            <comment id="95558" author="ferner" created="Thu, 2 Oct 2014 17:53:33 +0000"  >&lt;p&gt;This might have come back, this time no LVM snapshot involved. &lt;/p&gt;

&lt;p&gt;Over the last few days buffer memory usage has steadily gone up again, without going down as it used to, currently about 88% of the memory appears to be buffers and has been for over a day now. So far clients don&apos;t appear to be affected.&lt;/p&gt;

&lt;p&gt;Most recent /proc/meminfo and /proc/slabinfo are attached. Below are the first few lines of slabtop output as well.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[bnh65367@cs04r-sc-mds03-01 ~]$ slabtop -o | head -20
 Active / Total Objects (% used)    : 40400048 / 44245203 (91.3%)
 Active / Total Slabs (% used)      : 2464035 / 2464417 (100.0%)
 Active / Total Caches (% used)     : 137 / 253 (54.2%)
 Active / Total Size (% used)       : 8144191.17K / 9226241.88K (88.3%)
 Minimum / Average / Maximum Object : 0.02K / 0.21K / 4096.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
29618648 28952513  97%    0.10K 800504       37   3202016K buffer_head            
6944364 5580655  80%    0.50K 992052        7   3968208K ldlm_locks             
3200268 2900338  90%    0.31K 266689       12   1066756K ldlm_resources         
1036350 444641  42%    0.12K  34545       30    138180K size-128               
904113 851578  94%    0.55K 129159        7    516636K radix_tree_node        
574240 331937  57%    0.19K  28712       20    114848K size-192               
360192 166594  46%    0.08K   7504       48     30016K mdd_obj                
311202 166140  53%    0.11K   9153       34     36612K lod_obj                
282840 165746  58%    0.25K  18856       15     75424K mdt_obj                
260792 257126  98%    0.50K  32599        8    130396K size-512               
202257 190027  93%    1.02K  67419        3    269676K ldiskfs_inode_cache    
175761  83015  47%    0.06K   2979       59     11916K size-64                
 78288  62865  80%    0.03K    699      112      2796K size-32                
[bnh65367@cs04r-sc-mds03-01 ~]$ 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="95609" author="adilger" created="Fri, 3 Oct 2014 00:51:23 +0000"  >&lt;p&gt;What is a bit odd here is that there are 5.5M in-use ldlm_locks on 2.9M ldlm_resources, yet there are only 190K inodes in memory (166K objects).  This implies there is something kind of strange happening in the DLM, since there should only be a single resource per MDT object.  There should be at least one ldlm_resource for each ldlm_lock, though having more locks than resources is OK as multiple clients may lock the same resource, or a single client may lock different parts of the same resource.&lt;/p&gt;

&lt;p&gt;One experiment you might do is to run &lt;tt&gt;lctl get_param ldlm.namespaces.&lt;b&gt;MDT&lt;/b&gt;.lru_size&lt;/tt&gt; to get the count of locks held by all the clients, and then &lt;tt&gt;lctl set_param ldlm.namespaces.&lt;b&gt;MDT&lt;/b&gt;.lru_size=clear&lt;/tt&gt; on the clients to drop all their DLM locks.  The set_param will cancel the corresponding locks on the server and flush the client metadata cache as a result, which may have a short-term negative impact on metadata performance, so be aware in case that is unacceptable.&lt;/p&gt;

&lt;p&gt;The cancellation of locks on the clients should result in all of the &lt;tt&gt;ldlm_locks&lt;/tt&gt; structures being freed on the MDS (or at least the sum of the locks on the clients should match the number of ACTIVE ldlm_locks allocated on the MDS). If that isn&apos;t the case, it seems we have some kind of leak in the DLM.&lt;/p&gt;</comment>
                            <comment id="95621" author="ferner" created="Fri, 3 Oct 2014 13:21:17 +0000"  >&lt;p&gt;I&apos;ve done that, unfortunately it didn&apos;t seem to free up much memory.&lt;/p&gt;

&lt;p&gt;During the initial sweep of &lt;tt&gt;lctl get_param ldlm.namespaces.&lt;b&gt;MDT&lt;/b&gt;.lru_size&lt;/tt&gt; for this file system, adding up the numbers for all reachable clients (a few are currently unresponsive and are being looked at, we assume unrelated), we seem to have about 5.3M locks on clients (corresponding to most recent snapshot of slabinfo of 5.4M ldlm_locks).&lt;/p&gt;

&lt;p&gt;After the lru_size=clear, both numbers dropped, now, about 20 minutes later they are back at about 1.5M  each.&lt;/p&gt;

&lt;p&gt;Fresh meminfo/slabinfo about 20 minutes after clearing the locks are attached.&lt;/p&gt;</comment>
                            <comment id="95651" author="ferner" created="Fri, 3 Oct 2014 17:27:27 +0000"  >&lt;p&gt;Unfortunately this started to severely affect file system performance so we had to fail over. I was nearly in time to do a clean unmount but not quite. By the time I started typing the umount command, the MDS froze completely and I was not able to collect any debug_log.&lt;/p&gt;

&lt;p&gt;Since this is now a recurring feature of this file system, any idea how we could prevent it from re-occurring would be much appreciated. If there is anything we can do to help debug this, let us know; we&apos;ll do what we can.&lt;/p&gt;

&lt;p&gt;Frederik&lt;/p&gt;</comment>
                            <comment id="95889" author="green" created="Tue, 7 Oct 2014 23:23:49 +0000"  >&lt;p&gt;Do you have many clients on this system?&lt;/p&gt;

&lt;p&gt;It&apos;s been a known problem in the past that if you let client LRUs grow uncontrollably, servers become somewhat memory-starved.&lt;/p&gt;

&lt;p&gt;One possible workaround is to set lru_size on the clients to something conservative like 100 or 200. Also, if you have mostly non-intersecting jobs on the clients that don&apos;t reuse the same files between different jobs, some sites drop lock LRUs (and other caches) forcefully between job runs.&lt;/p&gt;</comment>
                            <comment id="96193" author="pjones" created="Sun, 12 Oct 2014 17:25:57 +0000"  >&lt;p&gt;Bobijam&lt;/p&gt;

&lt;p&gt;Could this be related to the issue reported in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5727&quot; title=&quot;MDS OOMs with 2.5.3 clients and lru_size != 0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5727&quot;&gt;&lt;del&gt;LU-5727&lt;/del&gt;&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="96232" author="ferner" created="Mon, 13 Oct 2014 17:12:28 +0000"  >&lt;p&gt;Depends on your view, we&apos;ve got just under 300 clients on this file system. &lt;/p&gt;

&lt;p&gt;We&apos;ll try limiting the lru_size and will continue to monitor, looking at &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5727&quot; title=&quot;MDS OOMs with 2.5.3 clients and lru_size != 0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5727&quot;&gt;&lt;del&gt;LU-5727&lt;/del&gt;&lt;/a&gt;, I&apos;m not sure how much this will give us.&lt;/p&gt;

&lt;p&gt;Considering that we have been cleaning the file system, it is also entirely possible that we hit something similar to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-5726&quot; title=&quot;MDS buffer not freed when deleting files&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-5726&quot;&gt;&lt;del&gt;LU-5726&lt;/del&gt;&lt;/a&gt;, i.e. we almost certainly have run &apos;rm -rf&apos; or similar in parallel on multiple clients. I will try to reproduce this tomorrow.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="26304">LU-5583</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="26411">LU-5595</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="15727" name="cs04r-sc-mds03-01-logs-20140911.txt" size="37951" author="ferner" created="Thu, 11 Sep 2014 14:10:58 +0000"/>
                            <attachment id="15721" name="cs04r-sc-mds03-01-lustre-dk_after_umount.xz" size="8611392" author="ferner" created="Wed, 10 Sep 2014 10:20:25 +0000"/>
                            <attachment id="15719" name="cs04r-sc-mds03-01-meminfo-20140909-1705.txt" size="1200" author="ferner" created="Wed, 10 Sep 2014 10:20:25 +0000"/>
                            <attachment id="15867" name="cs04r-sc-mds03-01-meminfo-20141002-1841" size="1201" author="ferner" created="Thu, 2 Oct 2014 17:53:33 +0000"/>
                            <attachment id="15882" name="cs04r-sc-mds03-01-meminfo-20141003-1412" size="1201" author="ferner" created="Fri, 3 Oct 2014 13:21:17 +0000"/>
                            <attachment id="15718" name="cs04r-sc-mds03-01-memory.png" size="27422" author="ferner" created="Wed, 10 Sep 2014 10:20:25 +0000"/>
                            <attachment id="15636" name="cs04r-sc-mds03-01-messages.txt.xz" size="50900" author="ferner" created="Thu, 4 Sep 2014 16:50:38 +0000"/>
                            <attachment id="15720" name="cs04r-sc-mds03-01-slabinfo-20140909-1705.txt" size="27497" author="ferner" created="Wed, 10 Sep 2014 10:20:25 +0000"/>
                            <attachment id="15868" name="cs04r-sc-mds03-01-slabinfo-20141002-1841" size="27386" author="ferner" created="Thu, 2 Oct 2014 17:53:33 +0000"/>
                            <attachment id="15881" name="cs04r-sc-mds03-01-slabinfo-20141003-1412" size="27385" author="ferner" created="Fri, 3 Oct 2014 13:21:17 +0000"/>
                            <attachment id="15729" name="lustre-log.1410414046.22423.xz" size="6560164" author="ferner" created="Thu, 11 Sep 2014 14:12:51 +0000"/>
                            <attachment id="15728" name="manual_dump.txt.xz" size="8008280" author="ferner" created="Thu, 11 Sep 2014 14:12:51 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Mon, 13 Oct 2014 16:50:38 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwvcn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>15580</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Thu, 4 Sep 2014 16:50:38 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    </customfields>
    </item>
</channel>
</rss>