<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:46:37 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4875] We have 2 OSS servers in HA and two MDS in HA. On each OSS, 12 OSTs are mounted with failover. OSS servers reboot while working</title>
                <link>https://jira.whamcloud.com/browse/LU-4875</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We have 2 OSS servers in HA with corosync. Each OSS has 12 OSTs mounted in failover. While working, the OSS servers reboot intermittently, which is badly affecting the availability of the file system.&lt;/p&gt;</description>
                <environment></environment>
        <key id="24141">LU-4875</key>
            <summary>We have 2 OSS servers in HA and two MDS in HA. On each OSS, 12 OSTs are mounted with failover. OSS servers reboot while working</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="4">Incomplete</resolution>
                                        <assignee username="jfc">John Fuchs-Chesney</assignee>
                                    <reporter username="psharma">Pankaj Sharma</reporter>
                        <labels>
                    </labels>
                <created>Wed, 9 Apr 2014 17:05:30 +0000</created>
                <updated>Wed, 29 Nov 2017 21:15:15 +0000</updated>
                            <resolved>Tue, 19 May 2015 00:33:10 +0000</resolved>
                                    <version>Lustre 2.2.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="81306" author="psharma" created="Wed, 9 Apr 2014 17:12:22 +0000"  >&lt;p&gt;The SOS report from the system, which contains all configuration and log files, can be downloaded from &lt;a href=&quot;https://ftp.usa.hp.com/hprc&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://ftp.usa.hp.com/hprc&lt;/a&gt; or ftp.usa.hp.com. The username is lustre and the password is 7ui$HY6x.&lt;/p&gt;</comment>
                            <comment id="81309" author="psharma" created="Wed, 9 Apr 2014 17:18:40 +0000"  >&lt;p&gt;/var/log/messages from both OSS servers are attached&lt;/p&gt;</comment>
                            <comment id="81310" author="psharma" created="Wed, 9 Apr 2014 17:19:43 +0000"  >&lt;p&gt;dmesg from both OSS servers are attached&lt;/p&gt;</comment>
                            <comment id="81333" author="psharma" created="Wed, 9 Apr 2014 22:21:05 +0000"  >&lt;p&gt;ost.threads_max is set to 256 and ost_io.threads_max is set to 256 as well. The monitor timeout is 60 sec and dc-deadtime is 30 sec. Thanks&lt;/p&gt;</comment>
                            <comment id="81337" author="adilger" created="Thu, 10 Apr 2014 05:28:45 +0000"  >&lt;p&gt;I don&apos;t know what .xz files are, so I cannot look at them. The dmesg and messages files do not list how much RAM is on these nodes, nor what type of RAID you are using.  Is it MD software RAID?&lt;/p&gt;

&lt;p&gt;My first guess would be that with 12 very large OSTs (I see 180 disks) on the node that it is just running out of memory. &lt;/p&gt;</comment>
                            <comment id="81345" author="psharma" created="Thu, 10 Apr 2014 10:56:56 +0000"  >&lt;p&gt;Thanks, Andreas, for the prompt reply.&lt;br/&gt;
Each OSS server has 32 GB RAM.&lt;br/&gt;
We are using hardware RAID 5. Each OST consists of 11 x 300 GB SAS disks in RAID 5. We have 12 such OSTs on each OSS.&lt;br/&gt;
Yes, you may be right that they are running out of memory, but how can we confirm that? Is there anything in the logs, or some Lustre debug option we can monitor, that would show it is running out of memory? If you need any other logs, do let me know.&lt;br/&gt;
I will extract the .xz files and upload the HA configuration files as .tar/.zip&lt;/p&gt;</comment>
                            <comment id="81346" author="psharma" created="Thu, 10 Apr 2014 10:59:03 +0000"  >&lt;p&gt;Are there any parameters in Lustre through which we can avoid running out of memory? We have already reduced ost_io.threads_max to 256.&lt;/p&gt;</comment>
                            <comment id="81347" author="psharma" created="Thu, 10 Apr 2014 11:04:59 +0000"  >&lt;p&gt;HA failover configuration file&lt;/p&gt;</comment>
                            <comment id="81351" author="psharma" created="Thu, 10 Apr 2014 11:50:50 +0000"  >&lt;p&gt;sar files from OSS1 for the last 3 days are uploaded, which can give us some idea of CPU utilization, I/O wait, etc.&lt;/p&gt;</comment>
                            <comment id="81352" author="psharma" created="Thu, 10 Apr 2014 12:27:28 +0000"  >&lt;p&gt;We have noticed the following in /var/log/messages: &quot; max_child_count  reached, postponing execution of operation monitor on ocf::Filesystem &quot;&lt;/p&gt;

&lt;p&gt;Does this have some relation to the reboots? If yes, what exactly does it mean?&lt;/p&gt;
</comment>
                            <comment id="81353" author="psharma" created="Thu, 10 Apr 2014 12:30:48 +0000"  >&lt;p&gt;HA configuration files - corosync.conf.txt and oos1-cibxml.txt are uploaded&lt;/p&gt;</comment>
                            <comment id="81356" author="adilger" created="Thu, 10 Apr 2014 13:06:37 +0000"  >&lt;p&gt;You should collect the messages from the console, which is best done by connecting via serial port to the servers. That will hopefully tell you exactly what is going wrong at the time of failure. &lt;/p&gt;

&lt;p&gt;What is the size of the journal on each OST? &lt;/p&gt;</comment>
                            <comment id="81357" author="psharma" created="Thu, 10 Apr 2014 13:30:05 +0000"  >&lt;p&gt;uploaded the last 2 days sar file from OSS2 server&lt;/p&gt;</comment>
                            <comment id="81358" author="psharma" created="Thu, 10 Apr 2014 13:30:55 +0000"  >&lt;p&gt;Please find the OST details below:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;root@homeoss1 ~&amp;#93;&lt;/span&gt;# tune2fs -l /dev/mapper/mpathg&lt;br/&gt;
tune2fs 1.42.7.wc2 (07-Nov-2013)&lt;br/&gt;
device /dev/dm-6 mounted by lustre per /proc/fs/lustre/obdfilter/home-OST0006/mntdev&lt;br/&gt;
Filesystem volume name:   home-OST0006&lt;br/&gt;
Last mounted on:          /&lt;br/&gt;
Filesystem UUID:          5a3ea3b2-568e-4062-a13b-ec5f121c0bd1&lt;br/&gt;
Filesystem magic number:  0xEF53&lt;br/&gt;
Filesystem revision #:    1 (dynamic)&lt;br/&gt;
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent mmp flex_bg sparse_super large_file huge_file uninit_bg dir_nlink&lt;br/&gt;
Filesystem flags:         signed_directory_hash&lt;br/&gt;
Default mount options:    user_xattr acl&lt;br/&gt;
Filesystem state:         clean&lt;br/&gt;
Errors behavior:          Continue&lt;br/&gt;
Filesystem OS type:       Linux&lt;br/&gt;
Inode count:              11435008&lt;br/&gt;
Block count:              731811520&lt;br/&gt;
Reserved block count:     36590576&lt;br/&gt;
Free blocks:              583168103&lt;br/&gt;
Free inodes:              11013734&lt;br/&gt;
First block:              0&lt;br/&gt;
Block size:               4096&lt;br/&gt;
Fragment size:            4096&lt;br/&gt;
Reserved GDT blocks:      848&lt;br/&gt;
Blocks per group:         32768&lt;br/&gt;
Fragments per group:      32768&lt;br/&gt;
Inodes per group:         512&lt;br/&gt;
Inode blocks per group:   32&lt;br/&gt;
RAID stripe width:        256&lt;br/&gt;
Flex block group size:    256&lt;br/&gt;
Filesystem created:       Fri Mar 28 01:16:44 2014&lt;br/&gt;
Last mount time:          Wed Apr  9 16:22:44 2014&lt;br/&gt;
Last write time:          Wed Apr  9 16:22:44 2014&lt;br/&gt;
Mount count:              51&lt;br/&gt;
Maximum mount count:      -1&lt;br/&gt;
Last checked:             Fri Mar 28 01:16:44 2014&lt;br/&gt;
Check interval:           0 (&amp;lt;none&amp;gt;)&lt;br/&gt;
Lifetime writes:          593 GB&lt;br/&gt;
Reserved blocks uid:      0 (user root)&lt;br/&gt;
Reserved blocks gid:      0 (group root)&lt;br/&gt;
First inode:              11&lt;br/&gt;
Inode size:               256&lt;br/&gt;
Required extra isize:     28&lt;br/&gt;
Desired extra isize:      28&lt;br/&gt;
Journal inode:            8&lt;br/&gt;
Default directory hash:   half_md4&lt;br/&gt;
Directory Hash Seed:      5cd731f7-67c3-4db6-9e7c-21db7e829749&lt;br/&gt;
Journal backup:           inode blocks&lt;br/&gt;
MMP block number:         9734&lt;br/&gt;
MMP update interval:      5&lt;/p&gt;</comment>
                            <comment id="81365" author="psharma" created="Thu, 10 Apr 2014 13:59:57 +0000"  >&lt;p&gt;We did not specify a journal size while formatting, so it is the default. We have shared the OST info above, in case you can determine it from that.&lt;/p&gt;</comment>
                            <comment id="81415" author="psharma" created="Fri, 11 Apr 2014 11:24:34 +0000"  >&lt;p&gt;Hi Andreas, is there any further update on the logs and inputs we provided? Please help; we are in a difficult position here.&lt;/p&gt;

&lt;p&gt;Regards,&lt;/p&gt;

&lt;p&gt;Pankaj&lt;/p&gt;</comment>
                            <comment id="81418" author="adilger" created="Fri, 11 Apr 2014 13:01:55 +0000"  >&lt;p&gt;The above information does not include the journal size, but if you used the default then it is 400MB, so 12 journals are 4800MB, which is not enough to use up the 32GB of RAM as I thought.&lt;/p&gt;

&lt;p&gt;You really need to post the serial console logs from the time of a crash. /var/log/messages and dmesg from a running system do not contain the information from the time of failure. &lt;/p&gt;</comment>
                            <comment id="81587" author="psharma" created="Tue, 15 Apr 2014 09:00:38 +0000"  >&lt;p&gt;Hi Andreas,&lt;/p&gt;

&lt;p&gt;At reboot, the following error appears at the console:&lt;br/&gt;
&quot;Message from syslogd@homeoss1 at Apr 15 12:46:34 ...&lt;br/&gt;
kernel:LustreError: 7571:0:(ost_handler.c:1689:ost_prolong_lock_one()) ASSERTION( lock-&amp;gt;l_req_mode == lock-&amp;gt;l_granted_mode ) failed:&lt;/p&gt;

&lt;p&gt;Message from syslogd@homeoss1 at Apr 15 12:46:34 ...&lt;br/&gt;
kernel:LustreError: 7571:0:(ost_handler.c:1689:ost_prolong_lock_one()) LBUG&lt;/p&gt;

&lt;p&gt;Message from syslogd@homeoss1 at Apr 15 12:46:34 ...&lt;br/&gt;
kernel:Kernel panic - not syncing: LBUG&lt;br/&gt;
&quot;&lt;/p&gt;
</comment>
                            <comment id="81588" author="psharma" created="Tue, 15 Apr 2014 09:02:36 +0000"  >&lt;p&gt;Also, just to reduce the load, we were trying to lower the thread count on the OSS, but while setting the thread count to 128 on OSS2 we get the following error:&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;root@homeoss2 ~&amp;#93;&lt;/span&gt;# lctl set_param ost.OSS.ost.threads_max=128&lt;br/&gt;
ost.OSS.ost.threads_max=128&lt;br/&gt;
error: set_param: writing to file /proc/fs/lustre/ost/OSS/ost/threads_max: Numerical result out of range &lt;/p&gt;</comment>
                            <comment id="81589" author="psharma" created="Tue, 15 Apr 2014 09:10:11 +0000"  >&lt;p&gt;Also, during the last reboot on 12 April 2014, we observed the following error messages, in case we can relate something with them:&lt;/p&gt;

&lt;p&gt; Apr 12 02:03:55 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) @@@ Request  sent has failed due to network error: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1397248434/real 1397248435&amp;#93;&lt;/span&gt;  req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248441 ref 2 fl Rpc:X/0/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:03:55 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) @@@ Request  sent has failed due to network error: [sent 1397248435/real &lt;/p&gt;

&lt;p&gt;Apr 12 02:03:57 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) @@@ Request  sent has failed due to network error: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1397248437/real 1397248437&amp;#93;&lt;/span&gt;  req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248444 ref 2 fl Rpc:X/2/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:03:57 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) Skipped 73468 previous similar messages&lt;br/&gt;
Apr 12 02:03:59 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) @@@ Request  sent has failed due to network error: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1397248439/real 1397248439&amp;#93;&lt;/span&gt;  req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248446 ref 2 fl Rpc:X/2/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:03:59 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) Skipped 162386 previous similar messages&lt;br/&gt;
Apr 12 02:04:03 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) @@@ Request  sent has failed due to network error: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1397248443/real 1397248443&amp;#93;&lt;/span&gt;  req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248450 ref 2 fl Rpc:X/2/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:04:03 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) Skipped 296701 previous similar messages&lt;/p&gt;


&lt;p&gt;1397248467]  req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248474 ref 2 fl Rpc:X/2/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:04:27 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) Skipped 1301167 previous similar messages&lt;br/&gt;
Apr 12 02:04:59 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) @@@ Request  sent has failed due to network error: &lt;span class=&quot;error&quot;&gt;&amp;#91;sent 1397248499/real 1397248499&amp;#93;&lt;/span&gt;  req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248506 ref 2 fl Rpc:X/2/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:04:59 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) Skipped 2602376 previous similar messages&lt;br/&gt;
Apr 12 02:06:03 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) @@@ Request  sent has failed due to network error: [sent 1397248563/real &lt;/p&gt;

&lt;p&gt;1397248563]  req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248570 ref 2 fl Rpc:X/2/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:06:03 homeoss1 kernel: Lustre: 7390:0:(client.c:1788:ptlrpc_expire_one_request()) Skipped 5199842 previous similar messages&lt;br/&gt;
Apr 12 02:07:14 homeoss1 kernel: Lustre: Service thread pid 7390 was inactive for 200.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:&lt;/p&gt;


&lt;p&gt;Apr 12 02:07:14 homeoss1 kernel: &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c140&amp;gt;&amp;#93;&lt;/span&gt; ? child_rip+0x0/0x20&lt;br/&gt;
Apr 12 02:07:14 homeoss1 kernel:&lt;br/&gt;
Apr 12 02:07:14 homeoss1 kernel: LustreError: dumping log to /tmp/lustre-log.1397248634.7390&lt;br/&gt;
Apr 12 02:07:25 homeoss1 kernel: Lustre: home-OST0003: haven&apos;t heard from client c0bee620-b606-8adf-dadd-0d895330a3fd (at 10.2.1.252@o2ib) in 227 seconds. I think it&apos;s dead, and I am evicting it. exp ffff88090ff17000, cur 1397248645 expire 1397248495 last 1397248418&lt;br/&gt;
Apr 12 02:07:25 homeoss1 kernel: LustreError: 7390:0:(client.c:1060:ptlrpc_import_delay_req()) @@@ IMP_CLOSED   req@ffff8804e2e86c00 x1464903367127640/t0(0) o106-&amp;gt;home-OST0005@10.2.1.252@o2ib:15/16 lens 296/232 e 0 to 1 dl 1397248652 ref 2 fl Rpc:X/2/ffffffff rc 0/-1&lt;br/&gt;
Apr 12 02:07:25 homeoss1 kernel: LustreError: 138-a: home-OST0005: A client on nid 10.2.1.252@o2ib was evicted due to a lock glimpse callback time out: rc -4&lt;br/&gt;
Apr 12 02:07:25 homeoss1 kernel: Lustre: Service thread pid 7390 completed after 210.90s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).&lt;/p&gt;


&lt;p&gt;Apr 12 02:28:07 homeoss1 crmd: &lt;span class=&quot;error&quot;&gt;&amp;#91;5195&amp;#93;&lt;/span&gt;: notice: run_graph: Transition 256 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-253.bz2): Complete&lt;br/&gt;
Apr 12 02:28:07 homeoss1 crmd: &lt;span class=&quot;error&quot;&gt;&amp;#91;5195&amp;#93;&lt;/span&gt;: info: te_graph_trigger: Transition 256 is now complete&lt;br/&gt;
Apr 12 02:28:07 homeoss1 crmd: &lt;span class=&quot;error&quot;&gt;&amp;#91;5195&amp;#93;&lt;/span&gt;: info: notify_crmd: Transition 256 status: done - &amp;lt;null&amp;gt;&lt;br/&gt;
Apr 12 02:28:07 homeoss1 crmd: &lt;span class=&quot;error&quot;&gt;&amp;#91;5195&amp;#93;&lt;/span&gt;: info: do_state_transition: State transition S_TRANSITION_ENGINE -&amp;gt; S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ] &lt;br/&gt;
Apr 12 02:28:07 homeoss1 crmd: &lt;span class=&quot;error&quot;&gt;&amp;#91;5195&amp;#93;&lt;/span&gt;: info: do_state_transition: Starting PEngine Recheck Timer&lt;br/&gt;
Apr 12 02:31:51 homeoss1 cib: &lt;span class=&quot;error&quot;&gt;&amp;#91;5191&amp;#93;&lt;/span&gt;: info: cib_stats: Processed 1 operations (10000.00us average, 0% utilization) in the last 10min&lt;br/&gt;
Apr 12 02:38:25 homeoss1 kernel: imklog 4.6.2, log source = /proc/kmsg started.&lt;br/&gt;
Apr 12 02:38:25 homeoss1 rsyslogd: &lt;span class=&quot;error&quot;&gt;&amp;#91;origin software=&amp;quot;rsyslogd&amp;quot; swVersion=&amp;quot;4.6.2&amp;quot; x-pid=&amp;quot;7609&amp;quot; x-info=&amp;quot;http://www.rsyslog.com&amp;quot;&amp;#93;&lt;/span&gt; (re)start&lt;/p&gt;
</comment>
                            <comment id="81590" author="psharma" created="Tue, 15 Apr 2014 09:11:09 +0000"  >&lt;p&gt;The uploaded log messages from OSS1 show overloaded and hung threads as well. Can this be the cause of the reboots?&lt;/p&gt;</comment>
                            <comment id="81591" author="psharma" created="Tue, 15 Apr 2014 09:14:13 +0000"  >&lt;p&gt;As per the logs of 12 April, the system rebooted at 02:28:07. Prior to this, at 02:07, the logs show overload, but we don&apos;t know what exactly this means, as CPU utilization was not high.&lt;/p&gt;</comment>
                            <comment id="81592" author="psharma" created="Tue, 15 Apr 2014 09:14:58 +0000"  >&lt;p&gt;Please have a look at the above facts and help us.&lt;/p&gt;</comment>
                            <comment id="81707" author="psharma" created="Wed, 16 Apr 2014 06:00:27 +0000"  >&lt;p&gt;Hi Andreas,&lt;/p&gt;

&lt;p&gt;Please update.&lt;/p&gt;

&lt;p&gt;Regards,&lt;/p&gt;

&lt;p&gt;Pankaj&lt;/p&gt;</comment>
                            <comment id="81801" author="atulvid" created="Thu, 17 Apr 2014 06:01:10 +0000"  >&lt;p&gt;Just wanted to clarify that this is NOT DDN+Intel supported Lustre, nor is it on DDN hardware. IIT Kanpur is a DDN+Intel supported customer, but for another system they procured, not this one.&lt;/p&gt;</comment>
                            <comment id="94409" author="jfc" created="Thu, 18 Sep 2014 16:09:22 +0000"  >&lt;p&gt;I will watch this.&lt;br/&gt;
~ jfc.&lt;/p&gt;</comment>
                            <comment id="115760" author="jfc" created="Tue, 19 May 2015 00:33:10 +0000"  >&lt;p&gt;I&apos;m marking this as resolved/incomplete because we have not been able to get sufficient appropriate information to debug the problem any further and the ticket is now more than six months old.&lt;/p&gt;

&lt;p&gt;~ jfc.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwjpb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>13484</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>