<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:38:14 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3940] Clients in recovery for a very long time - LustreError: (ldlm_lib.c:941:target_handle_connect())</title>
                <link>https://jira.whamcloud.com/browse/LU-3940</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Dear support,&lt;/p&gt;

&lt;p&gt;We have encountered a serious problem on a production Lustre installation that is negatively affecting our customers.&lt;br/&gt;
More specifically, some OSTs went offline following a cluster node fault, and those volumes tend to remain in recovery forever, denying connections from new clients (XXX clients in recovery for 18446744073709550863s).&lt;br/&gt;
The only practical workaround we have found is to manually abort the recovery procedure so that the volumes can be mounted properly.&lt;br/&gt;
We need to understand whether this situation is due to a bug or to a configuration problem, since we have not been able to find useful information in the bug tracker.&lt;br/&gt;
Below is a short analysis of the problem and some extracts from the /var/log/messages file.&lt;/p&gt;

&lt;p&gt;Thanks in advance for your support.&lt;/p&gt;


&lt;p&gt;Kernel Version installed on the OSS:&lt;/p&gt;

&lt;p&gt;Linux version 2.6.32-220.4.2.el6_lustre.x86_64 (jenkins@client-31.lab.whamcloud.com) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Wed Mar 14 13:03:47 PDT 2012&lt;/p&gt;

&lt;p&gt;Lustre version installed on the OSS and MGS/MDS:&lt;br/&gt;
Build Version: 2.2.0-RC2--PRISTINE-2.6.32-220.4.2.el6_lustre.x86_64&lt;/p&gt;

&lt;p&gt;Lustre version installed on the clients:&lt;br/&gt;
There is a mix of versions (we can be more precise if needed)&lt;br/&gt;
Build Version: 2.2.0-RC2--PRISTINE-2.6.32-220.4.2.el6_lustre.x86_64&lt;br/&gt;
Build Version: 2.4.0-RC2--CHANGED-2.6.32-358.6.2.el6.x86_64&lt;/p&gt;

&lt;p&gt;Processors:&lt;br/&gt;
2x Intel(R) Xeon(R) CPU E5645 @ 2.40GHz&lt;br/&gt;
RAM:&lt;br/&gt;
48GB&lt;/p&gt;

&lt;p&gt;Infrastructure info:&lt;br/&gt;
2 nodes for MDS/MGS in a pacemaker/corosync cluster&lt;br/&gt;
8 OSS nodes arranged as 2-node pacemaker/corosync clusters, with a DotHill storage controller per pair.&lt;br/&gt;
~1000 clients&lt;/p&gt;

&lt;p&gt;Problem description:&lt;/p&gt;

&lt;p&gt;The problem described below was observed on the two-node cluster n-oss07/n-oss08.&lt;br/&gt;
Storage configuration:&lt;/p&gt;

&lt;p&gt;n-oss07:&lt;br/&gt;
    nero-OST0006 -&amp;gt; vd01&lt;br/&gt;
    nero-OST000e -&amp;gt; vd03&lt;br/&gt;
    nero-OST0016 -&amp;gt; vd05&lt;br/&gt;
    nero-OST001e -&amp;gt; vd07&lt;/p&gt;

&lt;p&gt;n-oss08:&lt;br/&gt;
    nero-OST0007 -&amp;gt; vd02&lt;br/&gt;
    nero-OST000f -&amp;gt; vd04&lt;br/&gt;
    nero-OST0017 -&amp;gt; vd06&lt;br/&gt;
    nero-OST001f -&amp;gt; vd08&lt;/p&gt;

&lt;p&gt;On 27.08 at about 11:24 the node n-oss07 hung and was shut down by n-oss08, which successfully took over the n-oss07 volumes, putting the OSTs into recovery; in particular, OST nero-OST001e showed a very long recovery time.&lt;br/&gt;
The logs were full of messages like the following extract:&lt;/p&gt;

&lt;p&gt;-------------------------------------------------------------------------&lt;br/&gt;
Aug 27 11:34:25 n-oss08 kernel: LustreError: 7913:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (eb0a36de-8fcf-cfb5-e996-eb6968148594): 329 clients in recovery for 18446744073709551490s&lt;br/&gt;
-------------------------------------------------------------------------&lt;/p&gt;
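
&lt;p&gt;For what it is worth, 18446744073709551490 is exactly 2^64 - 126, which looks like a negative remaining time printed as an unsigned 64-bit integer, i.e. a recovery deadline that had already passed 126 seconds earlier. A quick check of the arithmetic (our own reconstruction, not the actual Lustre code):&lt;/p&gt;

&lt;pre&gt;
# printf %u reinterprets a negative number as unsigned 64-bit (on a 64-bit shell),
# the same way the kernel&apos;s %llu prints it in the message above:
$ printf &apos;%u\n&apos; -126
18446744073709551490
&lt;/pre&gt;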

&lt;p&gt;When the node n-oss07 came back online, it took its resources back, forcing an unmount that in turn hung the node n-oss08.&lt;/p&gt;

&lt;p&gt;-------------------------------------------------------------------------&lt;br/&gt;
Aug 27 11:41:21 n-oss08 Filesystem[18812]: INFO: Running stop for /dev/mapper/vd07 on /lustre/vd07&lt;br/&gt;
Aug 27 11:41:21 n-oss08 Filesystem[18812]: INFO: Trying to unmount /lustre/vd07&lt;br/&gt;
Aug 27 11:41:21 n-oss08 kernel: Lustre: Failing over nero-OST001e&lt;br/&gt;
Aug 27 11:41:21 n-oss08 kernel: LustreError: 18866:0:(ldlm_lib.c:1978:target_stop_recovery_thread()) nero-OST001e: Aborting recovery&lt;br/&gt;
Aug 27 11:41:40 n-oss08 kernel: LustreError: 137-5: UUID &apos;nero-OST001e_UUID&apos; is not available for connect (stopping)&lt;br/&gt;
Aug 27 11:41:40 n-oss08 kernel: LustreError: 4278:0:(ldlm_lib.c:2239:target_send_reply_msg()) @@@ processing error (-19) req@ffff8805a769d800 x1443813942539660/t0(0) o8-&amp;gt;&amp;lt;?&amp;gt;@&amp;lt;?&amp;gt;:0/0 lens 400/0 e 0 to 0 dl 1377596600 ref 1 fl Interpret:/0/ffffffff rc -19/-1&lt;br/&gt;
Aug 27 11:41:40 n-oss08 kernel: LustreError: 4278:0:(ldlm_lib.c:2239:target_send_reply_msg()) Skipped 15 previous similar messages&lt;br/&gt;
Aug 27 11:42:30 n-oss08 kernel: LustreError: 137-5: UUID &apos;nero-OST001e_UUID&apos; is not available for connect (stopping)&lt;br/&gt;
Aug 27 11:42:30 n-oss08 kernel: LustreError: Skipped 709 previous similar messages&lt;br/&gt;
Aug 27 11:43:35 n-oss08 kernel: LustreError: 137-5: UUID &apos;nero-OST001e_UUID&apos; is not available for connect (stopping)&lt;/p&gt;

&lt;p&gt;Aug 27 11:43:35 n-oss08 kernel: LustreError: Skipped 1053 previous similar messages&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: INFO: task umount:18866 blocked for more than 120 seconds.&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: &quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: umount        D 0000000000000008     0 18866  18812 0x00000080&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: ffff880198ce7998 0000000000000082 0000000000000000 ffffffffa06e6ee3&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: ffff880198ce7928 ffffffffa0436105 00000000000007ba ffffffffa06d9670&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: ffff8803f7467b38 ffff880198ce7fd8 000000000000f4e8 ffff8803f7467b38&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: Call Trace:&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0436105&amp;gt;] ? cfs_print_to_console+0x75/0xe0 [libcfs]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff814edb95&amp;gt;] schedule_timeout+0x215/0x2e0&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff814ed813&amp;gt;] wait_for_common+0x123/0x180&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff8105e7f0&amp;gt;] ? default_wake_function+0x0/0x20&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff810519c3&amp;gt;] ? __wake_up+0x53/0x70&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff814ed92d&amp;gt;] wait_for_completion+0x1d/0x20&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa06664a0&amp;gt;] target_stop_recovery_thread+0x50/0xa0 [ptlrpc]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa066728e&amp;gt;] target_recovery_fini+0x1e/0x30 [ptlrpc]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa07c2266&amp;gt;] filter_precleanup+0xa6/0x470 [obdfilter]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa051dd66&amp;gt;] ? class_disconnect_exports+0x126/0x220 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0531da9&amp;gt;] class_cleanup+0x199/0xa30 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa04404f1&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0519db6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0533303&amp;gt;] class_process_config+0xcc3/0x1670 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0437823&amp;gt;] ? cfs_alloc+0x63/0x90 [libcfs]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa052fc4b&amp;gt;] ? lustre_cfg_new+0x31b/0x640 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0533dfc&amp;gt;] class_manual_cleanup+0x14c/0x560 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0519db6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa053f5ec&amp;gt;] server_put_super+0xaac/0xf40 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff81191376&amp;gt;] ? invalidate_inodes+0xf6/0x190&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff811786cb&amp;gt;] generic_shutdown_super+0x5b/0xe0&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff811787b6&amp;gt;] kill_anon_super+0x16/0x60&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffffa0535a06&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff81179740&amp;gt;] deactivate_super+0x70/0x90&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff811956cf&amp;gt;] mntput_no_expire+0xbf/0x110&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff8119616b&amp;gt;] sys_umount+0x7b/0x3a0&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff810d4582&amp;gt;] ? audit_syscall_entry+0x272/0x2a0&lt;br/&gt;
Aug 27 11:43:48 n-oss08 kernel: [&amp;lt;ffffffff8100b0f2&amp;gt;] system_call_fastpath+0x16/0x1b&lt;br/&gt;
-------------------------------------------------------------------------&lt;/p&gt;

&lt;p&gt;Both nodes were manually rebooted and recovered their own resources, but n-oss07 still showed a very long recovery time for OST nero-OST001e:&lt;/p&gt;

&lt;p&gt;-------------------------------------------------------------------------&lt;br/&gt;
Aug 27 12:05:49 n-oss07 kernel: LustreError: 3851:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709551063s&lt;br/&gt;
Aug 27 12:06:14 n-oss07 kernel: LustreError: 3785:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709551038s&lt;br/&gt;
Aug 27 12:06:39 n-oss07 kernel: LustreError: 3762:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709551013s&lt;br/&gt;
Aug 27 12:07:04 n-oss07 kernel: LustreError: 3760:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709550988s&lt;br/&gt;
Aug 27 12:07:29 n-oss07 kernel: LustreError: 3851:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709550963s&lt;br/&gt;
Aug 27 12:07:54 n-oss07 kernel: LustreError: 3773:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709550938s&lt;br/&gt;
Aug 27 12:08:19 n-oss07 kernel: LustreError: 3851:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709550913s&lt;br/&gt;
Aug 27 12:09:09 n-oss07 kernel: LustreError: 3762:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001e: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 325 clients in recovery for 18446744073709550863s&lt;br/&gt;
-------------------------------------------------------------------------&lt;/p&gt;
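
&lt;p&gt;Note how the bogus value shrinks in lock-step with wall time (25 seconds smaller at each 25-second interval), i.e. it behaves like a countdown that wrapped below zero and kept running. The offsets from 2^64 can be checked with bc:&lt;/p&gt;

&lt;pre&gt;
$ echo &apos;18446744073709551616 - 18446744073709551063&apos; | bc   # value at 12:05:49
553
$ echo &apos;18446744073709551616 - 18446744073709551038&apos; | bc   # value at 12:06:14, 25s later
578
&lt;/pre&gt;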

&lt;p&gt;The same happened for the OST nero-OST001f on the n-oss08 node:&lt;/p&gt;

&lt;p&gt;-------------------------------------------------------------------------&lt;br/&gt;
Aug 27 12:09:09 n-oss08 kernel: LustreError: 3821:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001f: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 360 clients in recovery for 18446744073709550945s&lt;br/&gt;
Aug 27 12:09:09 n-oss08 kernel: LustreError: 3821:0:(ldlm_lib.c:941:target_handle_connect()) Skipped 1 previous similar message&lt;br/&gt;
Aug 27 12:09:59 n-oss08 kernel: LustreError: 3739:0:(ldlm_lib.c:2239:target_send_reply_msg()) @@@ processing error (-16) req@ffff88049d05a400 x1444516067280684/t0(0) o8-&amp;gt;&amp;lt;?&amp;gt;@&amp;lt;?&amp;gt;:0/0 lens 400/264 e 0 to 0 dl 1377598298 ref 1 fl Interpret:/0/0 rc -16/0&lt;br/&gt;
Aug 27 12:09:59 n-oss08 kernel: LustreError: 3739:0:(ldlm_lib.c:2239:target_send_reply_msg()) Skipped 10 previous similar messages&lt;br/&gt;
Aug 27 12:10:16 n-oss08 cib: [3236]: info: cib_stats: Processed 1 operations (0.00us average, 0% utilization) in the last 10min&lt;br/&gt;
Aug 27 12:10:24 n-oss08 kernel: LustreError: 3821:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001f: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 360 clients in recovery for 18446744073709550870s&lt;br/&gt;
Aug 27 12:10:24 n-oss08 kernel: LustreError: 3821:0:(ldlm_lib.c:941:target_handle_connect()) Skipped 2 previous similar messages&lt;br/&gt;
Aug 27 12:12:54 n-oss08 kernel: LustreError: 3821:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001f: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 360 clients in recovery for 18446744073709550720s&lt;br/&gt;
Aug 27 12:12:54 n-oss08 kernel: LustreError: 3821:0:(ldlm_lib.c:941:target_handle_connect()) Skipped 5 previous similar messages&lt;br/&gt;
Aug 27 12:17:29 n-oss08 kernel: LustreError: 3843:0:(ldlm_lib.c:941:target_handle_connect()) nero-OST001f: denying connection for new client 10.201.32.31@o2ib (1439e1a4-2ede-caba-25fd-194822a1ec5a): 360 clients in recovery for 18446744073709550445s&lt;br/&gt;
Aug 27 12:17:29 n-oss08 kernel: LustreError: 3843:0:(ldlm_lib.c:941:target_handle_connect()) Skipped 10 previous similar messages&lt;br/&gt;
Aug 27 12:20:16 n-oss08 cib: [3236]: info: cib_stats: Processed 1 operations (0.00us average, 0% utilization) in the last 10min&lt;br/&gt;
-------------------------------------------------------------------------&lt;/p&gt;

&lt;p&gt;The only action that allowed the OSTs to be mounted was to manually abort the recovery procedure for nero-OST001e and nero-OST001f:&lt;/p&gt;
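
&lt;p&gt;For reference, the abort can be performed either at mount time or on an already mounted OST via lctl; the following is a sketch using our device names, not a transcript of the exact commands we ran:&lt;/p&gt;

&lt;pre&gt;
# abort recovery while (re)mounting the OST:
mount -t lustre -o abort_recov /dev/mapper/vd07 /lustre/vd07

# or abort on an OST that is already mounted and stuck in recovery:
lctl --device nero-OST001e abort_recovery
&lt;/pre&gt;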

&lt;p&gt;-------------------------------------------------------------------------&lt;br/&gt;
Aug 27 12:50:34 n-oss07 kernel: Lustre: nero-OST001e: Now serving nero-OST001e on /dev/mapper/vd07 with recovery enabled&lt;br/&gt;
Aug 27 12:50:34 n-oss07 kernel: Lustre: nero-OST001e: Aborting recovery.&lt;br/&gt;
Aug 27 12:50:34 n-oss07 kernel: Lustre: nero-OST001e: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450 [...]&lt;br/&gt;
Aug 27 12:54:45 n-oss07 kernel: Lustre: Failing over nero-OST001e&lt;br/&gt;
Aug 27 12:54:46 n-oss07 kernel: LustreError: 3814:0:(ldlm_request.c:1170:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway&lt;br/&gt;
Aug 27 12:54:46 n-oss07 kernel: Lustre: nero-OST001e: shutting down for failover; client state will be preserved.&lt;br/&gt;
Aug 27 12:54:46 n-oss07 kernel: LustreError: 3814:0:(ldlm_request.c:1796:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108&lt;br/&gt;
Aug 27 12:54:46 n-oss07 kernel: Lustre: OST nero-OST001e has stopped.&lt;br/&gt;
Aug 27 12:54:49 n-oss07 kernel: Lustre: server umount nero-OST001e complete&lt;br/&gt;
[...]&lt;br/&gt;
Aug 27 12:56:40 n-oss07 Filesystem[5002]: INFO: Running start for /dev/mapper/vd07 on /lustre/vd07&lt;br/&gt;
Aug 27 12:56:40 n-oss07 kernel: Lustre: nero-OST0016: Will be in recovery for at least 2:30, or until 746 clients reconnect&lt;br/&gt;
Aug 27 12:56:40 n-oss07 kernel: LDISKFS-fs (dm-7): warning: maximal mount count reached, running e2fsck is recommended&lt;br/&gt;
Aug 27 12:56:40 n-oss07 kernel: LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts:&lt;br/&gt;
Aug 27 12:56:40 n-oss07 kernel: LDISKFS-fs (dm-7): warning: maximal mount count reached, running e2fsck is recommended&lt;br/&gt;
Aug 27 12:56:40 n-oss07 kernel: LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts:&lt;br/&gt;
Aug 27 12:56:40 n-oss07 kernel: Lustre: 5111:0:(ldlm_lib.c:2019:target_recovery_init()) RECOVERY: service nero-OST001e, 746 recoverable clients, last_transno 734586500&lt;br/&gt;
[...]&lt;br/&gt;
Aug 27 12:56:41 n-oss07 kernel: Lustre: nero-OST001e: Will be in recovery for at least 2:30, or until 746 clients reconnect&lt;br/&gt;
-------------------------------------------------------------------------&lt;/p&gt;
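
&lt;p&gt;After the remount, recovery progress can be followed through the per-target recovery_status file (shown here with one of our OST names):&lt;/p&gt;

&lt;pre&gt;
# recovery state, connected/recoverable client counts and remaining time:
lctl get_param obdfilter.nero-OST001e.recovery_status
&lt;/pre&gt;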

&lt;p&gt;-------------------------------------------------------------------------&lt;br/&gt;
Aug 27 12:53:12 n-oss08 kernel: Lustre: nero-OST001f: Now serving nero-OST001f on /dev/mapper/vd08 with recovery enabled&lt;br/&gt;
Aug 27 12:53:13 n-oss08 kernel: Lustre: nero-OST001f: Aborting recovery.&lt;br/&gt;
Aug 27 12:53:13 n-oss08 kernel: Lustre: nero-OST001f: Imperative Recovery enabled, recovery window shrunk from 300-900 down to 150-450&lt;br/&gt;
Aug 27 12:53:20 n-oss08 kernel: LustreError: 137-5: UUID &apos;nero-OST000e_UUID&apos; is not available for connect (no target)&lt;br/&gt;
Aug 27 12:53:20 n-oss08 kernel: LustreError: 3402:0:(ldlm_lib.c:2239:target_send_reply_msg()) @@@ processing error (-19) req@ffff8805ff3f4400 x1439334118204356/t0(0) o8-&amp;gt;&amp;lt;?&amp;gt;@&amp;lt;?&amp;gt;:0/0 lens 400/0 e 0 to 0 dl 1377600899 ref 1 fl Interpret:/0/ffffffff rc -19/-1&lt;br/&gt;
Aug 27 12:53:20 n-oss08 kernel: LustreError: 3402:0:(ldlm_lib.c:2239:target_send_reply_msg()) Skipped 743 previous similar messages&lt;br/&gt;
Aug 27 12:53:20 n-oss08 kernel: LustreError: Skipped 676 previous similar messages&lt;br/&gt;
Aug 27 12:53:21 n-oss08 kernel: Lustre: 3430:0:(filter.c:2691:filter_connect_internal()) nero-OST001f: Received MDS connection for group 0&lt;br/&gt;
Aug 27 12:53:21 n-oss08 kernel: Lustre: nero-OST001f: received MDS connection from 10.201.62.14@o2ib&lt;br/&gt;
Aug 27 12:53:22 n-oss08 kernel: Lustre: 3412:0:(filter.c:2547:filter_llog_connect()) nero-OST001f: Recovery from log 0xf9f0621/0x0:c4b6c135&lt;br/&gt;
Aug 27 12:53:52 n-oss08 kernel: LustreError: 137-5: UUID &apos;nero-OST0006_UUID&apos; is not available for connect (no target)&lt;br/&gt;
Aug 27 12:53:52 n-oss08 kernel: LustreError: 3458:0:(ldlm_lib.c:2239:target_send_reply_msg()) @@@ processing error (-19) req@ffff8805f7623800 x1443956471336464/t0(0) o8-&amp;gt;&amp;lt;?&amp;gt;@&amp;lt;?&amp;gt;:0/0 lens 400/0 e 0 to 0 dl 1377600931 ref 1 fl Interpret:/0/ffffffff rc -19/-1&lt;br/&gt;
Aug 27 12:53:52 n-oss08 kernel: LustreError: 3458:0:(ldlm_lib.c:2239:target_send_reply_msg()) Skipped 2056 previous similar messages&lt;br/&gt;
Aug 27 12:53:52 n-oss08 kernel: LustreError: Skipped 2120 previous similar messages&lt;br/&gt;
[...]&lt;br/&gt;
Aug 27 12:54:37 n-oss08 kernel: Lustre: Failing over nero-OST001f&lt;br/&gt;
Aug 27 12:54:39 n-oss08 kernel: LustreError: 3734:0:(ldlm_request.c:1170:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway&lt;br/&gt;
Aug 27 12:54:39 n-oss08 kernel: Lustre: nero-OST001f: shutting down for failover; client state will be preserved.&lt;br/&gt;
Aug 27 12:54:39 n-oss08 kernel: LustreError: 3734:0:(ldlm_request.c:1796:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108&lt;br/&gt;
Aug 27 12:54:39 n-oss08 kernel: Lustre: OST nero-OST001f has stopped.&lt;br/&gt;
Aug 27 12:54:42 n-oss08 kernel: Lustre: server umount nero-OST001f complete&lt;br/&gt;
[...]&lt;br/&gt;
Aug 27 12:56:40 n-oss08 Filesystem[4734]: INFO: Running start for /dev/mapper/vd08 on /lustre/vd08&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: LDISKFS-fs (dm-7): warning: maximal mount count reached, running e2fsck is recommended&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts:&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: LDISKFS-fs (dm-7): warning: maximal mount count reached, running e2fsck is recommended&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: LDISKFS-fs (dm-7): mounted filesystem with ordered data mode. Opts:&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: Lustre: 4843:0:(ldlm_lib.c:2019:target_recovery_init()) RECOVERY: service nero-OST001f, 746 recoverable clients, last_transno 730833312&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: Lustre: nero-OST001f: Now serving nero-OST001f on /dev/mapper/vd08 with recovery enabled&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: Lustre: nero-OST001f: temporarily refusing client connection from 10.201.51.36@o2ib&lt;br/&gt;
Aug 27 12:56:40 n-oss08 kernel: Lustre: Skipped 42 previous similar messages&lt;br/&gt;
Aug 27 12:56:41 n-oss08 lrmd: [3762]: info: Managed fs_vd08:start process 4734 exited with return code 0.&lt;br/&gt;
Aug 27 12:56:41 n-oss08 crmd: [3765]: info: process_lrm_event: LRM operation fs_vd08_start_0 (call=17, rc=0, cib-update=24, confirmed=true) ok&lt;br/&gt;
Aug 27 12:56:41 n-oss08 kernel: Lustre: nero-OST001f: Will be in recovery for at least 2:30, or until 746 clients reconnect&lt;br/&gt;
-------------------------------------------------------------------------&lt;/p&gt;</description>
                <environment>Linux Kernel version 2.6.32-220.4.2.el6_lustre.x86_64 (jenkins@client-31.lab.whamcloud.com) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Wed Mar 14 13:03:47 PDT 2012&lt;br/&gt;
&lt;br/&gt;
Lustre Build Version: 2.2.0-RC2--PRISTINE-2.6.32-220.4.2.el6_lustre.x86_64&lt;br/&gt;
&lt;br/&gt;
Processors: 2x Intel(R) Xeon(R) CPU E5645 @ 2.40GHz - RAM: 48GB</environment>
        <key id="20932">LU-3940</key>
            <summary>Clients in recovery for a very long time - LustreError: (ldlm_lib.c:941:target_handle_connect())</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="bfaccini">Bruno Faccini</assignee>
                                    <reporter username="matteo.piccinini">Matteo Piccinini</reporter>
                        <labels>
                    </labels>
                <created>Thu, 12 Sep 2013 16:09:09 +0000</created>
                <updated>Wed, 16 Oct 2013 09:06:09 +0000</updated>
                            <resolved>Wed, 16 Oct 2013 09:06:09 +0000</resolved>
                                    <version>Lustre 2.2.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="66524" author="bfaccini" created="Thu, 12 Sep 2013 17:00:19 +0000"  >&lt;p&gt;Hello Matteo,&lt;br/&gt;
Do you have any idea of what caused the original n-oss07 hang ? Is there any infos available about it (crash-dump, stack-traces, ...) ??&lt;br/&gt;
Also, what is not clear for me is did n-oss08 fully take-over before n-oss07 was rebooted and took control back ??&lt;br/&gt;
Again, any infos (crash-dump, stack-traces, ...) available for next hangs/reboots ??&lt;/p&gt;

</comment>
                            <comment id="66589" author="matteo.piccinini" created="Fri, 13 Sep 2013 14:08:21 +0000"  >&lt;p&gt;Hello Bruno,&lt;/p&gt;

&lt;p&gt;Just to clarify, see my inline replies below.&lt;/p&gt;

&lt;p&gt;&amp;gt; Do you have any idea of what caused the original n-oss07 hang? Is there any info available about it (crash dumps, stack traces, ...)?&lt;/p&gt;

&lt;p&gt;Unfortunately no.&lt;/p&gt;

&lt;p&gt;&amp;gt; Also, what is not clear to me: did n-oss08 fully take over before n-oss07 was rebooted and took control back?&lt;/p&gt;

&lt;p&gt;Yes, I can confirm that node n-oss08 took over all the resources from n-oss07, as shown in the messages log extract below.&lt;br/&gt;
But when n-oss07 took control back, the first resource to migrate from n-oss08 was fs_vd07 (nero-OST001e), which was still stuck in the very long recovery, thus hanging the node.&lt;/p&gt;

&lt;p&gt;---------------------------------------------------------------------------------------------------------------------------------------&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: group_print:  Resource Group: oss_odd&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd01#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd03#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd05#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd07#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: group_print:  Resource Group: oss_even&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd02#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd04#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd06#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: info: native_print: fs_vd08#011(ocf::heartbeat:Filesystem):#011Started n-oss08.hpc-net.ethz.ch&lt;/p&gt;

&lt;p&gt;[...]&lt;/p&gt;

&lt;p&gt;Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Move fs_vd01#011(Started n-oss08.hpc-net.ethz.ch -&amp;gt; n-oss07.hpc-net.ethz.ch)&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Move fs_vd03#011(Started n-oss08.hpc-net.ethz.ch -&amp;gt; n-oss07.hpc-net.ethz.ch)&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Move fs_vd05#011(Started n-oss08.hpc-net.ethz.ch -&amp;gt; n-oss07.hpc-net.ethz.ch)&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Move fs_vd07#011(Started n-oss08.hpc-net.ethz.ch -&amp;gt; n-oss07.hpc-net.ethz.ch)&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Leave   fs_vd02#011(Started n-oss08.hpc-net.ethz.ch)&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Leave   fs_vd04#011(Started n-oss08.hpc-net.ethz.ch)&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Leave   fs_vd06#011(Started n-oss08.hpc-net.ethz.ch)&lt;br/&gt;
Aug 27 11:41:20 n-oss08 pengine: [3794]: notice: LogActions: Leave   fs_vd08#011(Started n-oss08.hpc-net.ethz.ch)&lt;/p&gt;

&lt;p&gt;[...]&lt;/p&gt;

&lt;p&gt;Aug 27 11:41:21 n-oss08 crmd: [3795]: info: te_rsc_command: Initiating action 26: stop fs_vd07_stop_0 on n-oss08.hpc-net.ethz.ch (local)&lt;/p&gt;

&lt;p&gt;[...]&lt;/p&gt;

&lt;p&gt;Aug 27 11:41:21 n-oss08 Filesystem[18812]: INFO: Running stop for /dev/mapper/vd07 on /lustre/vd07&lt;br/&gt;
Aug 27 11:41:21 n-oss08 Filesystem[18812]: INFO: Trying to unmount /lustre/vd07&lt;br/&gt;
Aug 27 11:41:21 n-oss08 kernel: Lustre: Failing over nero-OST001e&lt;br/&gt;
Aug 27 11:41:21 n-oss08 kernel: LustreError: 18866:0:(ldlm_lib.c:1978:target_stop_recovery_thread()) nero-OST001e: Aborting recovery&lt;br/&gt;
Aug 27 11:41:40 n-oss08 kernel: LustreError: 137-5: UUID &apos;nero-OST001e_UUID&apos; is not available for connect (stopping)&lt;br/&gt;
Aug 27 11:41:40 n-oss08 kernel: LustreError: 4278:0:(ldlm_lib.c:2239:target_send_reply_msg()) @@@ processing error (-19) req@ffff8805a769d800 x1443813942539660/t0(0) o8-&amp;gt;&amp;lt;?&amp;gt;@&amp;lt;?&amp;gt;:0/0 lens 400/0 e 0 to 0 dl 1377596600 ref 1 fl Interpret:/0/ffffffff rc -19/-1&lt;br/&gt;
-----------------------------------------------------------------------------------------------------------------------------------------&lt;/p&gt;

&lt;p&gt;&amp;gt; Again, is any info (crash dumps, stack traces, ...) available for the subsequent hangs/reboots?&lt;/p&gt;

&lt;p&gt;Again, unfortunately, no crash dumps are available.&lt;/p&gt;</comment>
                            <comment id="68043" author="bfaccini" created="Tue, 1 Oct 2013 13:12:36 +0000"  >&lt;p&gt;I can&apos;t find the reason of the so long recovery, but this may be due to some issue on Clients side.&lt;br/&gt;
BTW, your current configuration with 2.4 Clients and 2.2 Servers is not a supported nor tested one so we may face some interop issue there ....&lt;/p&gt;</comment>
                            <comment id="68976" author="bfaccini" created="Tue, 15 Oct 2013 12:20:15 +0000"  >&lt;p&gt;Hello Matteo,&lt;br/&gt;
Have you been able to find any infos/msgs in Clients logs/Consoles ??&lt;br/&gt;
On the other hand, if the issue did not reproduce, can we at least downgrade this ticket&apos;s priority ?&lt;/p&gt;</comment>
                            <comment id="69089" author="matteo.piccinini" created="Wed, 16 Oct 2013 08:58:02 +0000"  >&lt;p&gt;Hello Bruno,&lt;/p&gt;

&lt;p&gt;We could not find any further information on the system, and fortunately the issue has not reproduced.&lt;br/&gt;
We can close the ticket.&lt;br/&gt;
Thanks.&lt;/p&gt;</comment>
                            <comment id="69090" author="bfaccini" created="Wed, 16 Oct 2013 09:06:09 +0000"  >&lt;p&gt;Thank&apos;s for the update Matteo. So closing ticket as not-reproducible.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="13459" name="var.log.messages.tar.gz" size="412210" author="matteo.piccinini" created="Thu, 12 Sep 2013 16:09:09 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzw23r:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>10422</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>