<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:30:11 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3010] client crashes on RHEL6 with Lustre 1.8.8</title>
                <link>https://jira.whamcloud.com/browse/LU-3010</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After upgrading the clients at NOAA to RHEL6 and Lustre 1.8.8, we&apos;re running into an issue where we are seeing kernel panics. Here is the analysis by Redhat:&lt;/p&gt;

&lt;p&gt;I have checked the provided vmcore and below is my diagnosis. It appears the libcfs module is leading to the crash via a NULL pointer dereference. libcfs is a third-party module not shipped by Red Hat; it is probably provided by Lustre. Please check with the Lustre vendor for further investigation. Below is a more detailed analysis from the vmcore.&lt;/p&gt;

&lt;p&gt;Please go through the analysis shared below and contact the Lustre team. If any further assistance is required from the OS side, do let us know.&lt;/p&gt;

&lt;p&gt;~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt; OOPS at &quot;kiblnd_sd_04&quot;&lt;/p&gt;

&lt;p&gt;       KERNEL: vmlinux&lt;br/&gt;
     DUMPFILE: vmcore.flat  &lt;span class=&quot;error&quot;&gt;&amp;#91;PARTIAL DUMP&amp;#93;&lt;/span&gt;&lt;br/&gt;
         CPUS: 24&lt;br/&gt;
         DATE: Thu Mar  7 07:10:37 2013&lt;br/&gt;
       UPTIME: 1 days, 11:34:39&lt;br/&gt;
LOAD AVERAGE: 6.40, 6.05, 6.59&lt;br/&gt;
        TASKS: 655&lt;br/&gt;
     NODENAME: r10i0n5&lt;br/&gt;
      RELEASE: 2.6.32-279.5.2.el6.x86_64&lt;br/&gt;
      VERSION: #1 SMP Tue Aug 14 11:36:39 EDT 2012&lt;br/&gt;
      MACHINE: x86_64  (3466 Mhz)&lt;br/&gt;
       MEMORY: 24 GB&lt;br/&gt;
       PANIC: &quot;Oops: 0010 &amp;#91;#1&amp;#93; SMP &quot; (check log for details)&lt;br/&gt;
          PID: 2889&lt;br/&gt;
      COMMAND: &quot;kiblnd_sd_04&quot;&lt;br/&gt;
         TASK: ffff8803263b7540  &lt;span class=&quot;error&quot;&gt;&amp;#91;THREAD_INFO: ffff880325c14000&amp;#93;&lt;/span&gt;&lt;br/&gt;
          CPU: 12&lt;br/&gt;
        STATE: TASK_RUNNING (PANIC)&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;The following PID is likely the one that hit the panic.&lt;br/&gt;
crash&amp;gt; ps | grep 2889&lt;br/&gt;
&amp;gt;  2889      2  12  ffff8803263b7540  RU   0.0       0      0  &lt;span class=&quot;error&quot;&gt;&amp;#91;kiblnd_sd_04&amp;#93;&lt;/span&gt;&lt;br/&gt;
~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;2&amp;#93;&lt;/span&gt; There appears to have been a resource problem for the PID in question, 2889.&lt;br/&gt;
crash&amp;gt; bt&lt;br/&gt;
PID: 2889   TASK: ffff8803263b7540  CPU: 12  COMMAND: &quot;kiblnd_sd_04&quot;&lt;br/&gt;
  #0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c155a0&amp;#93;&lt;/span&gt; machine_kexec at ffffffff8103281b&lt;br/&gt;
  #1 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15600&amp;#93;&lt;/span&gt; crash_kexec at ffffffff810ba8e2&lt;br/&gt;
  #2 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c156d0&amp;#93;&lt;/span&gt; oops_end at ffffffff81501510&lt;br/&gt;
  #3 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15700&amp;#93;&lt;/span&gt; no_context at ffffffff81043bab&lt;br/&gt;
  #4 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15750&amp;#93;&lt;/span&gt; __bad_area_nosemaphore at ffffffff81043e35&lt;br/&gt;
  #5 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c157a0&amp;#93;&lt;/span&gt; bad_area_nosemaphore at ffffffff81043f03&lt;br/&gt;
  #6 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c157b0&amp;#93;&lt;/span&gt; __do_page_fault at ffffffff81044661&lt;br/&gt;
  #7 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c158d0&amp;#93;&lt;/span&gt; do_page_fault at ffffffff815034ee&lt;br/&gt;
  #8 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15900&amp;#93;&lt;/span&gt; page_fault at ffffffff815008a5&lt;br/&gt;
  #9 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15a58&amp;#93;&lt;/span&gt; libcfs_debug_dumpstack at ffffffffa04808f5 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
#10 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15a78&amp;#93;&lt;/span&gt; lbug_with_loc at ffffffffa0480f25 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
#11 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15ac8&amp;#93;&lt;/span&gt; libcfs_assertion_failed at ffffffffa0489696 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
#12 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15b18&amp;#93;&lt;/span&gt; lnet_match_md at ffffffffa04e7cdc &lt;span class=&quot;error&quot;&gt;&amp;#91;lnet&amp;#93;&lt;/span&gt;&lt;br/&gt;
#13 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15be8&amp;#93;&lt;/span&gt; lnet_parse at ffffffffa04ece8a &lt;span class=&quot;error&quot;&gt;&amp;#91;lnet&amp;#93;&lt;/span&gt;&lt;br/&gt;
#14 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15ce8&amp;#93;&lt;/span&gt; kiblnd_handle_rx at ffffffffa06ca2fb &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
#15 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15d78&amp;#93;&lt;/span&gt; kiblnd_rx_complete at ffffffffa06caea2 &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
#16 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15df8&amp;#93;&lt;/span&gt; kiblnd_complete at ffffffffa06cb092 &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
#17 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15e38&amp;#93;&lt;/span&gt; kiblnd_scheduler at ffffffffa06cb3af &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
#18 &lt;span class=&quot;error&quot;&gt;&amp;#91;ffff880325c15f48&amp;#93;&lt;/span&gt; kernel_thread at ffffffff8100c14a&lt;br/&gt;
~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; I see a few segfaults and LustreError messages prior to the crash.&lt;/p&gt;

&lt;p&gt;crash&amp;gt; log&lt;/p&gt;

&lt;p&gt;PROLOGUE-CHKNODE Wed Mar  6 18:55:45 UTC 2013 Job 16422368: chk_node.pl &lt;br/&gt;
2.3-ad-2012.06.22. Starting Checks ... 16422368.bqs1.zeus.fairmont.rdhp&lt;br/&gt;
cs.noaa.gov&lt;br/&gt;
remap_polar_net&lt;span class=&quot;error&quot;&gt;&amp;#91;778&amp;#93;&lt;/span&gt;: segfault at 100000009 ip 0000000000410f6c sp &lt;br/&gt;
00007fffffffd4f0 error 4 in remap_polar_netcdf.exe&lt;span class=&quot;error&quot;&gt;&amp;#91;400000+16c000&amp;#93;&lt;/span&gt;&lt;br/&gt;
EPILOGUE-CHKNODE Wed Mar  6 18:55:48 UTC 2013 Job 16422363: chk_node.pl &lt;br/&gt;
2.3-ad-2012.06.22. Starting Checks ... 16422363.bqs1.zeus.fairmont.rdhp&lt;br/&gt;
cs.noaa.gov&lt;br/&gt;
EPILOGUE-CHKNODE Wed Mar  6 18:55:51 UTC 2013 Job 16422308: chk_node.pl &lt;br/&gt;
2.3-ad-2012.06.22. Starting Checks ... 16422308.bqs1.zeus.fairmont.rdhp&lt;br/&gt;
cs.noaa.gov&lt;br/&gt;
remap_polar_net&lt;span class=&quot;error&quot;&gt;&amp;#91;1382&amp;#93;&lt;/span&gt;: segfault at 100000009 ip 0000000000410f6c sp &lt;br/&gt;
00007fffffffd530 error 4 in remap_polar_netcdf.exe&lt;span class=&quot;error&quot;&gt;&amp;#91;400000+16c000&amp;#93;&lt;/span&gt;&lt;br/&gt;
EPILOGUE-CHKNODE Wed Mar  6 18:55:56 UTC 2013 Job 16422368: chk_node.pl &lt;br/&gt;
2.3-ad-2012.06.22. Starting Checks ... 16422368.bqs1.zeus.fairmont.rdhpcs.noaa.gov&lt;br/&gt;
.&lt;br/&gt;
.&lt;br/&gt;
.&lt;br/&gt;
.&lt;br/&gt;
PROLOGUE-CHKNODE Wed Mar  6 22:41:51 UTC 2013 Job 16433865: chk_node.pl &lt;br/&gt;
2.3-ad-2012.06.22. Starting Checks ... 16433865.bqs1.zeus.fairmont.rdhpcs.noaa.gov&lt;br/&gt;
LustreError: 12575:0:(file.c:3331:ll_inode_revalidate_fini()) failure -2 inode &lt;br/&gt;
406359028&lt;br/&gt;
LustreError: 12905:0:(file.c:3331:ll_inode_revalidate_fini()) failure -2 inode &lt;br/&gt;
406847678&lt;br/&gt;
EPILOGUE-CHKNODE Wed Mar  6 22:43:48 UTC 2013 Job 16433865: chk_node.pl &lt;br/&gt;
2.3-ad-2012.06.22. Starting Checks ... 16433865.bqs1.zeus.fairmont.rdhpcs.noaa.gov&lt;br/&gt;
.&lt;br/&gt;
.&lt;br/&gt;
.&lt;br/&gt;
.&lt;br/&gt;
PROLOGUE-CHKNODE Thu Mar  7 11:53:11 UTC 2013 Job 16479113: chk_node.pl &lt;br/&gt;
2.3-ad-2012.06.22. Starting Checks ... 16479113.bqs1.zeus.fairmont.rdhpcs.noaa.gov&lt;br/&gt;
LustreError: 2889:0:(lib-move.c:184:lnet_match_md()) ASSERTION(me == md-&amp;gt;md_me) &lt;br/&gt;
failed&lt;br/&gt;
LustreError: 2889:0:(lib-move.c:184:lnet_match_md()) LBUG&lt;br/&gt;
Pid: 2889, comm: kiblnd_sd_04&lt;br/&gt;
Call Trace:&lt;br/&gt;
BUG: unable to handle kernel NULL pointer dereference at (null)&lt;br/&gt;
IP: &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;(null)&amp;gt;&amp;#93;&lt;/span&gt; (null)&lt;br/&gt;
PGD 334320067 PUD 336739067 PMD 0&lt;br/&gt;
Oops: 0010 &amp;#91;#1&amp;#93; SMP&lt;br/&gt;
last sysfs file: &lt;br/&gt;
/sys/devices/pci0000:00/0000:00:09.0/0000:02:00.0/infiniband/mlx4_1/hca_type&lt;br/&gt;
CPU 12&lt;br/&gt;
Modules linked in: mgc(U) lustre(U) lov(U) mdc(U) lquota(U) osc(U) ko2iblnd(U) &lt;br/&gt;
ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) acpi_cpufreq freq_table mperf &lt;br/&gt;
ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_addr ib_sa &lt;br/&gt;
mlx4_ib ib_mad iw_cxgb4 iw_cxgb3 ib_core xpmem(U) xp gru xvma(U) numatools(U) &lt;br/&gt;
microcode serio_raw i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support ioatdma &lt;br/&gt;
shpchp ahci mlx4_en mlx4_core igb dca dm_mirror dm_region_hash dm_log dm_mod nfs &lt;br/&gt;
lockd fscache nfs_acl auth_rpcgss sunrpc be2iscsi bnx2i cnic uio ipv6 cxgb4i &lt;br/&gt;
cxgb4 cxgb3i libcxgbi cxgb3 mdio libiscsi_tcp qla4xxx iscsi_boot_sysfs libiscsi &lt;br/&gt;
scsi_transport_iscsi &lt;span class=&quot;error&quot;&gt;&amp;#91;last unloaded: ipmi_msghandler&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Pid: 2889, comm: kiblnd_sd_04 Not tainted 2.6.32-279.5.2.el6.x86_64 #1 SGI.COM &lt;br/&gt;
AltixICE8400IP105/X8DTT-HallieS&lt;br/&gt;
RIP: 0010:&lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;0000000000000000&amp;gt;&amp;#93;&lt;/span&gt;  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;(null)&amp;gt;&amp;#93;&lt;/span&gt; (null)&lt;br/&gt;
RSP: 0018:ffff880325c159b8  EFLAGS: 00010246&lt;br/&gt;
RAX: ffff880325c15a1c RBX: ffff880325c15a10 RCX: ffffffffa048c320&lt;br/&gt;
RDX: ffff880325c15a50 RSI: ffff880325c15a10 RDI: ffff880325c14000&lt;br/&gt;
RBP: ffff880325c15a50 R08: 0000000000000000 R09: 0000000000000000&lt;br/&gt;
R10: 0000000000000003 R11: 0000000000000000 R12: 000000000000cbe0&lt;br/&gt;
R13: ffffffffa048c320 R14: 0000000000000000 R15: ffff8800282c3fc0&lt;br/&gt;
FS:  00007ffff7fe8700(0000) GS:ffff8800282c0000(0000) knlGS:0000000000000000&lt;br/&gt;
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b&lt;br/&gt;
CR2: 0000000000000000 CR3: 000000032c5fd000 CR4: 00000000000006e0&lt;br/&gt;
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000&lt;br/&gt;
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400&lt;br/&gt;
Process kiblnd_sd_04 (pid: 2889, threadinfo ffff880325c14000, task ffff8803263b7540)&lt;br/&gt;
Stack:&lt;br/&gt;
  ffffffff8100e520 ffff880325c15a1c ffff8803263b7540 ffffffffa04f8914&lt;br/&gt;
&amp;lt;d&amp;gt; 00000000a04fb8d8 ffff880325c14000 ffff880325c15fd8 ffff880325c14000&lt;br/&gt;
&amp;lt;d&amp;gt; 000000000000000c ffff8800282c0000 ffff880325c15a50 ffff880325c15a20&lt;br/&gt;
Call Trace:&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100e520&amp;gt;&amp;#93;&lt;/span&gt; ? dump_trace+0x190/0x3b0&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04808f5&amp;gt;&amp;#93;&lt;/span&gt; libcfs_debug_dumpstack+0x55/0x80 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0480f25&amp;gt;&amp;#93;&lt;/span&gt; lbug_with_loc+0x75/0xe0 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0489696&amp;gt;&amp;#93;&lt;/span&gt; libcfs_assertion_failed+0x66/0x70 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04e7cdc&amp;gt;&amp;#93;&lt;/span&gt; lnet_match_md+0x2fc/0x350 &lt;span class=&quot;error&quot;&gt;&amp;#91;lnet&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa048172e&amp;gt;&amp;#93;&lt;/span&gt; ? cfs_free+0xe/0x10 &lt;span class=&quot;error&quot;&gt;&amp;#91;libcfs&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04f73af&amp;gt;&amp;#93;&lt;/span&gt; ? lnet_nid2peer_locked+0x2f/0x540 &lt;span class=&quot;error&quot;&gt;&amp;#91;lnet&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa04ece8a&amp;gt;&amp;#93;&lt;/span&gt; lnet_parse+0x108a/0x1b30 &lt;span class=&quot;error&quot;&gt;&amp;#91;lnet&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa06ca2fb&amp;gt;&amp;#93;&lt;/span&gt; kiblnd_handle_rx+0x2cb/0x600 &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8104f2bd&amp;gt;&amp;#93;&lt;/span&gt; ? check_preempt_curr+0x6d/0x90&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff810600bc&amp;gt;&amp;#93;&lt;/span&gt; ? try_to_wake_up+0x24c/0x3e0&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa06caea2&amp;gt;&amp;#93;&lt;/span&gt; kiblnd_rx_complete+0x252/0x3e0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81060262&amp;gt;&amp;#93;&lt;/span&gt; ? default_wake_function+0x12/0x20&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8104e309&amp;gt;&amp;#93;&lt;/span&gt; ? __wake_up_common+0x59/0x90&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa06cb092&amp;gt;&amp;#93;&lt;/span&gt; kiblnd_complete+0x62/0xe0 &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa06cb3af&amp;gt;&amp;#93;&lt;/span&gt; kiblnd_scheduler+0x29f/0x760 &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81127e40&amp;gt;&amp;#93;&lt;/span&gt; ? __free_pages+0x60/0xa0&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff81060250&amp;gt;&amp;#93;&lt;/span&gt; ? default_wake_function+0x0/0x20&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c14a&amp;gt;&amp;#93;&lt;/span&gt; child_rip+0xa/0x20&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa06cb110&amp;gt;&amp;#93;&lt;/span&gt; ? kiblnd_scheduler+0x0/0x760 &lt;span class=&quot;error&quot;&gt;&amp;#91;ko2iblnd&amp;#93;&lt;/span&gt;&lt;br/&gt;
  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffff8100c140&amp;gt;&amp;#93;&lt;/span&gt; ? child_rip+0x0/0x20&lt;br/&gt;
Code:  Bad RIP value.&lt;br/&gt;
RIP  &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;(null)&amp;gt;&amp;#93;&lt;/span&gt; (null)&lt;br/&gt;
  RSP &amp;lt;ffff880325c159b8&amp;gt;&lt;br/&gt;
CR2: 0000000000000000&lt;br/&gt;
~~~~~~~~~~~~~~~~~~~~~~~~~~&lt;/p&gt;</description>
                <environment></environment>
        <key id="18052">LU-3010</key>
            <summary>client crashes on RHEL6 with Lustre 1.8.8</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="isaac">Isaac Huang</assignee>
                                    <reporter username="kitwestneat">Kit Westneat</reporter>
                        <labels>
                    </labels>
                <created>Thu, 21 Mar 2013 21:11:41 +0000</created>
                <updated>Wed, 6 Nov 2013 17:47:00 +0000</updated>
                            <resolved>Wed, 6 Nov 2013 17:47:00 +0000</resolved>
                                    <version>Lustre 1.8.8</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>10</watches>
                                                                            <comments>
                            <comment id="54610" author="kitwestneat" created="Thu, 21 Mar 2013 21:13:13 +0000"  >&lt;p&gt;Here are the messages:&lt;br/&gt;
&lt;a href=&quot;http://eu.ddn.com:8080/lustre/DDN-SR19805-r7i1n8.messages.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://eu.ddn.com:8080/lustre/DDN-SR19805-r7i1n8.messages.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before the crash there appears to be a network problem:&lt;br/&gt;
Mar 11 13:39:47 r7i1n8 kernel: LustreError: 3229:0:(o2iblnd_cb.c:2914:kiblnd_check_txs()) Timed out tx: tx_queue, 3 seconds&lt;br/&gt;
Mar 11 13:39:47 r7i1n8 kernel: LustreError: 3229:0:(o2iblnd_cb.c:2977:kiblnd_check_conns()) Timed out RDMA with 10.175.31.242@o2ib4 (65)&lt;/p&gt;


&lt;p&gt;Here is the vmcore:&lt;br/&gt;
&lt;a href=&quot;http://eu.ddn.com:8080/lustre/DDN-SR19805-r7i1n8.dump.tar.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://eu.ddn.com:8080/lustre/DDN-SR19805-r7i1n8.dump.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me know if there is any more information I can provide.&lt;/p&gt;</comment>
                            <comment id="54617" author="green" created="Thu, 21 Mar 2013 22:40:26 +0000"  >&lt;p&gt;is this from our rpms or did you build it yourself? If you built it, we also need kernel-debuginfo rpm and lustre-modules rpm to make the crashdump useful.&lt;/p&gt;</comment>
                            <comment id="54618" author="green" created="Thu, 21 Mar 2013 22:43:39 +0000"  >&lt;p&gt;Also the assertion is the same as old bugzilla 14238 that we landed a patch for quite a while ago (patch by Isaac &lt;a href=&quot;http://git.whamcloud.com/gitweb?p=fs/lustre-dev.git;a=commitdiff;h=85f59695534ddd167fa491c091ed64b1504cdaf7&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://git.whamcloud.com/gitweb?p=fs/lustre-dev.git;a=commitdiff;h=85f59695534ddd167fa491c091ed64b1504cdaf7&lt;/a&gt; )&lt;/p&gt;</comment>
                            <comment id="54626" author="doug" created="Fri, 22 Mar 2013 01:44:14 +0000"  >&lt;p&gt;I&apos;m seeing two things:&lt;/p&gt;

&lt;p&gt;1- The log &quot;Timed out RDMA..&quot; indicates that we have not seen any activity from 10.175.31.242@o2ib4 for 65 seconds.  As such, we are giving up on it and closing the connection.&lt;/p&gt;

&lt;p&gt;2- There is only one assert in lnet_match_md(), and that fires when we see something unexpected in our MD queue.  I am assuming this has happened as a result of the closed connection.  But that is no reason to assert (if something can feasibly happen, we should not be asserting).&lt;/p&gt;

&lt;p&gt;I checked the latest code tree and there seems to be some changes to how locks work in this area of code.  It may be possible there is a race condition here.&lt;/p&gt;

&lt;p&gt;I need to check with a couple of other devs more familiar with this area of code to get their opinion.&lt;/p&gt;</comment>
                            <comment id="54633" author="liang" created="Fri, 22 Mar 2013 03:14:20 +0000"  >&lt;p&gt;First, the assertion is reasonable here: no matter what happened, @me has to equal @me-&amp;gt;me_md-&amp;gt;md_me unless me-&amp;gt;me_md is NULL; that&apos;s the way we implemented it. The locking changes are only in 2.3 and later, and this bug is on 1.8.8, so at the least it&apos;s not a new race condition; and unfortunately I didn&apos;t see any race even when I made those lock changes.&lt;br/&gt;
I believe the first time we saw this was bz11130, and there was no fix for it...&lt;br/&gt;
Let&apos;s see what Isaac comments on this; my suggestion is to change the LASSERT to LASSERTF and check whether &quot;me&quot;, &quot;me-&amp;gt;me_md&quot; and &quot;me-&amp;gt;me_md-&amp;gt;md_me&quot; have been polluted by something else.&lt;/p&gt;</comment>
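The LASSERT-to-LASSERTF suggestion above can be sketched as follows. This is an illustrative Python sketch, not Lustre code; the point is only that a formatted assertion records the offending values in the log, which a bare assertion cannot:

```python
def lassert(cond):
    # Bare assertion (LASSERT-style): on failure we only learn that the
    # condition was false, not what the operands were.
    if not cond:
        raise AssertionError("ASSERTION failed")

def lassertf(cond, fmt, *args):
    # Formatted assertion (LASSERTF-style): on failure the offending
    # values are reported, so the console log shows what was actually
    # in memory when the check fired.
    if not cond:
        raise AssertionError("ASSERTION failed: " + (fmt % args))

# Hypothetical values, for illustration only.
me = 0x1000
corrupted_md_me = 0x2000
try:
    lassertf(me == corrupted_md_me, "me=%#x md_me=%#x", me, corrupted_md_me)
except AssertionError as err:
    print(err)  # the log now records both pointer values
```

With that in place, a recurrence of the crash would show whether me, me-&gt;me_md and me-&gt;me_md-&gt;md_me hold plausible pointers or garbage.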
                            <comment id="54635" author="green" created="Fri, 22 Mar 2013 03:41:36 +0000"  >&lt;p&gt;Liang, there&apos;s a crashdump available, so we can check all the values there.&lt;/p&gt;</comment>
                            <comment id="54644" author="kitwestneat" created="Fri, 22 Mar 2013 12:24:28 +0000"  >&lt;p&gt;Hi Oleg,&lt;/p&gt;

&lt;p&gt;We built them ourselves. Here&apos;s the kernel-debug:&lt;br/&gt;
&lt;a href=&quot;http://vault.centos.org/6.3/updates/x86_64/Packages/kernel-debug-2.6.32-279.5.2.el6.x86_64.rpm&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://vault.centos.org/6.3/updates/x86_64/Packages/kernel-debug-2.6.32-279.5.2.el6.x86_64.rpm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here&apos;s the lustre-modules:&lt;br/&gt;
&lt;a href=&quot;http://eu.ddn.com:8080/lustre/lustre-client-modules-1.8.8-wc1_2.6.32_279.5.2.el6.x86_64_gbc88c4c.x86_64.rpm&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://eu.ddn.com:8080/lustre/lustre-client-modules-1.8.8-wc1_2.6.32_279.5.2.el6.x86_64_gbc88c4c.x86_64.rpm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We haven&apos;t been able to find the lustre-modules debuginfo yet. Is that also needed? If so, could we potentially try to rebuild identical modules in order to regenerate the debug info?&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Kit &lt;/p&gt;</comment>
                            <comment id="54686" author="green" created="Fri, 22 Mar 2013 18:52:27 +0000"  >&lt;p&gt;Thanks.&lt;br/&gt;
So the clients run an unmodified kernel, and we can just grab the debuginfo for it from CentOS?&lt;/p&gt;

&lt;p&gt;As far as Lustre debug info goes, we don&apos;t really strip debug symbols in 1.8 (and in 2.x prior to 2.4), so just the lustre-modules RPM is enough; the debug symbols are embedded in the modules themselves.&lt;/p&gt;</comment>
                            <comment id="54706" author="isaac" created="Fri, 22 Mar 2013 20:21:53 +0000"  >&lt;p&gt;I believe it&apos;s a bug in the generic LNet layer, i.e. under lnet/lnet. It has happened over different LNDs in the past, and shouldn&apos;t be LND specific. My patch in Bug 14238 wasn&apos;t a real fix - it just closed a couple of cases where corruption or a dangling pointer could happen. The root cause was never found, due to lack of debug information. But this time we have a good dump.&lt;/p&gt;</comment>
                            <comment id="54710" author="kitwestneat" created="Fri, 22 Mar 2013 20:28:03 +0000"  >&lt;p&gt;Oleg, yes, it is an unmodified kernel. &lt;/p&gt;

&lt;p&gt;FWIW, we are also seeing this at another customer site, but only their RHEL6 frontends are affected. In that case it appears to be coincident with OOM errors, so maybe it&apos;s related to memory pressure?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;</comment>
                            <comment id="54847" author="kitwestneat" created="Tue, 26 Mar 2013 16:11:59 +0000"  >&lt;p&gt;I was wondering how the analysis was going. Is there anything we can do to help?&lt;/p&gt;</comment>
                            <comment id="54945" author="isaac" created="Wed, 27 Mar 2013 18:20:59 +0000"  >&lt;p&gt;Hi Kit,&lt;/p&gt;

&lt;p&gt;The kernel-debug-2.6.32-279.5.2.el6.x86_64.rpm you pointed to was a kernel with all sorts of run-time debugging options enabled; usually it&apos;s not used on production systems. Also, when I used the vmlinux from the corresponding debuginfo RPM, crash wouldn&apos;t even start. I then tried the debuginfo RPM for the normal (i.e. without debugging options) kernel of the same version; crash would start, but with a warning about a kernel version inconsistency between vmlinux and the dumpfile. Can you please verify the client kernel version and point me to where the vmlinux is?&lt;/p&gt;

&lt;p&gt;With vmlinux from kernel-debuginfo-2.6.32-279.5.2.el6.x86_64.rpm, I got a panic that&apos;s different from the one reported. Instead of &quot;(lib-move.c:184:lnet_match_md()) ASSERTION(me == md-&amp;gt;md_me) failed&quot; in process kiblnd_sd_04, I got:&lt;br/&gt;
(events.c:418:ptlrpc_master_callback()) ASSERTION(callback == request_out_callback || callback == reply_in_callback || callback == client_bulk_callback || callback == request_in_callback || callback == reply_out_callback || callback == server_bulk_callback) failed&lt;br/&gt;
in kiblnd_sd_01.&lt;/p&gt;</comment>
                            <comment id="54956" author="kitwestneat" created="Wed, 27 Mar 2013 20:26:44 +0000"  >&lt;p&gt;Hi Isaac,&lt;/p&gt;

&lt;p&gt;It looks like the RHEL and Centos kernel RPMs are slightly different, and they were running the RHEL one. I am getting the RHEL debuginfo currently and will update the ticket when I get it uploaded. If you happen to have an RHN account, that should work too. &lt;/p&gt;</comment>
                            <comment id="54980" author="kitwestneat" created="Thu, 28 Mar 2013 03:12:49 +0000"  >&lt;p&gt;Here are the kerneldebug rpms:&lt;br/&gt;
&lt;a href=&quot;http://eu.ddn.com:8080/lustre/kernel-debuginfo-2.6.32-279.19.1.el6.x86_64.rpm&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://eu.ddn.com:8080/lustre/kernel-debuginfo-2.6.32-279.19.1.el6.x86_64.rpm&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;http://eu.ddn.com:8080/lustre/kernel-debuginfo-common-x86_64-2.6.32-279.19.1.el6.x86_64.rpm&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://eu.ddn.com:8080/lustre/kernel-debuginfo-common-x86_64-2.6.32-279.19.1.el6.x86_64.rpm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Kit&lt;/p&gt;</comment>
                            <comment id="55048" author="isaac" created="Thu, 28 Mar 2013 19:34:43 +0000"  >&lt;p&gt;Strange, with the RHEL debuginfo, I got the same version warning and then crash failed with other errors. I&apos;ll continue to debug with the CentOS debuginfo which seemed to work despite the version warning. Also, it seemed that the original report was based on another crash dump, because both the DATE and UPTIME in the dump differed. It&apos;d give me more data if that dump is also available. Although this dump had a different failure, it&apos;s similar in many ways.&lt;/p&gt;</comment>
                            <comment id="55050" author="kitwestneat" created="Thu, 28 Mar 2013 20:48:20 +0000"  >&lt;p&gt;Oh doh, I uploaded the wrong debuginfo, 279.19.1 vs 279.5.2. I am uploading the correct one now and will send you the link.&lt;/p&gt;</comment>
                            <comment id="55052" author="kitwestneat" created="Thu, 28 Mar 2013 21:15:48 +0000"  >&lt;p&gt;correct kerneldebug rpms:&lt;br/&gt;
&lt;a href=&quot;http://eu.ddn.com:8080/lustre/kernel-debuginfo-2.6.32-279.5.2.el6.x86_64.rpm&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://eu.ddn.com:8080/lustre/kernel-debuginfo-2.6.32-279.5.2.el6.x86_64.rpm&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;http://eu.ddn.com:8080/lustre/kernel-debuginfo-common-x86_64-2.6.32-279.5.2.el6.x86_64.rpm&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://eu.ddn.com:8080/lustre/kernel-debuginfo-common-x86_64-2.6.32-279.5.2.el6.x86_64.rpm&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="55061" author="isaac" created="Thu, 28 Mar 2013 22:55:33 +0000"  >&lt;p&gt;Thanks - this one worked perfectly, no warning at all.&lt;/p&gt;</comment>
                            <comment id="55121" author="isaac" created="Fri, 29 Mar 2013 19:28:02 +0000"  >&lt;p&gt;I&apos;ve been digging through the crash dump, and it appeared that the kernel slab allocation states got corrupted somehow.&lt;/p&gt;

&lt;p&gt;The assertion failure on the callback pointer in ptlrpc_master_callback() was an indication of memory corruption, because the &apos;callback&apos; pointer in the MD object is NEVER changed by the code after initialization. It looked almost impossible to be a result of racing code - no code changes that &apos;callback&apos; pointer at all. Use-after-free can also be ruled out: the MD object is bigger than a page, and the crash dump contained only the initial part of the object, because the rest of the object resided in a free page that was not included in the partial dump - if it were use-after-free, both pages should have been excluded from the dump. Moreover, the initial part of the MD object contained a correct reference counter. So I was led to believe that the SLAB allocation state got screwed up and the same chunk of memory was returned to multiple callers, and thus the &apos;callback&apos; pointer got clobbered by someone else unknowingly.&lt;/p&gt;

&lt;p&gt;In the past, we&apos;ve seen similar bugs where the root cause was never found, and the solution was to disable some experimental Lustre feature that looked suspicious. In fact, the culprit might not be Lustre at all - any kernel code is technically capable of such screw-ups. My suggestions:&lt;br/&gt;
1. Kit, would it be possible to run the kernel-debug RPM (and the corresponding Lustre modules, which might need to be rebuilt) on a couple of clients where the error has been observed? The kernel-debug kernel has many SLAB and VM debugging options enabled, and I think that would catch any SLAB/VM issues much earlier and move us closer to the root cause.&lt;/p&gt;

&lt;p&gt;2. Doug, I think it&apos;d make sense to have some Lustre folks double-check the errors from the file-system level. I was glancing over the code, and some of it didn&apos;t look good, e.g. in ll_file_join():&lt;br/&gt;
2696         tail = igrab(tail_filp-&amp;gt;f_dentry-&amp;gt;d_inode);&lt;/p&gt;

&lt;p&gt;igrab() can return NULL, yet the code didn&apos;t check for that. Maybe ll_file_join() wasn&apos;t even compiled in; it was just something that raised my eyebrows.&lt;/p&gt;</comment>
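The missing check called out above can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not the actual Lustre code or fix; it only shows the defensive pattern a caller of igrab() needs:

```python
def igrab(inode):
    # Stand-in for the kernel helper: returns the inode with a held
    # reference, or None (NULL) if the inode is going away.
    if inode is None or inode.get("dying"):
        return None
    return inode

def join_tail(tail_inode):
    # Hypothetical caller mirroring the ll_file_join() pattern above:
    # the result of igrab() must be checked before use.
    tail = igrab(tail_inode)
    if tail is None:
        # The unchecked path would dereference NULL right here.
        return "-ESTALE"
    return "ok"

print(join_tail({"dying": True}))
print(join_tail({"dying": False}))
```

The unchecked original would crash on the first call instead of returning an error to the caller.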
                            <comment id="55185" author="kitwestneat" created="Mon, 1 Apr 2013 14:57:45 +0000"  >&lt;p&gt;Hi Isaac,&lt;/p&gt;

&lt;p&gt;Thanks for the analysis. I will try to get the debug kernel installed, but it might take a little bit of time. Would the full vmcore be of any use? Also this seems to appear fairly consistently with RHEL6 clients. Do you think it would be possible to look at differences in the kernel functions that the Lustre client calls? Perhaps there are too many, but if at all possible, it would be good to keep looking for the cause. &lt;/p&gt;</comment>
                            <comment id="55189" author="green" created="Mon, 1 Apr 2013 15:30:50 +0000"  >&lt;p&gt;Isaac: ll_file_join is (was) a known problematic area that was removed in later versions.&lt;br/&gt;
It&apos;s not used by anybody anyway.&lt;/p&gt;</comment>
                            <comment id="55199" author="isaac" created="Mon, 1 Apr 2013 17:04:16 +0000"  >&lt;p&gt;Hi Kit, a vmcore with free pages included would be a bit more helpful. Including them also means kdump doesn&apos;t have to go through the data structures that track free pages, so even if those structures get corrupted kdump would still be able to create the dump - kdump may hang if asked to exclude free pages when those structures are corrupted. At this point, I think kernel-debug is the best way to go - we&apos;d have to be extremely lucky to find something useful by going through that much code almost blindly, like trying to walk out of a dark rain forest with little guidance. The callback pointer corruption was a good indication that somebody else stepped on our toes: our code doesn&apos;t change it at all, and I&apos;ve already double-checked all the pointer arithmetic in lnet to make sure it wasn&apos;t us shooting ourselves in the foot.&lt;/p&gt;

&lt;p&gt;Oleg, thanks for the comment on file join. I saw errors like ...:&lt;br/&gt;
LustreError: 12905:0:(file.c:3331:ll_inode_revalidate_fini()) failure -2 inode 406847678&lt;/p&gt;

&lt;p&gt;... before the assertion happened. Does that ring any alarm bells for you?&lt;/p&gt;</comment>
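For reference, with the stock RHEL6 kdump tooling the free-page exclusion is controlled by makedumpfile's dump level, a bitmask of page types to exclude (1 = zero pages, 2 = cache, 4 = cache private, 8 = user, 16 = free). An illustrative /etc/kdump.conf fragment - check the node's existing core_collector line rather than copying this verbatim:

```shell
# /etc/kdump.conf - illustrative fragment, not a drop-in replacement.
# "-d 31" (a common default) excludes free pages; clearing bit 16 ("-d 15")
# keeps them in the dump, and also spares makedumpfile from walking the
# free-page tracking structures - the ones that may themselves be corrupted.
core_collector makedumpfile -c -d 15
```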
                            <comment id="55209" author="green" created="Mon, 1 Apr 2013 18:08:12 +0000"  >&lt;p&gt;Isaac, this error is a fairly common race.&lt;br/&gt;
What it means is that the client believes a particular name is valid, yet when it tries to get the inode attributes, the file is already gone. This could happen frequently in rm vs. find/ls workloads.&lt;/p&gt;</comment>
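The race generalizes beyond Lustre: any stat() issued after a name was listed can fail with -ENOENT (error -2) if a concurrent unlink slips in between. A quick local illustration (the sleeps are arbitrary, just wide enough to order the two sides deterministically):

```shell
# List a name first, remove it concurrently, then stat the stale name.
dir=$(mktemp -d)
touch "$dir/f"
names=$(ls "$dir")               # "f" is still visible at this point
( sleep 0.1; rm "$dir/f" ) &     # concurrent unlink, as in rm vs find/ls
sleep 0.2; wait                  # unlink has completed by now
for n in $names; do
    # stat fails with ENOENT: "... No such file or directory"
    stat "$dir/$n" 2>&1 | tail -n 1
done
rm -rf "$dir"
```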
                            <comment id="55948" author="kitwestneat" created="Tue, 9 Apr 2013 22:58:36 +0000"  >&lt;p&gt;Isaac,&lt;/p&gt;

&lt;p&gt;I&apos;ve gotten a lot more stack traces from 1.8.9 clients. Some of them are only in the IB functions, not Lustre related, which I find interesting. Are you aware of any memory corruption bugs in the RHEL6 line of RDMA modules? I spent some time looking, but couldn&apos;t find anything definitive. There is:&lt;/p&gt;

&lt;p&gt;BZ#873949&lt;br/&gt;
Previously, the IP over Infiniband (IPoIB) driver maintained state information about neighbors on the network by attaching it to the core network&apos;s neighbor structure. However, due to a race condition between the freeing of the core network neighbor struct and the freeing of the IPoIB network struct, a use after free condition could happen, resulting in either a kernel oops or 4 or 8 bytes of kernel memory being zeroed when it was not supposed to be. These patches decouple the IPoIB neighbor struct from the core networking stack&apos;s neighbor struct so that there is no race between the freeing of one and the freeing of the other.&lt;/p&gt;

&lt;p&gt;There is also a crash in Lustre code that doesn&apos;t seem to be related to IB:&lt;br/&gt;
IP: &lt;span class=&quot;error&quot;&gt;&amp;#91;&amp;lt;ffffffffa0761569&amp;gt;&amp;#93;&lt;/span&gt; lov_change_cbdata+0xd9/0x780 &lt;span class=&quot;error&quot;&gt;&amp;#91;lov&amp;#93;&lt;/span&gt;&lt;br/&gt;
which translates to this line:&lt;br/&gt;
2247                 rc = obd_change_cbdata(lov-&amp;gt;lov_tgts&lt;span class=&quot;error&quot;&gt;&amp;#91;loi-&amp;gt;loi_ost_idx&amp;#93;&lt;/span&gt;-&amp;gt;ltd_exp,&lt;br/&gt;
2248                                        &amp;amp;submd, it, data);&lt;/p&gt;

&lt;p&gt;So it&apos;s very confusing. Here are all the stack traces if you would like to take a look at them:&lt;br/&gt;
&lt;a href=&quot;ftp://shell.sgi.com/collect/NOAADDN&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;ftp://shell.sgi.com/collect/NOAADDN&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our next step I think is to put the debug kernel on some of the clients and see what happens. It&apos;s been impossible to find any pattern to what triggers the crashes, so it will be only by random chance that we reproduce it. Let me know if you see anything in the stack traces, or if you would like to see a vmcore. &lt;/p&gt;

&lt;p&gt;Do you know if there are already any prebuilt client RPMs against the kernel-debug RPM that we could use?&lt;/p&gt;

&lt;p&gt;Thanks.&lt;/p&gt;</comment>
                            <comment id="56052" author="isaac" created="Wed, 10 Apr 2013 21:49:58 +0000"  >&lt;p&gt;Hi, I&apos;ve looked at the crashes and they all appeared to be memory corruptions: bad pointer dereferences, inconsistent data structures caught by assertions, and even a BUG in mm/slab.c. They seemed to be triggered by memory pressure and the increased chance of races across the 24 CPUs.&lt;/p&gt;

&lt;p&gt;The crashes in IB had nothing to do with Lustre - the o2iblnd never uses the IB functions involved in the crashes. In the past, there had been a few memory corruption issues fixed in OFED, like the one you pointed out, but so far no clue has pointed at OFED - anything in kernel space could be the culprit. I still believe that kernel-debug would shed some light and move us closer to the root cause. Unfortunately we don&apos;t build RPMs for kernel-debug.&lt;/p&gt;

&lt;p&gt;Another option would be to upgrade some clients to RHEL 6.4, which includes the fix for the IB memory corruption you mentioned, but that&apos;s likely more work than trying the 6.3 kernel-debug.&lt;/p&gt;</comment>
                            <comment id="70881" author="adilger" created="Wed, 6 Nov 2013 17:47:00 +0000"  >&lt;p&gt;Per Isaac&apos;s comments, this appears to be some form of memory corruption, possibly related to the IB code in the RHEL kernel.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="15467">LU-1734</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvlxb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7331</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>