<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:18:05 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1603] NULL-pointer dereference in ldiskfs_statfs()</title>
                <link>https://jira.whamcloud.com/browse/LU-1603</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Using OFD on master, at the end of runtests, the OSS crashed:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
Lustre: DEBUG MARKER: -----============= acceptance-small: runtests ============----- Fri Jun 29 11:48:41 PDT 2012
Lustre: DEBUG MARKER: Using TIMEOUT=20
LNet: 6743:0:(debug.c:324:libcfs_debug_str2mask()) You are trying to use a numerical value for the mask - this will be deprecated in a future release.
LNet: 6743:0:(debug.c:324:libcfs_debug_str2mask()) Skipped 7 previous similar messages
Lustre: DEBUG MARKER: touching /mnt/lustre at Fri Jun 29 11:48:47 PDT 2012
Lustre: DEBUG MARKER: create an empty file /mnt/lustre/hosts.6243
Lustre: DEBUG MARKER: copying /etc/hosts to /mnt/lustre/hosts.6243
Lustre: DEBUG MARKER: comparing /etc/hosts and /mnt/lustre/hosts.6243
Lustre: DEBUG MARKER: renaming /mnt/lustre/hosts.6243 to /mnt/lustre/hosts.6243.ren
Lustre: DEBUG MARKER: copying /etc/hosts to /mnt/lustre/hosts.6243 again
Lustre: DEBUG MARKER: truncating /mnt/lustre/hosts.6243
Lustre: DEBUG MARKER: removing /mnt/lustre/hosts.6243
Lustre: DEBUG MARKER: truncating /mnt/lustre/hosts.6243.2 to 123 bytes
Lustre: DEBUG MARKER: creating /mnt/lustre/runtest.6243
Lustre: DEBUG MARKER: copying files from /etc /bin to /mnt/lustre/runtest.6243/etc /bin at Fri Jun 29 11:48:57 PDT 2012
Lustre: DEBUG MARKER: comparing newly copied files at Fri Jun 29 11:49:02 PDT 2012
Lustre: DEBUG MARKER: finished at Fri Jun 29 11:49:03 PDT 2012 (16)
Lustre: 2982:0:(client.c:1870:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: [sent 1340995754/real 1340995754]  req@ffff88007a8b0800 x1406135840014422/t0(0) o400-&amp;gt;MGC10.10.4.126@tcp@10.10.4.126@tcp:26/25 lens 224/224 e 0 to 1 dl 1340995761 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
LustreError: 166-1: MGC10.10.4.126@tcp: Connection to MGS (at 10.10.4.126@tcp) was lost; in progress operations using this service will fail
Lustre: server umount lustre-OST0000 complete
Lustre: 2980:0:(client.c:1870:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: [sent 1340995761/real 1340995761]  req@ffff88007a8b0800 x1406135840014424/t0(0) o250-&amp;gt;MGC10.10.4.126@tcp@10.10.4.126@tcp:26/25 lens 400/544 e 0 to 1 dl 1340995767 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: server umount lustre-OST0001 complete
Lustre: server umount lustre-OST0002 complete
Lustre: 2980:0:(client.c:1870:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: [sent 1340995771/real 1340995771]  req@ffff88007a8b0800 x1406135840014425/t0(0) o250-&amp;gt;MGC10.10.4.126@tcp@10.10.4.126@tcp:26/25 lens 400/544 e 0 to 1 dl 1340995782 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: server umount lustre-OST0003 complete
Lustre: lustre-OST0004 is waiting for obd_unlinked_exports more than 8 seconds. The obd refcount = 3. Is it stuck?
Lustre: server umount lustre-OST0004 complete
Lustre: 2980:0:(client.c:1870:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: [sent 1340995786/real 1340995786]  req@ffff880076acf400 x1406135840014426/t0(0) o250-&amp;gt;MGC10.10.4.126@tcp@10.10.4.126@tcp:26/25 lens 400/544 e 0 to 1 dl 1340995802 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: server umount lustre-OST0005 complete
Lustre: 2980:0:(client.c:1870:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: [sent 1340995806/real 1340995806]  req@ffff88007a7ba000 x1406135840014427/t0(0) o250-&amp;gt;MGC10.10.4.126@tcp@10.10.4.126@tcp:26/25 lens 400/544 e 0 to 1 dl 1340995827 ref 1 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: server umount lustre-OST0006 complete
LNet: 7921:0:(debug.c:324:libcfs_debug_str2mask()) You are trying to use a numerical value for the mask - this will be deprecated in a future release.
LNet: 7921:0:(debug.c:324:libcfs_debug_str2mask()) Skipped 1 previous similar message
LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: 
LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: 
Lustre: MGC10.10.4.126@tcp: Reactivating import
BUG: unable to handle kernel NULL pointer dereference at 0000000000000550
IP: [&amp;lt;ffffffffa0b3751e&amp;gt;] ldiskfs_statfs+0x3e/0x1d0 [ldiskfs]
PGD 76a17067 PUD 37b55067 PMD 0 
Oops: 0002 [#1] SMP 
last sysfs file: /sys/devices/pci0000:00/0000:00:05.0/virtio1/block/vda/queue/max_sectors_kb
CPU 0 
Modules linked in: nfs fscache lustre(U) ofd(U) ost(U) osd_ldiskfs(U) cmm(U) fsfilt_ldiskfs(U) ldiskfs(U) mdt(U) mdd(U) mds(U) mgs(U) jbd2 mgc(U) lquota(U) lov(U) osc(U) mdc(U) lmv(U) fid(U) fld(U) ptlrpc(U) obdclass(U) lvfs(U) ksocklnd(U) lnet(U) libcfs(U) nfsd lockd nfs_acl auth_rpcgss exportfs autofs4 sunrpc ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_addr ipv6 ib_sa ib_mad ib_core microcode virtio_balloon 8139too 8139cp mii i2c_piix4 i2c_core ext3 jbd mbcache virtio_blk virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib]

Pid: 8183, comm: llog_process_th Not tainted 2.6.32-220.17.1.el6_lustre.ge531dc4.x86_64 #1 Red Hat KVM
RIP: 0010:[&amp;lt;ffffffffa0b3751e&amp;gt;]  [&amp;lt;ffffffffa0b3751e&amp;gt;] ldiskfs_statfs+0x3e/0x1d0 [ldiskfs]
RSP: 0018:ffff8800796cdac0  EFLAGS: 00010206
RAX: 00000000001b9c00 RBX: 0000000000000038 RCX: 0000000000000080
RDX: 00000000001b9c00 RSI: 0000000000000037 RDI: ffff880070523000
RBP: ffff8800796cdb00 R08: 0000000000000000 R09: 00000000000041ed
R10: 0000000000000000 R11: 0000000000000000 R12: ffff880064ed3400
R13: 0000000000000550 R14: ffff880070523000 R15: ffff88007aeea800
FS:  00007efff9f78700(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000550 CR3: 0000000076acf000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process llog_process_th (pid: 8183, threadinfo ffff8800796cc000, task ffff88007cd17540)
Stack:
 0000000000000038 0000000000000012 00000000c0ffeeaa ffff88007a5f6000
&amp;lt;0&amp;gt; ffff8800796cdc20 0000000000000550 ffff88007a5f60b0 ffff8800772de958
&amp;lt;0&amp;gt; ffff8800796cdb50 ffffffffa0bc6fd4 ffff880070523000 ffff88007a5f6140
Call Trace:
 [&amp;lt;ffffffffa0bc6fd4&amp;gt;] osd_statfs+0xf4/0x360 [osd_ldiskfs]
 [&amp;lt;ffffffffa0c3a110&amp;gt;] ofd_statfs_internal+0xb0/0x2c0 [ofd]
 [&amp;lt;ffffffffa0c39017&amp;gt;] ofd_device_alloc+0x897/0x16a0 [ofd]
 [&amp;lt;ffffffffa052a347&amp;gt;] obd_setup+0x1d7/0x2f0 [obdclass]
 [&amp;lt;ffffffffa05152bb&amp;gt;] ? class_new_export+0x72b/0x960 [obdclass]
 [&amp;lt;ffffffffa052a668&amp;gt;] class_setup+0x208/0x890 [obdclass]
 [&amp;lt;ffffffffa05317cc&amp;gt;] class_process_config+0xbec/0x1c20 [obdclass]
 [&amp;lt;ffffffffa0398be0&amp;gt;] ? cfs_alloc+0x30/0x60 [libcfs]
 [&amp;lt;ffffffffa052bfd3&amp;gt;] ? lustre_cfg_new+0x353/0x7e0 [obdclass]
 [&amp;lt;ffffffffa05338ab&amp;gt;] class_config_llog_handler+0x9bb/0x1610 [obdclass]
 [&amp;lt;ffffffffa05025c0&amp;gt;] ? llog_lvfs_next_block+0x2d0/0x650 [obdclass]
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffffa04fd1c8&amp;gt;] llog_process_thread+0x888/0xd00 [obdclass]
 [&amp;lt;ffffffff814f44ec&amp;gt;] ? kprobe_flush_task+0xbc/0xe0
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
Code: 44 00 00 4c 8b b7 90 00 00 00 49 89 f5 4d 8b be 90 02 00 00 41 80 7f 70 00 4d 8b 67 60 0f 89 fa 00 00 00 49 c7 47 40 00 00 00 00 &amp;lt;49&amp;gt; c7 45 00 53 ef 00 00 49 8b 46 18 49 8d bf c8 00 00 00 49 89
RIP  [&amp;lt;ffffffffa0b3751e&amp;gt;] ldiskfs_statfs+0x3e/0x1d0 [ldiskfs]
 RSP &amp;lt;ffff8800796cdac0&amp;gt;
CR2: 0000000000000550
---[ end trace 7e181821e1d31a6b ]---
Kernel panic - not syncing: Fatal exception
Pid: 8183, comm: llog_process_th Tainted: G      D    ----------------   2.6.32-220.17.1.el6_lustre.ge531dc4.x86_64 #1
Call Trace:
 [&amp;lt;ffffffff814eccea&amp;gt;] ? panic+0x78/0x143
 [&amp;lt;ffffffff814f0e84&amp;gt;] ? oops_end+0xe4/0x100
 [&amp;lt;ffffffff810423fb&amp;gt;] ? no_context+0xfb/0x260
 [&amp;lt;ffffffff81042685&amp;gt;] ? __bad_area_nosemaphore+0x125/0x1e0
 [&amp;lt;ffffffff81042753&amp;gt;] ? bad_area_nosemaphore+0x13/0x20
 [&amp;lt;ffffffff81042e0d&amp;gt;] ? __do_page_fault+0x31d/0x480
 [&amp;lt;ffffffffa0165271&amp;gt;] ? simple_mkdir+0xb1/0x4e0 [lvfs]
 [&amp;lt;ffffffff81096a8f&amp;gt;] ? up+0x2f/0x50
 [&amp;lt;ffffffffa0be0bb8&amp;gt;] ? osd_compat_seq_init+0x2d8/0x650 [osd_ldiskfs]
 [&amp;lt;ffffffffa01639c3&amp;gt;] ? pop_ctxt+0xf3/0x2f0 [lvfs]
 [&amp;lt;ffffffff814f2e3e&amp;gt;] ? do_page_fault+0x3e/0xa0
 [&amp;lt;ffffffff814f01f5&amp;gt;] ? page_fault+0x25/0x30
 [&amp;lt;ffffffffa0b3751e&amp;gt;] ? ldiskfs_statfs+0x3e/0x1d0 [ldiskfs]
 [&amp;lt;ffffffffa0bc6fd4&amp;gt;] ? osd_statfs+0xf4/0x360 [osd_ldiskfs]
 [&amp;lt;ffffffffa0c3a110&amp;gt;] ? ofd_statfs_internal+0xb0/0x2c0 [ofd]
 [&amp;lt;ffffffffa0c39017&amp;gt;] ? ofd_device_alloc+0x897/0x16a0 [ofd]
 [&amp;lt;ffffffffa052a347&amp;gt;] ? obd_setup+0x1d7/0x2f0 [obdclass]
 [&amp;lt;ffffffffa05152bb&amp;gt;] ? class_new_export+0x72b/0x960 [obdclass]
 [&amp;lt;ffffffffa052a668&amp;gt;] ? class_setup+0x208/0x890 [obdclass]
 [&amp;lt;ffffffffa05317cc&amp;gt;] ? class_process_config+0xbec/0x1c20 [obdclass]
 [&amp;lt;ffffffffa0398be0&amp;gt;] ? cfs_alloc+0x30/0x60 [libcfs]
 [&amp;lt;ffffffffa052bfd3&amp;gt;] ? lustre_cfg_new+0x353/0x7e0 [obdclass]
 [&amp;lt;ffffffffa05338ab&amp;gt;] ? class_config_llog_handler+0x9bb/0x1610 [obdclass]
 [&amp;lt;ffffffffa05025c0&amp;gt;] ? llog_lvfs_next_block+0x2d0/0x650 [obdclass]
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffffa04fd1c8&amp;gt;] ? llog_process_thread+0x888/0xd00 [obdclass]
 [&amp;lt;ffffffff814f44ec&amp;gt;] ? kprobe_flush_task+0xbc/0xe0
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c14a&amp;gt;] ? child_rip+0xa/0x20
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffffa04fc940&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This can be reproduced predictably by running &quot;USE_OFD=yes LOAD_MODULES_REMOTE=true sh llmount.sh&quot; twice on a four-node Toro cluster.  (I couldn&apos;t trigger it easily on a single-node setup.)  The osd_thread_info in this context was NULL.  Dumping lu_keys[] in ofd_stack_init() before lu_env_refill() showed:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
LustreError: [0]: ffffffffa0543ba0 4b (ffffffffa04f0600,ffffffffa04ef480,(null)) 0 47 &quot;obdclass&quot;@ffffffffa054fce0
LustreError: [1]: ffffffffa0546c60 b (ffffffffa04f65b0,ffffffffa04f64b0,ffffffffa04f6430) 1 47 &quot;obdclass&quot;@ffffffffa054fce0
LustreError: [2]: ffffffffa0545900 c3 (ffffffffa04f3800,ffffffffa04f3700,(null)) 2 40 &quot;obdclass&quot;@ffffffffa054fce0
LustreError: [3]: ffffffffa054fa20 1 (ffffffffa0510100,ffffffffa0510000,(null)) 3 7 &quot;obdclass&quot;@ffffffffa054fce0
LustreError: [4]: ffffffffa0546f20 8 (ffffffffa04f8950,ffffffffa04f7280,ffffffffa04f6e70) 4 12 &quot;obdclass&quot;@ffffffffa054fce0
LustreError: [5]: ffffffffa0713020 3 (ffffffffa06aa220,ffffffffa06a4f70,(null)) 5 40 &quot;ptlrpc&quot;@ffffffffa0714f60
LustreError: [6]: ffffffffa01e61c0 3 (ffffffffa01dc220,ffffffffa01dc0a0,(null)) 6 40 &quot;fld&quot;@ffffffffa01e8b80
LustreError: [7]: ffffffffa026b1e0 1 (ffffffffa0264280,ffffffffa02630a0,(null)) 7 7 &quot;fid&quot;@ffffffffa026dde0
LustreError: [8]: ffffffffa0840380 40000008 (ffffffffa0811830,ffffffffa08110c0,(null)) 8 2 &quot;osc&quot;@ffffffffa084a240
LustreError: [9]: ffffffffa08403c0 40000010 (ffffffffa08111d0,ffffffffa0810fb0,(null)) 9 1 &quot;osc&quot;@ffffffffa084a240
LustreError: [10]: ffffffffa08cd0c0 40000008 (ffffffffa089dc40,ffffffffa089c180,(null)) 10 2 &quot;lov&quot;@ffffffffa08d50a0
LustreError: [11]: ffffffffa08cd100 40000010 (ffffffffa089de60,ffffffffa089c070,(null)) 11 1 &quot;lov&quot;@ffffffffa08d50a0
LustreError: [12]: ffffffffa0a0bd40 40000001 (ffffffffa09f1ec0,ffffffffa09ee540,(null)) 12 2 &quot;mdd&quot;@ffffffffa0a0f940
LustreError: [13]: ffffffffa0a0cd00 40000010 (ffffffffa09f19e0,ffffffffa09ee3d0,(null)) 13 1 &quot;mdd&quot;@ffffffffa0a0f940
LustreError: [14]: ffffffffa0a0bcc0 40000010 (ffffffffa09f1b80,ffffffffa09ee200,(null)) 14 1 &quot;mdd&quot;@ffffffffa0a0f940
LustreError: [15]: ffffffffa0a0bd00 40000010 (ffffffffa09f1d20,ffffffffa09ee100,(null)) 15 1 &quot;mdd&quot;@ffffffffa0a0f940
LustreError: [16]: ffffffffa0a8f140 40000001 (ffffffffa0a4efd0,ffffffffa0a457f0,(null)) 16 2 &quot;mdt&quot;@ffffffffa0aa5920
LustreError: [17]: ffffffffa0b03c00 40000009 (ffffffffa0af7120,ffffffffa0af6e40,(null)) 17 2 &quot;cmm&quot;@ffffffffa0b06240
LustreError: [18]: ffffffffa0b00160 40000001 (ffffffffa0aee950,ffffffffa0aec030,(null)) 18 2 &quot;cmm&quot;@ffffffffa0b06240
LustreError: [19]: ffffffffa0b3e440 400000c3 (ffffffffa0b18f20,ffffffffa0b15d00,ffffffffa0b15500) 19 2 &quot;osd_ldiskfs&quot;@ffffffffa0b4b540
LustreError: [20]: ffffffffa0bac8a0 2 (ffffffffa0b87dc0,ffffffffa0b87080,ffffffffa0b87020) 20 3 &quot;ofd&quot;@ffffffffa0bb9ae0
LustreError: [21]: ffffffffa0c67000 40000008 (ffffffffa0c2fb70,ffffffffa0c2ced0,(null)) 21 2 &quot;lustre&quot;@ffffffffa0c6b420
LustreError: [22]: ffffffffa0c67040 40000010 (ffffffffa0c2fd70,ffffffffa0c2cdc0,(null)) 22 1 &quot;lustre&quot;@ffffffffa0c6b420
LustreError: [23]: ffffffffa0c68c00 40000008 (ffffffffa0c327a0,ffffffffa0c32390,(null)) 23 2 &quot;lustre&quot;@ffffffffa0c6b420
LustreError: [24]: ffffffffa0c68c40 40000010 (ffffffffa0c329a0,ffffffffa0c321e0,(null)) 24 1 &quot;lustre&quot;@ffffffffa0c6b420
BUG: unable to handle kernel NULL pointer dereference at 0000000000000550
[...]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Notice that osd-ldiskfs&apos;s key was LCT_QUIESCENT, a result of stopping all osd-ldiskfs devices.  The lu_env_refill() call should be made after the osd-ldiskfs device type has been &quot;started&quot;, during which the key in question will be &quot;revived&quot;.  A patch will follow.&lt;/p&gt;</description>
                <environment></environment>
        <key id="15140">LU-1603</key>
            <summary>NULL-pointer dereference in ldiskfs_statfs()</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="6" iconUrl="https://jira.whamcloud.com/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="liwei">Li Wei</assignee>
                                    <reporter username="liwei">Li Wei</reporter>
                        <labels>
                    </labels>
                <created>Thu, 5 Jul 2012 11:14:41 +0000</created>
                <updated>Wed, 11 Jul 2012 21:08:33 +0000</updated>
                            <resolved>Wed, 11 Jul 2012 21:08:21 +0000</resolved>
                                    <version>Lustre 2.3.0</version>
                                    <fixVersion>Lustre 2.3.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>1</watches>
                                                                            <comments>
                            <comment id="41585" author="liwei" created="Sun, 8 Jul 2012 22:43:12 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/3353&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/3353&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="41724" author="liwei" created="Wed, 11 Jul 2012 21:08:21 +0000"  >&lt;p&gt;The patch has landed to master.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzv69z:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4551</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>