<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:32:21 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3261] Threads stuck in osc_enter_cache when testing LU-1669</title>
                <link>https://jira.whamcloud.com/browse/LU-3261</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;I&apos;m running a slightly modified version of master trying to test &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1669&quot; title=&quot;lli-&amp;gt;lli_write_mutex (single shared file performance)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1669&quot;&gt;&lt;del&gt;LU-1669&lt;/del&gt;&lt;/a&gt;. During my testing I&apos;ve run into a situation which looks to have the exact same symptoms as &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2576&quot; title=&quot;Hangs in osc_enter_cache due to dirty pages not being flushed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2576&quot;&gt;&lt;del&gt;LU-2576&lt;/del&gt;&lt;/a&gt;. While running a single-shared-file FIO job using 16 tasks, all of the tasks have gotten hung under osc_enter_cache. The backtraces for these stuck threads look like so:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 44453  TASK: ffff88043544f500  CPU: 13  COMMAND: &quot;fio&quot;
 #0 [ffff88022a1634d8] schedule at ffffffff8150dab2
 #1 [ffff88022a1635a0] cfs_waitq_wait+0xe at ffffffffa09f26fe [libcfs]
 #2 [ffff88022a1635b0] osc_enter_cache+0x85a at ffffffffa0efcd9a [osc]
 #3 [ffff88022a1636d0] osc_queue_async_io+0x1044 at ffffffffa0f04e54 [osc]
 #4 [ffff88022a163860] osc_page_cache_add+0xc9 at ffffffffa0ee97d9 [osc]
 #5 [ffff88022a1638a0] cl_page_cache_add+0x7f at ffffffffa0acf2bf [obdclass]
 #6 [ffff88022a163940] lov_page_cache_add+0x85 at ffffffffa0f7a305 [lov]
 #7 [ffff88022a163970] cl_page_cache_add+0x7f at ffffffffa0acf2bf [obdclass]
 #8 [ffff88022a163a10] vvp_io_commit_write+0x3e5 at ffffffffa074b335 [lustre]
 #9 [ffff88022a163a90] cl_io_commit_write+0xad at ffffffffa0ade6cd [obdclass]
#10 [ffff88022a163ae0] ll_commit_write+0xee at ffffffffa072046e [lustre]
#11 [ffff88022a163b40] ll_write_end+0x30 at ffffffffa0738760 [lustre]
#12 [ffff88022a163b70] generic_file_buffered_write+0x18a at ffffffff8111a21a
#13 [ffff88022a163c40] __generic_file_aio_write+0x260 at ffffffff8111bc20
#14 [ffff88022a163d00] vvp_io_write_start+0xcf at ffffffffa074d4ef [lustre]
#15 [ffff88022a163d50] cl_io_start+0x6a at ffffffffa0adb13a [obdclass]
#16 [ffff88022a163d80] cl_io_loop+0xb4 at ffffffffa0adf874 [obdclass]
#17 [ffff88022a163db0] ll_file_io_generic+0x29b at ffffffffa06f4f2b [lustre]
#18 [ffff88022a163e20] ll_file_aio_write+0x142 at ffffffffa06f5f92 [lustre]
#19 [ffff88022a163e80] ll_file_write+0x16c at ffffffffa06f627c [lustre]
#20 [ffff88022a163ef0] vfs_write+0xb8 at ffffffff81180b38
#21 [ffff88022a163f30] sys_write+0x51 at ffffffff81181431
#22 [ffff88022a163f80] system_call_fastpath+0x16 at ffffffff8100b072
    RIP: 00002aaaaaf5f4ed  RSP: 00007fffffff9ff0  RFLAGS: 00000206
    RAX: 0000000000000001  RBX: ffffffff8100b072  RCX: 000000000000012b
    RDX: 0000000000100000  RSI: 00002aaab73fe010  RDI: 0000000000000000
    RBP: 00002aaab4c5f000   R8: 0000000000000000   R9: 00007fffffffa098
    R10: 0000000000000000  R11: 0000000000000293  R12: 00002aaab4c63be0
    R13: 00000000006f9248  R14: 00000000006f9190  R15: 0000000000100000
    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As a result, I haven&apos;t seen any IO from the FIO job in the past couple of hours. Looking at the dirty stats on the system, I have:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# hype202 /root &amp;gt; grep . /proc/fs/lustre/osc/*/*dirty*
/proc/fs/lustre/osc/lcz-OST0000-osc-ffff880839905800/cur_dirty_bytes:0
/proc/fs/lustre/osc/lcz-OST0000-osc-ffff880839905800/max_dirty_mb:32
/proc/fs/lustre/osc/lcz-OST0001-osc-ffff880839905800/cur_dirty_bytes:33554432
/proc/fs/lustre/osc/lcz-OST0001-osc-ffff880839905800/max_dirty_mb:32
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# hype202 /root &amp;gt; grep . /proc/sys/vm/*dirty*
/proc/sys/vm/dirty_background_bytes:0
/proc/sys/vm/dirty_background_ratio:10
/proc/sys/vm/dirty_bytes:0
/proc/sys/vm/dirty_expire_centisecs:3000
/proc/sys/vm/dirty_ratio:20
/proc/sys/vm/dirty_writeback_centisecs:500
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The exact line number it&apos;s stuck at is here:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;(gdb) l *osc_enter_cache+0x85a
0x2edca is in osc_enter_cache (/var/dumps/surya1/dev/lustre/lustre/osc/osc_cache.c:1541).
1536                    osc_io_unplug_async(env, cli, NULL);
1537
1538                    CDEBUG(D_CACHE, &quot;%s: sleeping for cache space @ %p for %p\n&quot;,
1539                           cli-&amp;gt;cl_import-&amp;gt;imp_obd-&amp;gt;obd_name, &amp;amp;ocw, oap);
1540
1541                    rc = l_wait_event(ocw.ocw_waitq, ocw_granted(cli, &amp;amp;ocw), &amp;amp;lwi);
1542
1543                    client_obd_list_lock(&amp;amp;cli-&amp;gt;cl_loi_list_lock);
1544
1545                    /* l_wait_event is interrupted by signal */
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the interest of full disclosure, the client I&apos;m running is this commit from master (i.e. not a &quot;chaos&quot; derivative):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;commit 77aa3f2e38e94d1ac83afa2f7ef18a1ca9de92af
Author: Nathaniel Clark &amp;lt;nathaniel.l.clark@intel.com&amp;gt;
Date:   Wed Apr 24 10:10:25 2013 -0400

    LU-2831 tests: Sync MDS data also when doing sync_all_data
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Plus this change:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;diff --git i/lustre/llite/file.c w/lustre/llite/file.c
index e61b696..68fe7b9 100644
--- i/lustre/llite/file.c
+++ w/lustre/llite/file.c
@@ -868,7 +868,6 @@ restart:
         if (cl_io_rw_init(env, io, iot, *ppos, count) == 0) {
                 struct vvp_io *vio = vvp_env_io(env);
                 struct ccc_io *cio = ccc_env_io(env);
-                int write_mutex_locked = 0;
 
                 cio-&amp;gt;cui_fd  = LUSTRE_FPRIVATE(file);
                 vio-&amp;gt;cui_io_subtype = args-&amp;gt;via_io_subtype;
@@ -883,10 +882,7 @@ restart:
 #endif
                         if ((iot == CIT_WRITE) &amp;amp;&amp;amp;
                             !(cio-&amp;gt;cui_fd-&amp;gt;fd_flags &amp;amp; LL_FILE_GROUP_LOCKED)) {
-                               if (mutex_lock_interruptible(&amp;amp;lli-&amp;gt;
-                                                               lli_write_mutex))
-                                        GOTO(out, result = -ERESTARTSYS);
-                                write_mutex_locked = 1;
+                               ;
                         } else if (iot == CIT_READ) {
                                down_read(&amp;amp;lli-&amp;gt;lli_trunc_sem);
                         }
@@ -904,9 +900,7 @@ restart:
                         LBUG();
                 }
                 result = cl_io_loop(env, io);
-                if (write_mutex_locked)
-                       mutex_unlock(&amp;amp;lli-&amp;gt;lli_write_mutex);
-                else if (args-&amp;gt;via_io_subtype == IO_NORMAL &amp;amp;&amp;amp; iot == CIT_READ)
+                if (args-&amp;gt;via_io_subtype == IO_NORMAL &amp;amp;&amp;amp; iot == CIT_READ)
                        up_read(&amp;amp;lli-&amp;gt;lli_trunc_sem);
         } else {
                 /* cl_io_rw_init() handled IO */
diff --git i/lustre/llite/vvp_io.c w/lustre/llite/vvp_io.c
index db53842..746c161 100644
--- i/lustre/llite/vvp_io.c
+++ w/lustre/llite/vvp_io.c
@@ -459,8 +459,8 @@ static ssize_t lustre_generic_file_read(struct file *file,
 static ssize_t lustre_generic_file_write(struct file *file,
                                         struct ccc_io *vio, loff_t *ppos)
 {
-        return generic_file_aio_write(vio-&amp;gt;cui_iocb, vio-&amp;gt;cui_iov,
-                                      vio-&amp;gt;cui_nrsegs, *ppos);
+        return __generic_file_aio_write(vio-&amp;gt;cui_iocb, vio-&amp;gt;cui_iov,
+                                      vio-&amp;gt;cui_nrsegs, &amp;amp;vio-&amp;gt;cui_iocb-&amp;gt;ki_pos);
 }
 #endif
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Since I &lt;em&gt;am&lt;/em&gt; running a slightly modified version of master, I decided to make a completely new issue rather than reopening &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2576&quot; title=&quot;Hangs in osc_enter_cache due to dirty pages not being flushed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2576&quot;&gt;&lt;del&gt;LU-2576&lt;/del&gt;&lt;/a&gt;. But I&apos;m skeptical that this local modification would have caused the hang. I&apos;m inclined to think this is a regression in master (but would be more than happy to be proven wrong).&lt;/p&gt;

&lt;p&gt;Also, the test I am running is this:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;$ cat tmp.fio
; ssf.fio, single shared file
[global]
directory=/p/lcraterz/surya1/LU-1669/fio
filename=ssf
size=4g
offset_increment=4g
blocksize=1m
direct=0
ioengine=sync
numjobs=16
group_reporting=1
fallocate=none

[write]
rw=write
fsync_on_close=1

$ fio tmp.fio
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="18673">LU-3261</key>
            <summary>Threads stuck in osc_enter_cache when testing LU-1669</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="prakash">Prakash Surya</reporter>
                        <labels>
                            <label>llnl</label>
                    </labels>
                <created>Thu, 2 May 2013 00:08:12 +0000</created>
                <updated>Thu, 26 Feb 2015 22:01:14 +0000</updated>
                            <resolved>Fri, 14 Nov 2014 18:44:17 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.7.0</fixVersion>
                                        <due></due>
                            <votes>1</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="57485" author="prakash" created="Thu, 2 May 2013 00:17:22 +0000"  >&lt;p&gt;Hmm, &lt;tt&gt;echo 3 &amp;gt; /proc/sys/vm/drop_caches&lt;/tt&gt; doesn&apos;t clear it up like it did with &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2576&quot; title=&quot;Hangs in osc_enter_cache due to dirty pages not being flushed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2576&quot;&gt;&lt;del&gt;LU-2576&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="57489" author="jay" created="Thu, 2 May 2013 04:17:31 +0000"  >&lt;p&gt;Niu should take a look at this bug.&lt;/p&gt;</comment>
                            <comment id="57492" author="niu" created="Thu, 2 May 2013 06:34:52 +0000"  >&lt;p&gt;Prakash, &quot;echo 3 &amp;gt; /proc/sys/vm/drop_caches&quot; didn&apos;t trigger write RPCs at all? Do you see anything abnormal in the log? Thanks.&lt;/p&gt;</comment>
                            <comment id="57540" author="prakash" created="Thu, 2 May 2013 16:47:17 +0000"  >&lt;p&gt;No, drop_caches did &lt;b&gt;not&lt;/b&gt; trigger the write RPCs as far as I can tell. I don&apos;t see anything of interest in the lustre logs or console either. The only messages I see occurring now in the lustre log are:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;00020000:00200000:9.0F:1367513100.155021:0:47294:0:(lov_io.c:415:lov_io_iter_init()) shrink: 0 [34359742464, 18446744073709551615)
00020000:00200000:9.0:1367513100.155028:0:47294:0:(lov_io.c:415:lov_io_iter_init()) shrink: 1 [34359738368, 18446744073709551615)
00000008:00000020:9.0:1367513100.155030:0:47294:0:(osc_cache.c:2913:osc_cache_writeback_range()) obj ffff88041fb0ade8 ready 1|-|+ wr 8192|-|- rd 0|- cache page out.
00000008:00000020:9.0:1367513100.155032:0:47294:0:(osc_cache.c:2913:osc_cache_writeback_range()) obj ffff88041fb0acc8 ready 0|-|- wr 0|-|- rd 0|- cache page out.
00000080:00200000:9.0:1367513100.155042:0:47294:0:(vvp_io.c:128:vvp_io_fini()) ignore/verify layout 1/0, layout version 0.
00020000:00200000:9.0:1367513105.155028:0:47294:0:(lov_io.c:415:lov_io_iter_init()) shrink: 0 [34359742464, 18446744073709551615)
00020000:00200000:9.0:1367513105.155035:0:47294:0:(lov_io.c:415:lov_io_iter_init()) shrink: 1 [34359738368, 18446744073709551615)
00000008:00000020:9.0:1367513105.155037:0:47294:0:(osc_cache.c:2913:osc_cache_writeback_range()) obj ffff88041fb0ade8 ready 1|-|+ wr 8192|-|- rd 0|- cache page out.
00000008:00000020:9.0:1367513105.155039:0:47294:0:(osc_cache.c:2913:osc_cache_writeback_range()) obj ffff88041fb0acc8 ready 0|-|- wr 0|-|- rd 0|- cache page out.
00000080:00200000:9.0:1367513105.155049:0:47294:0:(vvp_io.c:128:vvp_io_fini()) ignore/verify layout 1/0, layout version 0.
00000100:00100000:1.0F:1367513107.752047:0:44097:0:(client.c:1441:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_3:f743e3ca-f5f4-8139-6469-a6c36db7ecb7:44097:1433871909999208:10.1.1.211@o2ib9:400
00000100:00100000:11.0F:1367513107.752052:0:44098:0:(client.c:1441:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_4:f743e3ca-f5f4-8139-6469-a6c36db7ecb7:44098:1433871909999212:10.1.1.1@o2ib9:400
00000100:00100000:4.0F:1367513107.752071:0:44099:0:(client.c:1441:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_5:f743e3ca-f5f4-8139-6469-a6c36db7ecb7:44099:1433871909999216:10.1.1.2@o2ib9:400
00000100:00100000:12.0F:1367513107.752082:0:44096:0:(client.c:1441:ptlrpc_send_new_req()) Sending RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_2:265c93ca-bba0-ff90-7573-ba7d6f1e6717:44096:1433871909999204:10.1.1.211@o2ib9:400
00000100:00100000:1.0:1367513107.752689:0:44097:0:(client.c:1805:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_3:f743e3ca-f5f4-8139-6469-a6c36db7ecb7:44097:1433871909999208:10.1.1.211@o2ib9:400
00000100:00100000:0.0F:1367513107.752689:0:44099:0:(client.c:1805:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_5:f743e3ca-f5f4-8139-6469-a6c36db7ecb7:44099:1433871909999216:10.1.1.2@o2ib9:400
00000100:00100000:8.0F:1367513107.752733:0:44096:0:(client.c:1805:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_2:265c93ca-bba0-ff90-7573-ba7d6f1e6717:44096:1433871909999204:10.1.1.211@o2ib9:400
00000100:00100000:11.0:1367513107.752735:0:44098:0:(client.c:1805:ptlrpc_check_set()) Completed RPC pname:cluuid:pid:xid:nid:opc ptlrpcd_4:f743e3ca-f5f4-8139-6469-a6c36db7ecb7:44098:1433871909999212:10.1.1.1@o2ib9:400
00020000:00200000:9.0:1367513110.155030:0:47294:0:(lov_io.c:415:lov_io_iter_init()) shrink: 0 [34359742464, 18446744073709551615)
00020000:00200000:9.0:1367513110.155037:0:47294:0:(lov_io.c:415:lov_io_iter_init()) shrink: 1 [34359738368, 18446744073709551615)
00000008:00000020:9.0:1367513110.155039:0:47294:0:(osc_cache.c:2913:osc_cache_writeback_range()) obj ffff88041fb0ade8 ready 1|-|+ wr 8192|-|- rd 0|- cache page out.
00000008:00000020:9.0:1367513110.155040:0:47294:0:(osc_cache.c:2913:osc_cache_writeback_range()) obj ffff88041fb0acc8 ready 0|-|- wr 0|-|- rd 0|- cache page out.
00000080:00200000:9.0:1367513110.155052:0:47294:0:(vvp_io.c:128:vvp_io_fini()) ignore/verify layout 1/0, layout version 0.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="57545" author="green" created="Thu, 2 May 2013 17:53:02 +0000"  >&lt;p&gt;So just to be totally clear, can you try the same master snapshot without your patch?&lt;br/&gt;
Not only do you remove the semaphore; Andreas also thinks that the way you switch to __generic_file_aio_write might be confusing to the VFS, as it might not expect multiple threads on a single client to be able to write to the same file, since it depends on the mutex there.&lt;/p&gt;</comment>
                            <comment id="57556" author="prakash" created="Thu, 2 May 2013 18:28:55 +0000"  >&lt;p&gt;Sure, I&apos;ll try to reproduce without modifications to the master branch. FWIW, I did not hit this issue when running the stock master tag earlier, but I&apos;ll re-run that test a few times to see if I just got lucky or if it is in fact the VFS usage change causing this.&lt;/p&gt;</comment>
                            <comment id="66677" author="lflis" created="Sat, 14 Sep 2013 12:03:39 +0000"  >&lt;p&gt;Dear All,&lt;/p&gt;

&lt;p&gt;We just hit this issue in 2.4.0 and 2.4.1-RC2 on ~100 (of 1200) clients.&lt;br/&gt;
I just updated &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2576&quot; title=&quot;Hangs in osc_enter_cache due to dirty pages not being flushed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2576&quot;&gt;&lt;del&gt;LU-2576&lt;/del&gt;&lt;/a&gt; with the details.&lt;br/&gt;
As we still keep some clients hanging we can provide more debugging information on this.&lt;/p&gt;

&lt;p&gt;Best Regards&lt;br/&gt;
&amp;#8211;&lt;br/&gt;
Lukasz Flis&lt;br/&gt;
ACC Cyfronet&lt;/p&gt;</comment>
                            <comment id="99204" author="adilger" created="Fri, 14 Nov 2014 18:44:17 +0000"  >&lt;p&gt;This was fixed as part of the original &lt;a href=&quot;http://review.whamcloud.com/6320&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6320&lt;/a&gt; patch landing.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10120">
                    <name>Blocker</name>
                                            <outwardlinks description="is blocking">
                                        <issuelink>
            <issuekey id="15301">LU-1669</issuekey>
        </issuelink>
                            </outwardlinks>
            </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="17090">LU-2576</issuekey>
        </issuelink>
                            </outwardlinks>
            </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvpyf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8072</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>