<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:25:07 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2429] easy to find bad client</title>
                <link>https://jira.whamcloud.com/browse/LU-2429</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;We have a network problem at a customer site: the clients are still running, but the network is unstable. In that situation, the Lustre servers sometimes refuse new connections because they are still waiting for some active RPCs to finish.&lt;/p&gt;

&lt;p&gt;e.g.)&lt;br/&gt;
Nov  6 10:51:00 oss212 kernel: Lustre: 21280:0:(ldlm_lib.c:874:target_handle_connect()) LARGE01-OST004c: refuse reconnection from 6279e611-9d6b-3d6a-bab4-e76cf925282f@560@gni to 0xffff81043d807a00; still busy with 1 active RPCs&lt;br/&gt;
Nov  6 10:51:16 oss212 kernel: LustreError: 21337:0:(ldlm_lib.c:1919:target_send_reply_msg()) @@@ processing error (-107)  req@ffff8106a3c46400 x1415646605273905/t0 o400-&amp;gt;&amp;lt;?&amp;gt;@&amp;lt;?&amp;gt;:0/0 lens 192/0 e 0 to 0 dl 1352166761 ref 1 fl Interpret:H/0/0 rc -107/0&lt;/p&gt;

&lt;p&gt;In some cases we can find the bad client and reboot it, or evict it from the servers and reconnect, and then the situation recovers.&lt;/p&gt;

&lt;p&gt;However, in most cases it&apos;s hard to find the bad client, and the error messages keep coming. If we can&apos;t find the bad client, clients can&apos;t reconnect until all clients are rebooted, which is not a good idea.&lt;/p&gt;

&lt;p&gt;Is there a good way to easily find the bad client when the above logs appear?&lt;/p&gt;</description>
                <environment>lustre 1.8.8 RHEL5</environment>
        <key id="16850">LU-2429</key>
            <summary>easy to find bad client</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="bfaccini">Bruno Faccini</assignee>
                                    <reporter username="ihara">Shuichi Ihara</reporter>
                        <labels>
                    </labels>
                <created>Tue, 4 Dec 2012 22:37:49 +0000</created>
                <updated>Sat, 23 Feb 2013 01:21:59 +0000</updated>
                            <resolved>Sat, 23 Feb 2013 01:21:59 +0000</resolved>
                                    <version>Lustre 1.8.x (1.8.0 - 1.8.5)</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="48803" author="johann" created="Wed, 5 Dec 2012 08:38:42 +0000"  >&lt;p&gt;Ihara, the message actually prints the nid (i.e. 560@gni). Normally, such RPCs should be aborted after some time and the client should then be able to reconnect. Is it the case?&lt;/p&gt;</comment>
                            <comment id="48804" author="ihara" created="Wed, 5 Dec 2012 09:12:30 +0000"  >&lt;p&gt;Hi Johann,&lt;br/&gt;
So, does &quot;still busy with 1 active RPCs&quot; mean that an RPC from the reconnecting client still remains?&lt;br/&gt;
Yes, it is normally aborted, but sometimes it doesn&apos;t abort and the client can never reconnect.&lt;br/&gt;
I wonder if we can force an abort and skip waiting for this processing.&lt;/p&gt;</comment>
                            <comment id="48805" author="bfaccini" created="Wed, 5 Dec 2012 09:15:44 +0000"  >&lt;p&gt;You can also monitor the log/msgs directly on all Clients and /proc/fs/lustre/osc/*/state, it will give you the picture from Clients side.&lt;/p&gt;

&lt;p&gt;But don&apos;t forget that if you suspect network/interconnect problems, you better have to 1st troubleshoot it using appropriated tools.&lt;/p&gt;</comment>
                            <comment id="48807" author="ihara" created="Wed, 5 Dec 2012 09:44:18 +0000"  >&lt;p&gt;Bruno, yes, understood. In this case the network problem caused the situation, but the trouble is that we have sometimes seen this problem even when no network problem occurred. I want to clear the still-active RPC and evict that client manually; otherwise we need to wait a very long time before it can reconnect.&lt;/p&gt;</comment>
                            <comment id="48812" author="bfaccini" created="Wed, 5 Dec 2012 11:43:12 +0000"  >&lt;p&gt;BTW, are there any msgs on the Client side (let&apos;s say 560@gni, for example, from your Server logs) around the same time ??&lt;/p&gt;

&lt;p&gt;Also, is there any way to get some debug analysis (live &quot;crash&quot; tool session, Alt+SysRq, ...) on the client side that may help find out whether some thread is stuck ???&lt;/p&gt;</comment>
                            <comment id="48817" author="johann" created="Wed, 5 Dec 2012 14:02:37 +0000"  >&lt;blockquote&gt;
&lt;p&gt;So, does &quot;still busy with 1 active RPCs&quot; mean that an RPC from the reconnecting client still remains?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;It means that there is still a service thread processing a request from the previous connection which prevents the client from reconnecting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;yes, it&apos;s aborted normally,&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;ok&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;but sometimes it doesn&apos;t abort and the client can never reconnect.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;That&apos;s not normal. In this case, you should see watchdogs on the server side, and the stack trace would help us understand where the service thread is stuck.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I wonder if we can do force abort and skip waiting for this processing.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;I&apos;m afraid that we can&apos;t &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.whamcloud.com/images/icons/emoticons/sad.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="48818" author="adilger" created="Wed, 5 Dec 2012 14:05:26 +0000"  >&lt;p&gt;As Johann mentions, the client NID is in this message. It is also possible to grep for the client UUID (6279e611-9d6b-3d6a-bab4-e76cf925282f in this case) in /proc/fs/lustre/obdfilter/LARGE01-OST004c/exports/*/uuid.&lt;/p&gt;

&lt;p&gt;Note that there is a bug open for the &quot;still busy&quot; problem (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-793&quot; title=&quot;Reconnections should not be refused when there is a request in progress from this client.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-793&quot;&gt;&lt;del&gt;LU-793&lt;/del&gt;&lt;/a&gt;), and I believe Oleg had a patch for this (&lt;a href=&quot;http://review.whamcloud.com/1616&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/1616&lt;/a&gt;). The last time I spoke with him about this he wasn&apos;t quite happy with the patch, but maybe someone else could look into fixing and landing it?  I think this is a common and long-standing problem and it would be nice to fix it if possible. &lt;/p&gt;</comment>
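The lookup described above can be sketched as a shell fragment. The sample console line is copied from the issue description, and the /proc path is the one quoted in the comment; this is an untested sketch, not verified against a live Lustre system.

```shell
# Extract the client UUID and NID from a "refuse reconnection" console line.
# The uuid@nid pair appears between "from " and " to 0x..." in the message.
line='Lustre: 21280:0:(ldlm_lib.c:874:target_handle_connect()) LARGE01-OST004c: refuse reconnection from 6279e611-9d6b-3d6a-bab4-e76cf925282f@560@gni to 0xffff81043d807a00; still busy with 1 active RPCs'

pair=$(printf '%s\n' "$line" | sed -n 's/.*refuse reconnection from \([^ ]*\) to .*/\1/p')
uuid=${pair%%@*}     # part before the first @ is the client UUID
nid=${pair#*@}       # the rest is the client NID (it may itself contain @)

echo "uuid=$uuid"
echo "nid=$nid"
# The UUID can then be matched against
#   /proc/fs/lustre/obdfilter/LARGE01-OST004c/exports/*/uuid
# on the OSS (path taken from the comment above).
```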
                            <comment id="48884" author="bfaccini" created="Thu, 6 Dec 2012 17:31:20 +0000"  >
&lt;p&gt;We need to make progress here and at least try to understand the real conditions under which your problem occurs.&lt;br/&gt;
So can you at least provide us the syslogs, covering the problem time-frame, of one affected Client and the associated Server/OSS ??&lt;br/&gt;
Thanks.&lt;/p&gt;</comment>
                            <comment id="48896" author="johann" created="Fri, 7 Dec 2012 02:21:48 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Note that there is a bug open for the &quot;still busy&quot; problem (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-793&quot; title=&quot;Reconnections should not be refused when there is a request in progress from this client.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-793&quot;&gt;&lt;del&gt;LU-793&lt;/del&gt;&lt;/a&gt;), and I believe Oleg had a patch for this (&lt;a href=&quot;http://review.whamcloud.com/1616&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/1616&lt;/a&gt;).&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;While I agree that we should consider removing this protection, I think we first need to understand how a service thread can be stuck forever as reported by Ihara.&lt;/p&gt;

&lt;p&gt;Ihara, there should definitely be some watchdogs printed on the console. It would be very helpful if you could provide us with those logs. Otherwise, there is not much we can do, I&apos;m afraid.&lt;/p&gt;</comment>
                            <comment id="48963" author="ihara" created="Mon, 10 Dec 2012 07:02:19 +0000"  >&lt;p&gt;I have seen the &quot;still busy with x active RPCs&quot; problem a couple of times, and posted it here in general.&lt;br/&gt;
But just now we hit the same problem at one of our customers. I think there should be a root cause, but we want to find which client is stacking up the RPCs. Can we find the bad client from the following logs on the OSS?&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# grep &quot;still busy&quot; 20121210_t2s007037.log 
Dec 10 19:15:28 t2s007037 kernel: Lustre: 16504:0:(ldlm_lib.c:874:target_handle_connect()) gscr0-OST0000: refuse reconnection from 448ded8b-6867-b4e3-b095-24a1194a0311@192.168.20.53@tcp1 to 0xffff81060f828e00; still busy with 4 active RPCs
Dec 10 19:15:28 t2s007037 kernel: Lustre: 20370:0:(ldlm_lib.c:874:target_handle_connect()) gscr0-OST0000: refuse reconnection from 98aabcbb-79bf-0dd8-3a0e-f869054aa095@192.168.19.31@tcp1 to 0xffff81028b12ba00; still busy with 4 active RPCs
Dec 10 19:15:28 t2s007037 kernel: Lustre: 5499:0:(ldlm_lib.c:874:target_handle_connect()) gscr0-OST0000: refuse reconnection from 1e0c4bbc-b2a9-1268-afaf-811307e85c34@192.168.19.80@tcp1 to 0xffff81006d77b600; still busy with 3 active RPCs
Dec 10 19:15:31 t2s007037 kernel: Lustre: 5534:0:(ldlm_lib.c:874:target_handle_connect()) gscr0-OST0000: refuse reconnection from 8e966893-d9e9-3508-a406-c2132095af5f@10.1.10.84@o2ib to 0xffff81018682a200; still busy with 8 active RPCs
Dec 10 19:15:33 t2s007037 kernel: Lustre: 16481:0:(ldlm_lib.c:874:target_handle_connect()) gscr0-OST0000: refuse reconnection from c2b698fa-a4d9-ff0c-6dc5-298134339777@192.168.19.50@tcp1 to 0xffff8100d2fb0400; still busy with 8 active RPCs
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
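The grep above can be taken one step further to tally refusals per client NID. A minimal sketch follows; the sample lines are copied from the log excerpt above, and the awk field positions are assumed from the message format shown there.

```shell
# Tally "still busy" refusals per client NID from OSS console logs.
# Sample lines are copied from the excerpt above; on a real OSS you would
# feed in the output of:  grep "still busy" /var/log/messages
log='Dec 10 19:15:28 t2s007037 kernel: Lustre: 16504:0:(ldlm_lib.c:874:target_handle_connect()) gscr0-OST0000: refuse reconnection from 448ded8b-6867-b4e3-b095-24a1194a0311@192.168.20.53@tcp1 to 0xffff81060f828e00; still busy with 4 active RPCs
Dec 10 19:15:31 t2s007037 kernel: Lustre: 5534:0:(ldlm_lib.c:874:target_handle_connect()) gscr0-OST0000: refuse reconnection from 8e966893-d9e9-3508-a406-c2132095af5f@10.1.10.84@o2ib to 0xffff81018682a200; still busy with 8 active RPCs'

summary=$(printf '%s\n' "$log" | awk '
  /still busy/ {
    # the field after "from" is uuid@nid; drop the uuid, keep the NID
    for (i = 1; i != NF; i++) if ($i == "from") pair = $(i + 1)
    sub(/^[^@]*@/, "", pair)
    rpcs[pair] = $(NF - 2)          # the count just before "active RPCs"
  }
  END { for (n in rpcs) print n, rpcs[n] }' | sort)
printf '%s\n' "$summary"
```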
                            <comment id="48964" author="ihara" created="Mon, 10 Dec 2012 07:04:22 +0000"  >&lt;p&gt;Full OSS&apos;s messages attached.&lt;/p&gt;</comment>
                            <comment id="48965" author="johann" created="Mon, 10 Dec 2012 08:09:16 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Full OSS&apos;s messages attached.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Ihara, threads are stuck waiting for commit. Any chance to collect the output of a sysrq-t (or even better a crash dump)?&lt;/p&gt;</comment>
                            <comment id="48966" author="ihara" created="Mon, 10 Dec 2012 08:22:35 +0000"  >&lt;p&gt;this is OSS&apos;s sysrq-t output that we got right now.&lt;/p&gt;</comment>
                            <comment id="48972" author="johann" created="Mon, 10 Dec 2012 09:02:04 +0000"  >&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt; jbd2/dm-0-8   D ffff8101d86aa860     0 16413    247         16414 16412 (L-TLB)
  ffff8102dc6edb90 0000000000000046 0000000000000282 0000000000000008
  ffff8101b3c483c0 000000000000000a ffff81060d6a1860 ffff8101d86aa860
  0017bf15f0c46f86 0000000000000be8 ffff81060d6a1a48 0000000a0afb26b8
 Call Trace:
  [&amp;lt;ffffffff8006ece7&amp;gt;] do_gettimeofday+0x40/0x90
  [&amp;lt;ffffffff8005a40e&amp;gt;] getnstimeofday+0x10/0x29
  [&amp;lt;ffffffff80028bd3&amp;gt;] sync_page+0x0/0x42
  [&amp;lt;ffffffff800637de&amp;gt;] io_schedule+0x3f/0x67
  [&amp;lt;ffffffff80028c11&amp;gt;] sync_page+0x3e/0x42
  [&amp;lt;ffffffff80063922&amp;gt;] __wait_on_bit_lock+0x36/0x66
  [&amp;lt;ffffffff8003f9ab&amp;gt;] __lock_page+0x5e/0x64
  [&amp;lt;ffffffff800a34e5&amp;gt;] wake_bit_function+0x0/0x23
  [&amp;lt;ffffffff80047c5b&amp;gt;] pagevec_lookup_tag+0x1a/0x21
  [&amp;lt;ffffffff8001d035&amp;gt;] mpage_writepages+0x14f/0x37d
  [&amp;lt;ffffffff88a87bc0&amp;gt;] :ldiskfs:ldiskfs_writepage+0x0/0x3a0
  [&amp;lt;ffffffff800a34c0&amp;gt;] autoremove_wake_function+0x9/0x2e
  [&amp;lt;ffffffff8008d2a9&amp;gt;] __wake_up_common+0x3e/0x68
  [&amp;lt;ffffffff88a622b4&amp;gt;] :jbd2:jbd2_journal_commit_transaction+0x36c/0x1120
  [&amp;lt;ffffffff8004ad55&amp;gt;] try_to_del_timer_sync+0x7f/0x88
  [&amp;lt;ffffffff88a6623e&amp;gt;] :jbd2:kjournald2+0x9a/0x1ec
  [&amp;lt;ffffffff800a34b7&amp;gt;] autoremove_wake_function+0x0/0x2e
  [&amp;lt;ffffffff88a661a4&amp;gt;] :jbd2:kjournald2+0x0/0x1ec
  [&amp;lt;ffffffff800a329f&amp;gt;] keventd_create_kthread+0x0/0xc4
  [&amp;lt;ffffffff80032654&amp;gt;] kthread+0xfe/0x132
  [&amp;lt;ffffffff8005dfb1&amp;gt;] child_rip+0xa/0x11
  [&amp;lt;ffffffff800a329f&amp;gt;] keventd_create_kthread+0x0/0xc4
  [&amp;lt;ffffffff80032556&amp;gt;] kthread+0x0/0x132
  [&amp;lt;ffffffff8005dfa7&amp;gt;] child_rip+0x0/0x11
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;hm, this reminds me of &lt;a href=&quot;https://bugzilla.lustre.org/show_bug.cgi?id=21406#c75&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://bugzilla.lustre.org/show_bug.cgi?id=21406#c75&lt;/a&gt; which can happen if we somehow leave dirty pages in the OSS page cache (which shouldn&apos;t be the case) and the jbd2 thread tries to flush them.&lt;/p&gt;</comment>
                            <comment id="48983" author="ihara" created="Mon, 10 Dec 2012 10:34:39 +0000"  >&lt;p&gt;This might be the same problem? &lt;a href=&quot;http://jira.whamcloud.com/browse/LU-1219&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;http://jira.whamcloud.com/browse/LU-1219&lt;/a&gt;&lt;br/&gt;
Also, might data=writeback help to prevent this kind of problem?&lt;/p&gt;</comment>
                            <comment id="48984" author="johann" created="Mon, 10 Dec 2012 10:56:47 +0000"  >&lt;blockquote&gt;
&lt;p&gt;This might be the same problem? &lt;a href=&quot;http://jira.whamcloud.com/browse/LU-1219&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;http://jira.whamcloud.com/browse/LU-1219&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Yes, it looks similar.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Also, might data=writeback help to prevent this kind of problem?&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Yes, although i really would like to understand how we can end up with dirty pages in the inode mapping ...&lt;/p&gt;</comment>
                            <comment id="48994" author="ihara" created="Mon, 10 Dec 2012 12:53:31 +0000"  >&lt;p&gt;Johann,&lt;br/&gt;
With data=writeback on a standard ext3/4 filesystem there is no guarantee of ordering (sometimes the journal may commit before the data is flushed). So, is data=writeback safe with Lustre, with no re-ordering even when writeback mode is enabled on the OST/MDT?&lt;br/&gt;
&lt;a href=&quot;https://bugzilla.lustre.org/show_bug.cgi?id=21406&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://bugzilla.lustre.org/show_bug.cgi?id=21406&lt;/a&gt;.. why isn&apos;t this data=writeback mode the default option in Lustre even today?&lt;/p&gt;</comment>
                            <comment id="49001" author="bfaccini" created="Mon, 10 Dec 2012 13:41:34 +0000"  >&lt;p&gt;BTW, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1219&quot; title=&quot;The connection is refused due to still busy with 1 active RPCs&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1219&quot;&gt;&lt;del&gt;LU-1219&lt;/del&gt;&lt;/a&gt; is still waiting for the Alt+SysRq+T logs you provided there!!&lt;/p&gt;

&lt;p&gt;It is strange that the SysRq output only shows 11 running task stacks for your 12-core OSS !! But this may come from the fact (an option?) that the swapper/idle task stacks are not dumped ...&lt;/p&gt;

&lt;p&gt;I agree with you Johann, task/pid 16413 is the one blocking all others, but don&apos;t you think there could be some issue on the disks/storage/back-end side ???&lt;/p&gt;
</comment>
                            <comment id="49004" author="johann" created="Mon, 10 Dec 2012 14:48:09 +0000"  >&lt;p&gt;Ihara, it is safe to use data=writeback since lustre already pushes data to disk before committing, so you already have the ordering guarantee.&lt;/p&gt;

&lt;p&gt;Bruno, the stack trace shows that the jbd2 thread in charge of commit is waiting for some dirty pages to be flushed, which should never happen on the OSS. The issue is that we wait for commit with the pages locked, so there is a deadlock between the service threads and the jbd2 thread. Therefore, we should try to understand how we can end up with dirty pages in the page cache.&lt;/p&gt;</comment>
                            <comment id="49035" author="bfaccini" created="Tue, 11 Dec 2012 05:21:21 +0000"  >&lt;p&gt;Ihara, do you think you can take an OSS crash-dump ?? Because even if &quot;data=writeback&quot; seems to be a good work-around candidate and finally works, we need to understand how we end up in a situation where the jbd2 thread finds dirty pages to flush when it should not !!&lt;/p&gt;</comment>
                            <comment id="49038" author="ihara" created="Tue, 11 Dec 2012 09:01:12 +0000"  >&lt;p&gt;Bruno, &lt;br/&gt;
Unfortunately, we couldn&apos;t get a crash dump.. you need the same jbd2 stack situation, right? If so, hopefully we can get one when the same problem happens again soon.&lt;br/&gt;
Any other ideas we can test before deciding to change to data=writeback?&lt;/p&gt;</comment>
                            <comment id="49105" author="bfaccini" created="Wed, 12 Dec 2012 04:04:15 +0000"  >&lt;p&gt;No, I am afraid that only &quot;data=writeback&quot; can be thought of as a work-around for the problem you encounter. But again, it can only be used as a work-around, and we need to understand your problem&apos;s root cause, because even running with it you may later end up in another hung situation ...&lt;/p&gt;</comment>
                            <comment id="49750" author="bfaccini" created="Fri, 28 Dec 2012 09:52:41 +0000"  >&lt;p&gt;Hello Ihara,&lt;br/&gt;
Any news on this issue ??&lt;br/&gt;
Have you been able to apply the work-around and/or get a new crash-dump ??&lt;br/&gt;
Bruno.&lt;/p&gt;</comment>
                            <comment id="50905" author="bfaccini" created="Mon, 21 Jan 2013 12:10:04 +0000"  >&lt;p&gt;Ihara, Any news ?? Please can you provide us with a status for this ticket ??&lt;/p&gt;</comment>
                            <comment id="51022" author="ihara" created="Wed, 23 Jan 2013 09:17:42 +0000"  >&lt;p&gt;Hi Bruno,&lt;/p&gt;

&lt;p&gt;Sorry for the delayed updates on this. We haven&apos;t seen the same problem, nor have we been able to reproduce it, since the final crash we had...&lt;/p&gt;</comment>
                            <comment id="51345" author="bfaccini" created="Mon, 28 Jan 2013 12:55:58 +0000"  >&lt;p&gt;So do you agree to close this ticket ??&lt;/p&gt;</comment>
                            <comment id="52919" author="ihara" created="Fri, 22 Feb 2013 21:37:35 +0000"  >&lt;p&gt;If we hit the same problem again at this site, I will reopen it with a new ticket, so for the moment please close this ticket.&lt;/p&gt;</comment>
                            <comment id="52920" author="pjones" created="Sat, 23 Feb 2013 01:21:59 +0000"  >&lt;p&gt;ok thanks Ihara&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="12244">LU-793</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="12083" name="20121210_t2s007037.log" size="247210" author="ihara" created="Mon, 10 Dec 2012 07:04:22 +0000"/>
                            <attachment id="12084" name="20121210_t2s007037_sysrq_t.log.tgz" size="153151" author="ihara" created="Mon, 10 Dec 2012 08:22:35 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvd9j:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>5754</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>