<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:08:27 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-582] 1.8&lt;-&gt;2.1 interop: sanity test_132: FAIL: some glimpse RPC is expected</title>
                <link>https://jira.whamcloud.com/browse/LU-582</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After the upgrading, sanity test 132 failed on Lustre 2.0.66.0 as follows:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== sanity test 132: som avoids glimpse rpc == 03:17:56 (1312885076)
====&amp;gt; SOM is disabled, 0 glimpse RPC occured
 sanity test_132: @@@@@@ FAIL: some glimpse RPC is expected 
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Please refer to the Maloo report for more logs: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/a570d34e-c278-11e0-8bdf-52540025f9af&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/a570d34e-c278-11e0-8bdf-52540025f9af&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a known issue: &lt;a href=&quot;https://bugzilla.lustre.org/show_bug.cgi?id=23339&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;bug 23339&lt;/a&gt;.&lt;/p&gt;</description>
                <environment>&lt;br/&gt;
Old Lustre Version: 1.8.6-wc1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-b1_8/100/&quot;&gt;http://newbuild.whamcloud.com/job/lustre-b1_8/100/&lt;/a&gt;&lt;br/&gt;
&lt;br/&gt;
New Lustre Version: 2.0.66.0&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://newbuild.whamcloud.com/job/lustre-master/228/&quot;&gt;http://newbuild.whamcloud.com/job/lustre-master/228/&lt;/a&gt;&lt;br/&gt;
&lt;br/&gt;
Clean upgrade (Lustre servers and clients were upgraded all at once) from Lustre 1.8.6-wc1 to Lustre 2.0.66.0 under the following configuration:&lt;br/&gt;
&lt;br/&gt;
OSS1: RHEL5/x86_64&lt;br/&gt;
OSS2: RHEL5/x86_64&lt;br/&gt;
MDS: RHEL5/x86_64&lt;br/&gt;
Client1: RHEL6/x86_64&lt;br/&gt;
Client2: RHEL5/x86_64&lt;br/&gt;
</environment>
        <key id="11469">LU-582</key>
            <summary>1.8&lt;-&gt;2.1 interop: sanity test_132: FAIL: some glimpse RPC is expected</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="hongchao.zhang">Hongchao Zhang</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                    </labels>
                <created>Tue, 9 Aug 2011 07:40:17 +0000</created>
                <updated>Tue, 9 Apr 2013 03:02:45 +0000</updated>
                            <resolved>Wed, 20 Feb 2013 09:57:58 +0000</resolved>
                                    <version>Lustre 2.0.0</version>
                    <version>Lustre 2.1.2</version>
                    <version>Lustre 2.1.3</version>
                    <version>Lustre 1.8.8</version>
                    <version>Lustre 1.8.6</version>
                                    <fixVersion>Lustre 2.4.0</fixVersion>
                    <fixVersion>Lustre 2.1.5</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="19537" author="simmonsja" created="Tue, 23 Aug 2011 13:01:05 +0000"  >&lt;p&gt;This is also failing with Lustre 2.1 clients.&lt;/p&gt;

&lt;p&gt;== sanity test 132: som avoids glimpse rpc =========================================================== 12:59:06 (1314118746)&lt;br/&gt;
====&amp;gt; SOM is disabled, 0 glimpse RPC occured&lt;br/&gt;
 sanity test_132: @@@@@@ FAIL: some glimpse RPC is expected&lt;br/&gt;
Dumping lctl log to /tmp/test_logs//1314118691/sanity.test_132.*.1314118756.log&lt;/p&gt;</comment>
                            <comment id="19539" author="simmonsja" created="Tue, 23 Aug 2011 13:24:25 +0000"  >&lt;p&gt;Uploaded logs for the failure from one of my clients to your ftp site. It&apos;s in uploads/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-582&quot; title=&quot;1.8&amp;lt;-&amp;gt;2.1 interop: sanity test_132: FAIL: some glimpse RPC is expected&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-582&quot;&gt;&lt;del&gt;LU-582&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="27180" author="simmonsja" created="Mon, 23 Jan 2012 09:27:44 +0000"  >&lt;p&gt;Sanity test 132 no longer fails. You can close this ticket.&lt;/p&gt;</comment>
                            <comment id="27183" author="pjones" created="Mon, 23 Jan 2012 09:50:12 +0000"  >&lt;p&gt;Thanks James!&lt;/p&gt;</comment>
                            <comment id="28140" author="simmonsja" created="Wed, 8 Feb 2012 08:23:31 +0000"  >&lt;p&gt;Peter can you reopen this ticket.&lt;/p&gt;</comment>
                            <comment id="28143" author="pjones" created="Wed, 8 Feb 2012 09:10:23 +0000"  >&lt;p&gt;James&lt;/p&gt;

&lt;p&gt;Can you please confirm that the 1.8.x client that you are testing with is at least version 1.8.7-wc1 (not the Oracle 1.8.7)?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="28150" author="simmonsja" created="Wed, 8 Feb 2012 09:26:47 +0000"  >&lt;p&gt;Originally this ticket reported an interop problem, but I noted that it also fails with 2.X clients. It stopped failing, but now it fails again with Lustre 2.X clients. Perhaps this ticket should be retitled.&lt;/p&gt;</comment>
                            <comment id="28152" author="pjones" created="Wed, 8 Feb 2012 09:34:05 +0000"  >&lt;p&gt;James&lt;/p&gt;

&lt;p&gt;I think that a new ticket would be simpler/clearer.&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="38965" author="adilger" created="Thu, 17 May 2012 03:03:42 +0000"  >&lt;p&gt;This is failing 16% of the time currently, with 2.6.18 clients vs. 2.6.32 servers:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/35a3f450-9faa-11e1-b416-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/35a3f450-9faa-11e1-b416-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="40188" author="yujian" created="Thu, 7 Jun 2012 10:09:34 +0000"  >&lt;p&gt;Old Lustre Version: 1.8.8-wc1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b1_8/198/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b1_8/198/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;New Lustre Version: 2.1.2&lt;br/&gt;
Lustre Tag: v2_1_2_RC2&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/86/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/86/&lt;/a&gt;&lt;br/&gt;
Network: TCP (1GigE)&lt;/p&gt;

&lt;p&gt;After clean upgrading with the following configuration, sanity test 132 still failed:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;OSS1: RHEL5/x86_64
OSS2: RHEL5/x86_64
MDS: RHEL5/x86_64
Client1: RHEL6/x86_64
Client2: RHEL6/x86_64
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;client-1: == sanity test 132: som avoids glimpse rpc == 04:43:30 (1339069410)
client-1: ====&amp;gt; SOM is disabled, 0 glimpse RPC occured
client-1:  sanity test_132: @@@@@@ FAIL: some glimpse RPC is expected
client-1: Dumping lctl log to /home/yujian/test_logs/2012-06-07/043300/sanity.test_132.*.1339069414.log
client-1: FAIL 132 (5s)
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/c250c2d6-b0d3-11e1-99ce-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/c250c2d6-b0d3-11e1-99ce-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="40940" author="jaylan" created="Wed, 20 Jun 2012 13:39:18 +0000"  >&lt;p&gt;Is there a new ticket for 2.x client?&lt;br/&gt;
I ran into this problem yesterday. Both server and client are 2.1.2.&lt;/p&gt;</comment>
                            <comment id="43228" author="yujian" created="Tue, 14 Aug 2012 21:17:42 +0000"  >&lt;p&gt;After clean upgrading from Lustre 1.8.8-wc1 to 2.1.3 RC1, the issue occurred again:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/57d3cf2c-e673-11e1-afac-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/57d3cf2c-e673-11e1-afac-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="43239" author="hongchao.zhang" created="Wed, 15 Aug 2012 04:23:44 +0000"  >&lt;p&gt;There is indeed a glimpse request during the call to &apos;stat $DIR/$tfile&apos;.&lt;/p&gt;</comment>

&lt;p&gt;in client-7&lt;/p&gt;

&lt;p&gt;00000001:00010000:2.0:1344970176.071156:0:12030:0:(glimpse.c:120:cl_glimpse_lock()) Glimpsing inode &lt;span class=&quot;error&quot;&gt;&amp;#91;0x20b2735b0:0x1:0x0&amp;#93;&lt;/span&gt;&lt;br/&gt;
00000020:00010000:2.0:1344970176.071177:0:12030:0:(cl_lock.c:143:cl_lock_trace0()) enqueue lock: ffff8802e6a12938@(2 ffff8802decb8ae0 1 0 0 1 1 0)(ffff880331731d80/0/1) at cl_enqueue_try():1189&lt;br/&gt;
00000020:00010000:2.0:1344970176.071182:0:12030:0:(cl_lock.c:143:cl_lock_trace0()) enclosure lock: ffff8802e6a12af8@(1 (null) 0 0 0 1 1 0)(ffff880331731d80/1/1) at cl_lock_enclosure():1685&lt;br/&gt;
00000020:00010000:2.0:1344970176.071186:0:12030:0:(cl_lock.c:143:cl_lock_trace0()) enclosure lock: ffff8802e6a12938@(2 ffff8802decb8ae0 1 1 0 1 1 0)(ffff880331731d80/0/2) at cl_lock_enclosure():1685&lt;br/&gt;
00000020:00010000:2.0:1344970176.071190:0:12030:0:(cl_lock.c:143:cl_lock_trace0()) enqueue lock: ffff8802e6a12af8@(2 ffff8802decb8ae0 1 0 0 1 1 0)(ffff880331731aa0/1/0) at cl_enqueue_try():1189&lt;/p&gt;


&lt;p&gt;in fat-intel-2&lt;/p&gt;

&lt;p&gt;00010000:00000001:13.0:1344970176.078486:0:7796:0:(ldlm_lockd.c:1065:ldlm_handle_enqueue0()) Process entered&lt;br/&gt;
00010000:00010000:13.0:1344970176.078487:0:7796:0:(ldlm_lockd.c:1067:ldlm_handle_enqueue0()) ### server-side enqueue handler START&lt;br/&gt;
00010000:00000001:13.0:1344970176.078489:0:7796:0:(ldlm_lockd.c:1419:ldlm_request_cancel()) Process entered&lt;br/&gt;
00010000:00000001:13.0:1344970176.078489:0:7796:0:(ldlm_lockd.c:1423:ldlm_request_cancel()) Process leaving (rc=0 : 0 : 0)&lt;br/&gt;
00010000:00000001:13.0:1344970176.078491:0:7796:0:(ldlm_lock.c:1245:ldlm_lock_create()) Process entered&lt;br/&gt;
00010000:00000010:13.0:1344970176.078494:0:7796:0:(ldlm_resource.c:983:ldlm_resource_new()) slab-alloced &apos;(res)&apos;: 320 at ffff8103058d1940.&lt;br/&gt;
00002000:00000001:13.0:1344970176.078496:0:7796:0:(filter_lvb.c:76:filter_lvbo_init()) Process entered&lt;br/&gt;
00002000:00000010:13.0:1344970176.078498:0:7796:0:(filter_lvb.c:84:filter_lvbo_init()) kmalloced &apos;lvb&apos;: 40 at ffff810320fdd880.&lt;br/&gt;
00002000:00000002:13.0:1344970176.078500:0:7796:0:(filter_lvb.c:96:filter_lvbo_init()) lustre-OST0000: filter_lvbo_init(o_seq=0, o_id=70152)&lt;br/&gt;
00002000:00000001:13.0:1344970176.078502:0:7796:0:(filter.c:1474:filter_fid2dentry()) Process entered&lt;br/&gt;
00002000:00000002:13.0:1344970176.078504:0:7796:0:(filter.c:1499:filter_fid2dentry()) looking up object O/d8/70152&lt;br/&gt;
00002000:00000002:13.0:1344970176.078507:0:7796:0:(filter.c:1518:filter_fid2dentry()) got child objid 70152: ffff810320f62660, count = 1&lt;br/&gt;
00002000:00000001:13.0:1344970176.078509:0:7796:0:(filter.c:1522:filter_fid2dentry()) Process leaving (rc=18446604449170728544 : -139624538823072 : ffff810320f62660)&lt;br/&gt;
00002000:00010000:13.0:1344970176.078511:0:7796:0:(filter_lvb.c:116:filter_lvbo_init()) res: 0x11208 initial lvb size: 0x200, mtime: 0x502a9dbf, blocks: 0x8&lt;br/&gt;
00002000:00000001:13.0:1344970176.078514:0:7796:0:(filter_lvb.c:120:filter_lvbo_init()) Process leaving&lt;br/&gt;
00002000:00000002:13.0:1344970176.078514:0:7796:0:(filter.c:221:f_dput()) putting 70152: ffff810320f62660, count = 0&lt;br/&gt;
00010000:00000001:13.0:1344970176.078516:0:7796:0:(ldlm_lock.c:413:ldlm_lock_new()) Process entered&lt;br/&gt;
00010000:00000010:13.0:1344970176.078518:0:7796:0:(ldlm_lock.c:418:ldlm_lock_new()) slab-alloced &apos;(lock)&apos;: 560 at ffff810305f58240.&lt;br/&gt;
00000020:00000001:13.0:1344970176.078520:0:7796:0:(lustre_handles.c:88:class_handle_hash()) Process entered&lt;br/&gt;
00000020:00000040:13.0:1344970176.078521:0:7796:0:(lustre_handles.c:122:class_handle_hash()) added object ffff810305f58240 with handle 0xfa6adb28e6af43de to hash&lt;br/&gt;
00000020:00000001:13.0:1344970176.078524:0:7796:0:(lustre_handles.c:123:class_handle_hash()) Process leaving&lt;br/&gt;
00010000:00000001:13.0:1344970176.078525:0:7796:0:(ldlm_lock.c:455:ldlm_lock_new()) Process leaving (rc=18446604448717701696 : -139624991849920 : ffff810305f58240)&lt;br/&gt;
00010000:00000001:13.0:1344970176.078528:0:7796:0:(ldlm_extent.c:806:ldlm_interval_alloc()) Process entered&lt;br/&gt;
00010000:00000010:13.0:1344970176.078530:0:7796:0:(ldlm_extent.c:809:ldlm_interval_alloc()) slab-alloced &apos;(node)&apos;: 72 at ffff81030d7ceac0.&lt;br/&gt;
00010000:00000001:13.0:1344970176.078532:0:7796:0:(ldlm_extent.c:815:ldlm_interval_alloc()) Process leaving (rc=18446604448844016320 : -139624865535296 : ffff81030d7ceac0)&lt;br/&gt;
00010000:00000001:13.0:1344970176.078536:0:7796:0:(ldlm_lock.c:1284:ldlm_lock_create()) Process leaving (rc=18446604448717701696 : -139624991849920 : ffff810305f58240)&lt;br/&gt;
00010000:00010000:13.0:1344970176.078539:0:7796:0:(ldlm_lockd.c:1152:ldlm_handle_enqueue0()) ### server-side enqueue handler, new lock created ns: filter-lustre-OST0000_UUID lock: ffff810305f58240/0xfa6adb28e6af43de lrc: 2/0,0 mode: -&lt;del&gt;/PR res: 70152/0 rrc: 1 type: EXT &lt;span class=&quot;error&quot;&gt;&amp;#91;0-&amp;gt;0&amp;#93;&lt;/span&gt; (req 0&lt;/del&gt;&amp;gt;0) flags: 0x0 remote: 0xeb07c50dcc271870 expref: -99 pid: 7796 timeout 0&lt;br/&gt;
00000020:00000040:13.0:1344970176.078546:0:7796:0:(genops.c:1064:__class_export_add_lock_ref()) lock = ffff810305f58240, export = ffff81030c5bbc00, refs = 1&lt;br/&gt;
00010000:00000040:13.0:1344970176.078548:0:7796:0:(ldlm_lockd.c:1164:ldlm_handle_enqueue0()) lock GETting export ffff81030c5bbc00 : new locks_count 16&lt;br/&gt;
00000020:00000040:13.0:1344970176.078550:0:7796:0:(genops.c:782:class_export_get()) GETting export ffff81030c5bbc00 : new refcount 22&lt;br/&gt;
00010000:00000001:13.0:1344970176.078553:0:7796:0:(ldlm_lock.c:1302:ldlm_lock_enqueue()) Process entered&lt;br/&gt;
00002000:00000001:13.0:1344970176.078554:0:7796:0:(filter.c:1696:filter_intent_policy()) Process entered&lt;/p&gt;

&lt;p&gt;filter_intent_policy() is called!&lt;/p&gt;</comment>
                            <comment id="43240" author="hongchao.zhang" created="Wed, 15 Aug 2012 04:29:29 +0000"  >&lt;p&gt;The client and OST logs are attached.&lt;/p&gt;</comment>
                            <comment id="43324" author="hongchao.zhang" created="Thu, 16 Aug 2012 07:50:00 +0000"  >&lt;p&gt;Yujian helped to reproduce the bug after upgrading Lustre, and the ldlm_glimpse_enqueue count did indeed increase.&lt;/p&gt;

&lt;p&gt;the debug patch is tracked at &lt;a href=&quot;http://review.whamcloud.com/#change,3692&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3692&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hi Yujian, could you please help to test with the debug patch? thanks!!&lt;/p&gt;</comment>
                            <comment id="43390" author="yujian" created="Thu, 16 Aug 2012 21:41:47 +0000"  >&lt;blockquote&gt;&lt;p&gt;the debug patch is tracked at &lt;a href=&quot;http://review.whamcloud.com/#change,3692&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3692&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;The real patch is in &lt;a href=&quot;http://review.whamcloud.com/#change,3693&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3693&lt;/a&gt;. Testing is ongoing.&lt;/p&gt;</comment>
                            <comment id="43396" author="hongchao.zhang" created="Fri, 17 Aug 2012 04:14:26 +0000"  >&lt;p&gt;There is a bug in get_ost_param:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;get_ost_param() {
        local token=$1
        local gl_sum=0
        for node in $(osts_nodes); do
                gl=$(do_node $node &quot;$LCTL get_param -n ost.OSS.ost.stats&quot; | awk &apos;/&apos;$token&apos;/ {print $2}&apos; | head -n 1)
                [ x$gl = x&quot;&quot; ] &amp;amp;&amp;amp; gl=0
                gl_sum=$((gl_sum + gl))
        done
        echo $gl    &amp;lt;--- here should be &quot;echo $gl_sum&quot;!
}&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Previously there was only one OSS in autotest, so the problem was hidden; but in the upgrade test there are 2 OSSes,&lt;br/&gt;
so the issue shows up. The updated patch will be attached soon!&lt;/p&gt;</comment>
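<!--
A minimal runnable sketch of the corrected get_ost_param from the comment above, with the
one-line fix applied (echo the accumulated $gl_sum rather than the last node's $gl). The
helpers osts_nodes, do_node and $LCTL belong to the Lustre test framework and are assumed
to be provided by the caller; their exact output format here is an assumption.

```shell
# Corrected sketch of get_ost_param. The original echoed $gl, which holds only
# the counter from the last OSS visited; with two OSSes this undercounts the
# glimpse RPCs and triggers the spurious test_132 failure.
get_ost_param() {
        local token=$1
        local gl_sum=0
        for node in $(osts_nodes); do
                # first matching counter value for $token on this OSS
                gl=$(do_node $node "$LCTL get_param -n ost.OSS.ost.stats" |
                        awk '/'$token'/ {print $2}' | head -n 1)
                gl=${gl:-0}                    # default to 0 if the stat is absent
                gl_sum=$((gl_sum + gl))        # accumulate across all OSSes
        done
        echo $gl_sum                           # was: echo $gl
}
```

With two OSSes each reporting a counter of 3, the function now returns 6 instead of 3.
-->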
                            <comment id="43398" author="hongchao.zhang" created="Fri, 17 Aug 2012 04:27:24 +0000"  >&lt;p&gt;the patch has been updated&lt;/p&gt;</comment>
                            <comment id="49823" author="jaylan" created="Mon, 31 Dec 2012 18:18:13 +0000"  >&lt;p&gt;I saw this problem between 2.1.3 server and 2.3.0 client. Let me know if you want the test_logs.&lt;/p&gt;</comment>
                            <comment id="52018" author="schamp" created="Thu, 7 Feb 2013 21:27:31 +0000"  >&lt;p&gt;I&apos;ve been using &lt;a href=&quot;http://review.whamcloud.com/#change,3693&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3693&lt;/a&gt; for&lt;br/&gt;
several months, and recommend it for b2_1.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="11777" name="sanity.test_132.debug_log.client-7.1344970176.log" size="205026" author="hongchao.zhang" created="Wed, 15 Aug 2012 04:29:29 +0000"/>
                            <attachment id="11778" name="sanity.test_132.debug_log.fat-intel-2.1344970176.log" size="281616" author="hongchao.zhang" created="Wed, 15 Aug 2012 04:29:29 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                    <customfield id="customfield_10020" key="com.atlassian.jira.plugin.system.customfieldtypes:float">
                        <customfieldname>Bugzilla ID</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>23339.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzv4br:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4234</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>