<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:26:08 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2548] After upgrade from 1.8.8 to 2.4 hit qmt_entry.c:281:qmt_glb_write()) $$$ failed to update global index, rc:-5</title>
                <link>https://jira.whamcloud.com/browse/LU-2548</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After a clean upgrade of the server and client from 1.8.8 to 2.4, I enabled quota with the following steps:&lt;br/&gt;
1. before setup Lustre: tunefs.lustre --quota mdsdev/ostdev&lt;br/&gt;
2. after setup Lustre: lctl conf_param lustre.quota.mdt=ug&lt;br/&gt;
                       lctl conf_param lustre.quota.ost=ug&lt;/p&gt;
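
&lt;p&gt;Quota enforcement can then be verified and a limit set for the test user roughly as below (a sketch; the user name and limit values here just mirror what appears in the logs further down):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# confirm the quota slaves report enforcement as enabled
lctl get_param osd-*.*.quota_slave.info
# set a 5MB block hard limit for the test user, then read it back
lfs setquota -u quota_usr -b 0 -B 5120 -i 0 -I 0 /mnt/lustre
lfs quota -u quota_usr /mnt/lustre
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;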

&lt;p&gt;Then running iozone hit this error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;upgrade-downgrade : @@@@@@ FAIL: iozone did not fail with EDQUOT&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I found these errors in the MDS dmesg:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: DEBUG MARKER: ===== Pass ==================================================================
Lustre: DEBUG MARKER: ===== Check Lustre quotas usage/limits ======================================
Lustre: DEBUG MARKER: ===== Verify the data =======================================================
Lustre: ctl-lustre-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400):0:mdt
LDISKFS-fs warning (device sdb1): ldiskfs_block_to_path: block 1852143205 &amp;gt; max in inode 24537
LustreError: 7867:0:(qmt_entry.c:281:qmt_glb_write()) $$$ failed to update global index, rc:-5 qmt:lustre-QMT0000 pool:0-md id:60001 enforced:1 hard:5120 soft:0 granted:1024 time:0 qunit:1024 edquot:0 may_rel:0 revoke:4297684387
LustreError: 10848:0:(qsd_handler.c:344:qsd_req_completion()) $$$ DQACQ failed with -5, flags:0x1 qsd:lustre-MDT0000 qtype:usr id:60001 enforced:1 granted:3 pending:0 waiting:2 req:1 usage:3 qunit:0 qtune:0 edquot:0
Lustre: DEBUG MARKER: upgrade-downgrade : @@@@@@ FAIL: iozone did not fail with EDQUOT
LDISKFS-fs warning (device sdb1): ldiskfs_block_to_path:
LDISKFS-fs warning (device sdb1): ldiskfs_block_to_path: block 1852143205 &amp;gt; max in inode 24537
LustreError: 10877:0:(qmt_entry.c:281:qmt_glb_write()) $$$ failed to update global index, rc:-5 qmt:lustre-QMT0000 pool:0-md id:60001 enforced:1 hard:5120 soft:0 granted:1026 time:0 qunit:1024 edquot:0 may_rel:0 revoke:4297684387
LustreError: 7577:0:(qsd_handler.c:344:qsd_req_completion()) $$$ DQACQ failed with -5, flags:0x2 qsd:lustre-MDT0000 qtype:usr id:60001 enforced:1 granted:3 pending:0 waiting:0 req:1 usage:2 qunit:1024 qtune:512 edquot:0
LDISKFS-fs warning (device sdb1): ldiskfs_block_to_path: block 1852143205 &amp;gt; max in inode 24537
LDISKFS-fs warning (device sdb1): ldiskfs_block_to_path: block 1852143205 &amp;gt; max in inode 24537
block 1768711539 &amp;gt; max in inode 24538&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>before upgrade: client and server are running 1.8.8&lt;br/&gt;
after upgrade: client and server are running lustre-master build#1141</environment>
        <key id="17051">LU-2548</key>
            <summary>After upgrade from 1.8.8 to 2.4 hit qmt_entry.c:281:qmt_glb_write()) $$$ failed to update global index, rc:-5</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="sarah">Sarah Liu</reporter>
                        <labels>
                            <label>HB</label>
                    </labels>
                <created>Fri, 28 Dec 2012 19:38:06 +0000</created>
                <updated>Wed, 27 Feb 2013 03:05:17 +0000</updated>
                            <resolved>Wed, 27 Feb 2013 03:05:17 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.4.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="49808" author="niu" created="Mon, 31 Dec 2012 00:46:28 +0000"  >&lt;p&gt;Hi, Sarah. Could you enable TRACE to collect a debug log for this failure? Thanks.&lt;/p&gt;</comment>
                            <comment id="49836" author="sarah" created="Wed, 2 Jan 2013 12:51:53 +0000"  >&lt;p&gt;will keep you updated.&lt;/p&gt;</comment>
                            <comment id="49862" author="sarah" created="Thu, 3 Jan 2013 01:11:05 +0000"  >&lt;p&gt;debug and dmesg logs from MDS&lt;/p&gt;</comment>
                            <comment id="49935" author="niu" created="Fri, 4 Jan 2013 02:11:07 +0000"  >&lt;p&gt;This time the error messages are different from the first time. Sarah, is it reproduceable on 2.1 to 2.4 upgrading? How often did it happen on 1.8 -&amp;gt; 2.4 upgrading? Thanks.&lt;/p&gt;</comment>
                            <comment id="49976" author="sarah" created="Fri, 4 Jan 2013 16:43:32 +0000"  >&lt;p&gt;Niu, this time I upgraded to the latest tag-2.3.58, that&apos;s a different build from the first time.&lt;/p&gt;

&lt;p&gt;I will keep you updated when I finish upgrading from 2.1 to 2.4 and try 1.8 to 2.4 again, to see if it happens every time.&lt;/p&gt;</comment>
                            <comment id="50084" author="sarah" created="Mon, 7 Jan 2013 19:03:20 +0000"  >&lt;p&gt;Niu, I tried upgrading 1.8-&amp;gt;2.4 again and it can be reproduced. &lt;/p&gt;</comment>
                            <comment id="50085" author="sarah" created="Mon, 7 Jan 2013 19:05:54 +0000"  >&lt;p&gt;MDS dmesg and debug logs of 1.8-&amp;gt;2.4&lt;/p&gt;</comment>
                            <comment id="50106" author="sarah" created="Tue, 8 Jan 2013 04:22:17 +0000"  >&lt;p&gt;upgrade from 2.1.4 to 2.4 hit &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2587&quot; title=&quot;Quota error after upgrade from 2.1.4 to 2.4&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2587&quot;&gt;&lt;del&gt;LU-2587&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="51609" author="niu" created="Fri, 1 Feb 2013 04:41:46 +0000"  >&lt;p&gt;I found something really weird in the dmesg (1.8 upgrade to 2.4):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;Lustre: lustre-MDT0000: Migrate inode quota from old admin quota file(admin_quotafile_v2.usr) to &lt;span class=&quot;code-keyword&quot;&gt;new&lt;/span&gt; IAM quota index([0x200000006:0x10000:0x0]).
Lustre: lustre-MDT0000: Migrate inode quota from old admin quota file(admin_quotafile_v2.grp) to &lt;span class=&quot;code-keyword&quot;&gt;new&lt;/span&gt; IAM quota index([0x200000006:0x1010000:0x0]).
Lustre: 31664:0:(mdt_handler.c:5261:mdt_process_config()) For interoperability, skip &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; mdt.group_upcall. It is obsolete.
Lustre: 31664:0:(mdt_handler.c:5261:mdt_process_config()) For interoperability, skip &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; mdt.quota_type. It is obsolete.
Lustre: lustre-MDT0000: Temporarily refusing client connection from 0@lo
LustreError: 11-0: an error occurred &lt;span class=&quot;code-keyword&quot;&gt;while&lt;/span&gt; communicating with 0@lo. The mds_connect operation failed with -11
Lustre: lustre-MDT0000: Migrate inode quota from old admin quota file(admin_quotafile_v2.usr) to &lt;span class=&quot;code-keyword&quot;&gt;new&lt;/span&gt; IAM quota index([0x200000003:0x8:0x0]).
Lustre: Skipped 2 previous similar messages
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;It says the MDT is trying to migrate inode user quota into fid &lt;span class=&quot;error&quot;&gt;&amp;#91;0x200000003:0x8:0x0&amp;#93;&lt;/span&gt;, which isn&apos;t a quota global index fid. I can&apos;t see from the code why this could happen, and I can&apos;t reproduce it locally either.&lt;/p&gt;

&lt;p&gt;Sarah, could you show me how you reproduced it? If it&apos;s reproducible, could you capture the log with DQUOTA &amp;amp; D_TRACE enabled for the MDT startup procedure only? (start the MDT on the old 1.8 device) The startup log was truncated in your attached logs. Thanks in advance.&lt;/p&gt;
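
&lt;p&gt;A minimal sketch of capturing such a startup log (the device and file paths are assumptions, not from this test):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# on the MDS, before starting the target
lctl set_param debug=+quota          # add D_QUOTA to the debug mask
lctl set_param debug=+trace          # add D_TRACE as well
lctl set_param debug_mb=512          # enlarge the kernel debug buffer
lctl clear                           # drop stale entries
mount -t lustre /dev/sdb1 /mnt/mds   # start the MDT on the old 1.8 device
lctl dk /tmp/mdt-startup-debug.log   # dump the log once startup completes
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>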
                            <comment id="51685" author="niu" created="Mon, 4 Feb 2013 01:30:48 +0000"  >&lt;p&gt;I see, those message should come from the global index copy of the quota slave on MDT, migration should not apply to those global index copy. The failure of &quot;qmt_glb_write()) $$$ failed to update global index, rc:-5&quot; could probably caused by the race of migration with usual global index copy update. I&apos;ll post a pach to fix this.&lt;/p&gt;</comment>
                            <comment id="51687" author="niu" created="Mon, 4 Feb 2013 02:15:32 +0000"  >&lt;p&gt;don&apos;t apply migration on global index copy: &lt;a href=&quot;http://review.whamcloud.com/5259&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5259&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Actually, I&apos;m still not quite sure why qmt_glb_write() failed, but at least we shouldn&apos;t do migration on the global index copy.&lt;/p&gt;</comment>
                            <comment id="51861" author="niu" created="Wed, 6 Feb 2013 09:19:49 +0000"  >&lt;p&gt;I can reproduce the original problem in my local environment now, seems like something wrong in IAM when upgrading from 1.8 to 2.4 (2.1 -&amp;gt; 2.4 is fine), will look into it closer.&lt;/p&gt;</comment>
                            <comment id="51945" author="niu" created="Thu, 7 Feb 2013 02:05:51 +0000"  >&lt;p&gt;My test shows the global index truncation before the migration will lead to the IAM error, to not block other 1.8 upgrading tests, I&apos;ve posted a temporary fix (skip the index truncation during migration) for it. &lt;a href=&quot;http://review.whamcloud.com/5292&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5292&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sarah, could you check whether the above patch works for you too? Thanks.&lt;/p&gt;
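
&lt;p&gt;For local testing, the change can be fetched from Gerrit roughly like this (a sketch; the repository path and patch set number are assumptions):&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;# standard Gerrit change ref: refs/changes/&amp;lt;last two digits&amp;gt;/&amp;lt;change&amp;gt;/&amp;lt;patch set&amp;gt;
git fetch http://review.whamcloud.com/fs/lustre-release refs/changes/92/5292/1
git checkout FETCH_HEAD
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>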
                            <comment id="51948" author="sarah" created="Thu, 7 Feb 2013 02:24:50 +0000"  >&lt;p&gt;Sure, will get back to you when I have the result&lt;/p&gt;</comment>
                            <comment id="52027" author="niu" created="Fri, 8 Feb 2013 03:56:45 +0000"  >&lt;p&gt;Well, I realize that the orignal iam index truncation is not quite right, the iam container wasn&apos;t reinitialized after truncation. I&apos;ve update the patch 5292, the new patch works for me, Sarah could you verify if it fix your problem? Thanks.&lt;/p&gt;</comment>
                            <comment id="52061" author="adilger" created="Fri, 8 Feb 2013 14:40:07 +0000"  >&lt;p&gt;Niu, can the &lt;a href=&quot;http://review.whamcloud.com/5259&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5259&lt;/a&gt; and &lt;a href=&quot;http://review.whamcloud.com/5292&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5292&lt;/a&gt; patches be landed regardless of interop testing, or should they wait for Sarah to do manual testing?&lt;/p&gt;

&lt;p&gt;Getting &lt;a href=&quot;http://bugs.whamcloud.com/browse/LU-2688&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://bugs.whamcloud.com/browse/LU-2688&lt;/a&gt; (quota in conf-sanity.sh test_32) landed would help avoid this problem in the future.&lt;/p&gt;</comment>
                            <comment id="52090" author="sarah" created="Sat, 9 Feb 2013 03:39:08 +0000"  >&lt;p&gt;Niu,&lt;/p&gt;

&lt;p&gt;When I tried to upgrade from 1.8.8 to lustre-reviews build #13046, which contains your fix, it hit the following errors when extracting a kernel tarball. That means the test failed even before it ran iozone, which caused the original failure.&lt;/p&gt;

&lt;p&gt;client console:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;===== Verify the data =======================================================
Lustre: DEBUG MARKER: ===== Verify the data =======================================================
Verifying the extracted kernel tarball...
LustreError: 20678:0:(namei.c:256:ll_mdc_blocking_ast()) ### data mismatch with ino 144115205255725057/0 (ffff810312a3dce0) ns: lustre-MDT0000-mdc-ffff810324d9ac00 lock: ffff810325f31a00/0x77e8acf10368301e lrc: 2/0,0 mode: PR/PR res: 8589935616/1 bits 0x13 rrc: 2 type: IBT flags: 0x2090 remote: 0x1a32fabd9561d955 expref: -99 pid: 20678 timeout: 0
+ runas -u quota_usr tar xjf /mnt/lustre/d0.upgrade-downgrade/quota_usr/linux-2.6.18-238.12.1.el5.tar.bz2 -C /mnt/lustre/d0.upgrade-downgrade/quota_usr.new
LustreError: 20682:0:(namei.c:256:ll_mdc_blocking_ast()) ### data mismatch with ino 144115205255725057/0 (ffff81030c6b05a0) ns: lustre-MDT0000-mdc-ffff810324d9ac00 lock: ffff810306ac8600/0x77e8acf10368305d lrc: 2/0,0 mode: PR/PR res: 8589935616/1 bits 0x13 rrc: 2 type: IBT flags: 0x2090 remote: 0x1a32fabd9561d9f6 expref: -99 pid: 20682 timeout: 0
LustreError: 20682:0:(namei.c:256:ll_mdc_blocking_ast()) ### data mismatch with ino 144115205255731243/0 (ffff8103034ffd60) ns: lustre-MDT0000-mdc-ffff810324d9ac00 lock: ffff81030e8ee800/0x77e8acf10368e2eb lrc: 2/0,0 mode: PR/PR res: 8589935616/6187 bits 0x13 rrc: 2 type: IBT flags: 0x2090 remote: 0x1a32fabd9564217a expref: -99 pid: 20682 timeout: 0

 upgrade-downgrade : @@@@@@ FAIL: runas -u quota_usr tar xjf /mnt/lustre/d0.upgrade-downgrade/quota_usr/linux-2.6.18-238.12.1.el5.tar.bz2 -C /mnt/lustre/d0.upgrade-downgrade/quota_usr.new failed 
Lustre: DEBUG MARKER: upgrade-downgrade : @@@@@@ FAIL: runas -u quota_usr tar xjf /mnt/lustre/d0.upgrade-downgrade/quota_usr/linux-2.6.18-238.12.1.el5.tar.bz2 -C /mnt/lustre/d0.upgrade-downgrade/quota_usr.new failed
Dumping lctl log to /tmp/test_logs/1360393563/upgrade-downgrade..*.1360398341.log
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;OST console:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@fat-amd-3 ~]# Lustre: DEBUG MARKER: upgrade-downgrade : @@@@@@ FAIL: runas -u quota_usr tar xjf /mnt/lustre/d0.upgrade-downgrade/quota_usr/linux-2.6.18-238.12.1.el5.tar.bz2 -C /mnt/lustre/d0.upgrade-downgrade/quota_usr.new failed
LNet: Service thread pid 7926 was inactive for 40.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 7926, comm: ll_ost01_001

Call Trace:
 [&amp;lt;ffffffff814ead12&amp;gt;] schedule_timeout+0x192/0x2e0
 [&amp;lt;ffffffff8107cb50&amp;gt;] ? process_timeout+0x0/0x10
 [&amp;lt;ffffffffa037c6d1&amp;gt;] cfs_waitq_timedwait+0x11/0x20 [libcfs]
 [&amp;lt;ffffffffa064924d&amp;gt;] ldlm_completion_ast+0x4ed/0x960 [ptlrpc]
 [&amp;lt;ffffffffa0644970&amp;gt;] ? ldlm_expired_completion_wait+0x0/0x390 [ptlrpc]
 [&amp;lt;ffffffff8105fa40&amp;gt;] ? default_wake_function+0x0/0x20
 [&amp;lt;ffffffffa0648988&amp;gt;] ldlm_cli_enqueue_local+0x1f8/0x5d0 [ptlrpc]
 [&amp;lt;ffffffffa0648d60&amp;gt;] ? ldlm_completion_ast+0x0/0x960 [ptlrpc]
 [&amp;lt;ffffffffa0647700&amp;gt;] ? ldlm_blocking_ast+0x0/0x180 [ptlrpc]
 [&amp;lt;ffffffffa0d38740&amp;gt;] ofd_destroy_by_fid+0x160/0x380 [ofd]
 [&amp;lt;ffffffffa0647700&amp;gt;] ? ldlm_blocking_ast+0x0/0x180 [ptlrpc]
 [&amp;lt;ffffffffa0648d60&amp;gt;] ? ldlm_completion_ast+0x0/0x960 [ptlrpc]
 [&amp;lt;ffffffffa0670c15&amp;gt;] ? lustre_msg_buf+0x55/0x60 [ptlrpc]
 [&amp;lt;ffffffffa0d39c67&amp;gt;] ofd_destroy+0x187/0x670 [ofd]
 [&amp;lt;ffffffffa038c2e1&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
 [&amp;lt;ffffffffa0d11731&amp;gt;] ost_handle+0x38f1/0x46f0 [ost]
 [&amp;lt;ffffffffa0388154&amp;gt;] ? libcfs_id2str+0x74/0xb0 [libcfs]
 [&amp;lt;ffffffffa0681c7c&amp;gt;] ptlrpc_server_handle_request+0x41c/0xdf0 [ptlrpc]
 [&amp;lt;ffffffffa037c5de&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
 [&amp;lt;ffffffffa06793a9&amp;gt;] ? ptlrpc_wait_event+0xa9/0x290 [ptlrpc]
 [&amp;lt;ffffffffa038c2e1&amp;gt;] ? libcfs_debug_msg+0x41/0x50 [libcfs]
 [&amp;lt;ffffffff81052223&amp;gt;] ? __wake_up+0x53/0x70
 [&amp;lt;ffffffffa06831c6&amp;gt;] ptlrpc_main+0xb76/0x1870 [ptlrpc]
 [&amp;lt;ffffffffa0682650&amp;gt;] ? ptlrpc_main+0x0/0x1870 [ptlrpc]
 [&amp;lt;ffffffff8100c0ca&amp;gt;] child_rip+0xa/0x20
 [&amp;lt;ffffffffa0682650&amp;gt;] ? ptlrpc_main+0x0/0x1870 [ptlrpc]
 [&amp;lt;ffffffffa0682650&amp;gt;] ? ptlrpc_main+0x0/0x1870 [ptlrpc]
 [&amp;lt;ffffffff8100c0c0&amp;gt;] ? child_rip+0x0/0x20
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="52518" author="niu" created="Fri, 15 Feb 2013 21:37:52 +0000"  >&lt;p&gt;Andreas, yes I think 5259 and 5292 should be landed regardless of interop testing, otherwise, 5293 (fix of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2688&quot; title=&quot;add quota upgrade checks to conf_sanity.sh test_32&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2688&quot;&gt;&lt;del&gt;LU-2688&lt;/del&gt;&lt;/a&gt;) will not pass, then land the fix of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2688&quot; title=&quot;add quota upgrade checks to conf_sanity.sh test_32&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2688&quot;&gt;&lt;del&gt;LU-2688&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="52519" author="niu" created="Fri, 15 Feb 2013 21:49:11 +0000"  >&lt;p&gt;Sarah, this looks like another issue (not related to quota), could you try if your interop tests can pass even without quota enforced? If not, I think we&apos;d open another ticket with mor detailed issue description.&lt;/p&gt;</comment>
                            <comment id="53075" author="sarah" created="Wed, 27 Feb 2013 00:25:23 +0000"  >&lt;p&gt;I think the error I hit commented on 09/Feb/13 may be &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1488&quot; title=&quot;2.1.2 servers, 1.8.8 clients _mdc_blocking_ast()) ### data mismatch with ino&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1488&quot;&gt;&lt;del&gt;LU-1488&lt;/del&gt;&lt;/a&gt; which should be fixed in lustre-1.8.9. I will run the test again upgrade from 1.8.9 to master to see if this still happens.&lt;/p&gt;</comment>
                            <comment id="53087" author="pjones" created="Wed, 27 Feb 2013 03:05:17 +0000"  >&lt;p&gt;Landed for 2.4&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="17321">LU-2688</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="12129" name="debug_amd-1.tar.gz" size="1972394" author="sarah" created="Thu, 3 Jan 2013 01:11:05 +0000"/>
                            <attachment id="12130" name="dmesg" size="66778" author="sarah" created="Thu, 3 Jan 2013 01:11:05 +0000"/>
                            <attachment id="12144" name="upgrade-downgrade..debug_log.fat-amd-1.1357602330.log" size="124224" author="sarah" created="Mon, 7 Jan 2013 19:05:54 +0000"/>
                            <attachment id="12145" name="upgrade-downgrade..dmesg.fat-amd-1.1357602330.log" size="66422" author="sarah" created="Mon, 7 Jan 2013 19:05:54 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvedz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>5972</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>