<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:31:18 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3139] osp_precreate_send()) ASSERTION( lu_fid_diff(fid, &amp;d-&gt;opd_pre_used_fid) &gt; 0 ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-3139</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;When starting lustre on Sequoia&apos;s MDS/MGS, it is hitting the following assertion:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2013-04-09 16:46:16 Lustre: lsv-MDT0000: Will be in recovery for at least 5:00, or until 2 clients reconnect.
2013-04-09 16:46:19 Lustre: lsv-MDT0000: Recovery over after 0:03, of 2 clients 2 recovered and 0 were evicted.
2013-04-09 16:46:58 LustreError: 11-0: lsv-OST000c-osc-MDT0000: Communicating with 172.20.20.12@o2ib500, operation ost_connect failed with -16.
2013-04-09 16:47:38 LustreError: 11-0: lsv-OST000b-osc-MDT0000: Communicating with 172.20.20.11@o2ib500, operation ost_connect failed with -16.
2013-04-09 16:47:38 LustreError: Skipped 9 previous similar messages
2013-04-09 16:48:03 LustreError: 11-0: lsv-OST0007-osc-MDT0000: Communicating with 172.20.20.7@o2ib500, operation ost_connect failed with -16.
2013-04-09 16:48:03 LustreError: Skipped 9 previous similar messages
2013-04-09 16:48:24 Lustre: lsv-OST0001-osc-MDT0000: Connection restored to lsv-OST0001 (at 172.20.20.1@o2ib500)
2013-04-09 16:48:24 Lustre: lsv-OST0003-osc-MDT0000: Connection restored to lsv-OST0003 (at 172.20.20.3@o2ib500)
2013-04-09 16:49:44 LustreError: 18017:0:(osp_precreate.c:496:osp_precreate_send()) ASSERTION( lu_fid_diff(fid, &amp;amp;d-&amp;gt;opd_pre_used_fid) &amp;gt; 0 ) failed: reply fid [0x100090000:0x4c00:0x0] pre used fid [0x100090000:0x16bec0:0x0]
2013-04-09 16:49:44 LustreError: 18017:0:(osp_precreate.c:496:osp_precreate_send()) LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This is an x86_64 server with ppc64 clients.  Lustre versions 2.3.63-3chaos and 2.3.63-4chaos.&lt;/p&gt;

&lt;p&gt;We see some vague similarity with &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2895&quot; title=&quot;recovery-small 24a: osp_precreate_get_fid(): ASSERTION( lu_fid_diff(&amp;amp;d-&amp;gt;opd_pre_used_fid, &amp;amp;d-&amp;gt;opd_pre_last_created_fid) &amp;lt; 0 ) failed: next fid [0x0:0x1:0x0] last created fid [0x0:0x1:0x0]&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2895&quot;&gt;&lt;del&gt;LU-2895&lt;/del&gt;&lt;/a&gt;; we applied the patch from that issue with no improvement.  But this assertion is in a different function, so that is not necessarily surprising.&lt;/p&gt;</description>
                <environment></environment>
        <key id="18310">LU-3139</key>
            <summary>osp_precreate_send()) ASSERTION( lu_fid_diff(fid, &amp;d-&gt;opd_pre_used_fid) &gt; 0 ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="morrone">Christopher Morrone</reporter>
                        <labels>
                            <label>LB</label>
                            <label>sequoia</label>
                            <label>topsequoia</label>
                    </labels>
                <created>Wed, 10 Apr 2013 00:06:31 +0000</created>
                <updated>Thu, 24 Apr 2014 19:17:07 +0000</updated>
                            <resolved>Mon, 22 Apr 2013 01:50:59 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
                                                                            <comments>
                            <comment id="55994" author="pjones" created="Wed, 10 Apr 2013 11:58:25 +0000"  >&lt;p&gt;Niu&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="56021" author="morrone" created="Wed, 10 Apr 2013 17:09:12 +0000"  >&lt;p&gt;Any suggestions?  Sequoia&apos;s filesystem can&apos;t be started currently.  We need a quick solution.&lt;/p&gt;</comment>
                            <comment id="56022" author="adilger" created="Wed, 10 Apr 2013 17:15:53 +0000"  >&lt;p&gt;Alex is looking into this.&lt;/p&gt;</comment>
                            <comment id="56023" author="bzzz" created="Wed, 10 Apr 2013 17:21:09 +0000"  >&lt;p&gt;Christopher, would it be possible to start with full debug and attach the log here, please?&lt;/p&gt;</comment>
                            <comment id="56046" author="nedbass" created="Wed, 10 Apr 2013 19:45:38 +0000"  >&lt;p&gt;I uploaded ftp.whamcloud.com/uploads/&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3139&quot; title=&quot;osp_precreate_send()) ASSERTION( lu_fid_diff(fid, &amp;amp;d-&amp;gt;opd_pre_used_fid) &amp;gt; 0 ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3139&quot;&gt;&lt;del&gt;LU-3139&lt;/del&gt;&lt;/a&gt;.tar.gz which contains several debug logs from the MDS.  We had panic_on_lbug disabled and hit the assertion multiple times.&lt;/p&gt;</comment>
                            <comment id="56049" author="nedbass" created="Wed, 10 Apr 2013 21:16:05 +0000"  >&lt;p&gt;We see some suspicious-looking Lustre messages related to object precreation on the OSTs.  When running 2.3.63, the OSTs all log messages like these:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Apr  9 16:50:06 vesta3 kernel: Lustre: lsv-OST0003: Slow creates, 2304/1482096 objects created at a rate of 46/s
Apr  9 16:50:06 vesta1 kernel: Lustre: lsv-OST0001: Slow creates, 2176/1474192 objects created at a rate of 43/s
Apr  9 16:50:06 vesta4 kernel: Lustre: lsv-OST0004: Slow creates, 2304/1478224 objects created at a rate of 46/s
Apr  9 16:50:08 vesta2 kernel: Lustre: lsv-OST0002: Slow creates, 2304/1472688 objects created at a rate of 46/s
Apr  9 16:50:10 vesta7 kernel: Lustre: lsv-OST0007: Slow creates, 2048/1482448 objects created at a rate of 40/s
Apr  9 16:50:12 vesta6 kernel: Lustre: lsv-OST0006: Slow creates, 2176/1482448 objects created at a rate of 43/s
Apr  9 16:50:12 vesta5 kernel: Lustre: lsv-OST0005: Slow creates, 2048/1482320 objects created at a rate of 40/s
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This makes us wonder why they seem to be trying to precreate ~1.5 million objects. Perhaps the FID of the last created object is not being honored, causing creation to restart from zero?&lt;/p&gt;

&lt;p&gt;Perhaps related, the OSTs also seem to be requesting very large grants:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Apr  9 16:49:14 vesta3 kernel: LustreError: 5369:0:(ofd_grant.c:605:ofd_grant()) lsv-OST0003: client lsv-OST0003_UUID/ffff880809acf000 requesting &amp;gt; 2GB grant 3035332608
Apr  9 16:49:15 vesta1 kernel: LustreError: 5386:0:(ofd_grant.c:605:ofd_grant()) lsv-OST0001: client lsv-OST0001_UUID/ffff8807f6423400 requesting &amp;gt; 2GB grant 3019145216
Apr  9 16:49:15 vesta4 kernel: LustreError: 5350:0:(ofd_grant.c:605:ofd_grant()) lsv-OST0004: client lsv-OST0004_UUID/ffff8807f90d1800 requesting &amp;gt; 2GB grant 3027402752
Apr  9 16:49:16 vesta2 kernel: LustreError: 5375:0:(ofd_grant.c:605:ofd_grant()) lsv-OST0002: client lsv-OST0002_UUID/ffff8807f8a9ac00 requesting &amp;gt; 2GB grant 3016065024
Apr  9 16:49:18 vesta7 kernel: LustreError: 5496:0:(ofd_grant.c:605:ofd_grant()) lsv-OST0007: client lsv-OST0007_UUID/ffff8807f7d64800 requesting &amp;gt; 2GB grant 3036053504
Apr  9 16:49:21 vesta6 kernel: LustreError: 5487:0:(ofd_grant.c:605:ofd_grant()) lsv-OST0006: client lsv-OST0006_UUID/ffff8807f7404c00 requesting &amp;gt; 2GB grant 3036053504
Apr  9 16:49:21 vesta5 kernel: LustreError: 5557:0:(ofd_grant.c:605:ofd_grant()) lsv-OST0005: client lsv-OST0005_UUID/ffff880808ccf800 requesting &amp;gt; 2GB grant 3035791360
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Finally, when we bring the OSTs back up running the old Lustre version 2.3.58, we see messages like these.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Apr 10 12:50:42 vesta44 kernel: Lustre: lsv-OST002c: deleting orphan objects from 1490242 to 1490307
Apr 10 12:50:42 vesta43 kernel: Lustre: lsv-OST002b: deleting orphan objects from 1489633 to 1489699
Apr 10 12:50:42 vesta72 kernel: Lustre: lsv-OST0048: deleting orphan objects from 1490721 to 1490787
Apr 10 12:50:42 vesta29 kernel: Lustre: lsv-OST001d: deleting orphan objects from 1490592 to 1490659
Apr 10 12:50:42 vesta78 kernel: Lustre: lsv-OST004e: deleting orphan objects from 1490080 to 1490147
Apr 10 12:50:42 vesta41 kernel: Lustre: lsv-OST0029: deleting orphan objects from 1490401 to 1490467
Apr 10 12:50:42 vesta25 kernel: Lustre: lsv-OST0019: deleting orphan objects from 1490337 to 1490403
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Note that the ranges roughly coincide with the upper bound of the &apos;Slow create&apos; messages quoted above.&lt;/p&gt;</comment>
                            <comment id="56055" author="nedbass" created="Thu, 11 Apr 2013 01:15:32 +0000"  >&lt;p&gt;I wonder if this is related to the changes for fid on OST. In particular I am looking at &lt;a href=&quot;http://review.whamcloud.com/#change,4324&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4324&lt;/a&gt; which seems to have changed the on-disk representation of the last-used object id on the OST.&lt;/p&gt;</comment>
                            <comment id="56063" author="niu" created="Thu, 11 Apr 2013 04:32:43 +0000"  >&lt;p&gt;Hi Ned, was the fs upgraded from 2.3.58? Then I agree with you that the fid-on-ost changes could be the culprit. It would be helpful to have full logs from the OST.&lt;/p&gt;</comment>
                            <comment id="56064" author="di.wang" created="Thu, 11 Apr 2013 04:53:53 +0000"  >&lt;p&gt;Hmm, with patch 4324, it will use &lt;/p&gt;
{seq, 0, 0}
&lt;p&gt; as the fid of last_rcvd. I guess the original OSD-ZFS does not map this zero-oid FID to the last_rcvd file. We probably need additional special handling here. Alex, could you please confirm?&lt;/p&gt;</comment>
                            <comment id="56068" author="niu" created="Thu, 11 Apr 2013 06:50:12 +0000"  >&lt;p&gt;If the system is upgraded from 2.3.58 to 2.3.63, I think the LASSERT was probably triggered as follows:&lt;/p&gt;

&lt;p&gt;1. The MDT read the correct last_used_fid 1490242 (or some very large number) from lov_objid, and used it to do orphan cleanup;&lt;/p&gt;

&lt;p&gt;2. On the OST side, the OST got the incorrect last_oid 0 from disk (because of the fid-on-ost changes, it failed to locate the old last_rcvd), so orphan cleanup tried to recreate millions of objects;&lt;/p&gt;

&lt;p&gt;3. The million-object re-creation broke in the middle (see the &quot;Slow creates..&quot; message), and returned to the MDT the number created, 2304 (or some small value);&lt;/p&gt;

&lt;p&gt;4. The MDT found that the returned last_fid was smaller than the current last_used_fid, so it kept using the last_used_fid (the larger one) to do precreate;&lt;/p&gt;

&lt;p&gt;5. The current last_oid was still a very small value, so the OST started the million-object pre-creation again, and it should also have broken in the middle (&quot;Slow creates...&quot;), returning the last created fid (0x4c00 or something similar);&lt;/p&gt;

&lt;p&gt;6. The assert triggered on the MDT, because the last fid returned from the OST was still much smaller than opd_pre_used_fid (the correct one).&lt;/p&gt;
</comment>
                            <comment id="56073" author="bzzz" created="Thu, 11 Apr 2013 08:30:03 +0000"  >&lt;p&gt;Niu&apos;s description seems to be correct.. and we can do something like:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;&lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (zap lookup in OI failed) {
  &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (fid_idif() &amp;amp;&amp;amp; seq==FID_SEQ_OST_MDT0 &amp;amp;&amp;amp; oid==0)
    lookup {FID_SEQ_LOCAL_FILE; OFD_GROUP0_LAST_OID} in OI
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;though all the numbers/names above should be double-checked..&lt;/p&gt;

&lt;p&gt;the ideal &quot;solution&quot; would be to reformat, but I&apos;m not sure this is possible.&lt;/p&gt;


&lt;p&gt;for a reference, we were using:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;          lu_local_obj_fid(&amp;amp;info-&amp;gt;fti_fid, OFD_GROUP0_LAST_OID + group);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;in OFD to access last_id for group0, where&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;&lt;span class=&quot;code-keyword&quot;&gt;static&lt;/span&gt; inline void lu_local_obj_fid(struct lu_fid *fid, __u32 oid)
 230 {
 231         fid-&amp;gt;f_seq = FID_SEQ_LOCAL_FILE;
 232         fid-&amp;gt;f_oid = oid;

then
 221         OFD_GROUP0_LAST_OID     = 20UL,
 422         FID_SEQ_LOCAL_FILE = 0x200000001ULL,
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;so, it should be &lt;/p&gt;
{FID_SEQ_LOCAL_FILE; 20}
&lt;p&gt; as OFD_GROUP0_LAST_OID is not defined now.&lt;/p&gt;</comment>
                            <comment id="56074" author="bzzz" created="Thu, 11 Apr 2013 08:42:03 +0000"  >&lt;p&gt;hmm, this is not quite right, as a new object to track last_id with oid=0 has been created already.. I guess instead we should look up OFD_GROUP0_LAST_OID first if osd_fid_lookup() is called for &lt;/p&gt;
{FID_SEQ_OST_MDT0; 0}
&lt;p&gt; ?&lt;/p&gt;

&lt;p&gt;another (hopefully not fatal) issue concerns the many orphans we just created. luckily, the creation rate wasn&apos;t great..&lt;/p&gt;
</comment>
                            <comment id="56080" author="niu" created="Thu, 11 Apr 2013 10:18:46 +0000"  >&lt;blockquote&gt;
&lt;p&gt;hmm, this is not quite right as new object to track last_id with oid=0 has been created already.. I guess instead we should lookup OFD_GROUP0_LAST_OID first if osd_fid_lookup() is called for &lt;/p&gt;
&lt;div class=&quot;error&quot;&gt;&lt;span class=&quot;error&quot;&gt;Unknown macro: {FID_SEQ_OST_MDT0; 0}&lt;/span&gt; &lt;/div&gt;
&lt;p&gt; &lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Indeed. I&apos;m wondering if there are any tools for zfs which can copy the old LAST_ID into new &lt;/p&gt;
{seq, 0, 0}
&lt;p&gt;? then we could probably avoid the extra checking above (I assume there isn&apos;t any other system in the world that needs such checking). Of course, it would be great if it were possible to reformat the system.&lt;/p&gt;</comment>
                            <comment id="56081" author="bzzz" created="Thu, 11 Apr 2013 10:22:47 +0000"  >&lt;p&gt;AFAIU, yes, it &lt;b&gt;usually&lt;/b&gt; can be mounted with ZPL. but.. this may not work for the old filesystem as compatibility with ZPL was implemented just before the landing in September, iirc.&lt;/p&gt;</comment>
                            <comment id="56098" author="nedbass" created="Thu, 11 Apr 2013 15:35:32 +0000"  >&lt;p&gt;We can mount it with ZPL.  There is some strange behavior like . or .. missing or showing up twice, or incorrect hard link counts.  But we can read/write/open/close local objects like LAST_ID, last_rcvd, lov_objid, etc.&lt;/p&gt;</comment>
                            <comment id="56100" author="nedbass" created="Thu, 11 Apr 2013 15:49:25 +0000"  >&lt;p&gt;Alex, BTW this filesystem will not be long-lived due to the risk of these on-disk incompatibilities.  We will provide a newly-formatted filesystem for users that will coexist in the same zpools as this legacy one.  We just need to be able to mount the legacy one under 2.3.63+ long enough for users to migrate their data.&lt;/p&gt;</comment>
                            <comment id="56141" author="bzzz" created="Thu, 11 Apr 2013 20:38:08 +0000"  >&lt;p&gt;This is good news then. I&apos;d suggest to: take a snapshot for safety, then ..&lt;/p&gt;

&lt;p&gt;mount with ZPL and check the content of the file /oi.1/0x200000001:0x14:0x0; it should be 8 bytes long and contain a number close to 0x16bec0 (the last id used on the MDS).&lt;/p&gt;

&lt;p&gt;the new file should be /O/0/d0/0 - it should also be 8 bytes, and the number much less than 0x16bec0, close to the first number you saw in&lt;br/&gt;
a message like: Apr  9 16:50:12 vesta5 kernel: Lustre: lsv-OST0005: Slow creates, 2048/1482320 objects created at a rate of 40/s&lt;/p&gt;

&lt;p&gt;I think it should be enough to write the content from the old file (/oi.1/0x200000001:0x14:0x0) into the new one (/O/0/d0/0).&lt;/p&gt;

&lt;p&gt;Niu, Di, could you please confirm this suggestion is sane?&lt;/p&gt;</comment>
                            <comment id="56143" author="nedbass" created="Thu, 11 Apr 2013 20:53:03 +0000"  >&lt;p&gt;Alex, the oi.* directories are mostly empty through the ZPL.  /oi.1/0x200000001:0x14:0x0 doesn&apos;t exist, but /O/0/LAST_ID and /O/0/d0/0 are there with contents as you describe.  So we should use the LAST_ID file instead, correct?&lt;/p&gt;</comment>
                            <comment id="56149" author="di.wang" created="Thu, 11 Apr 2013 22:07:53 +0000"  >&lt;p&gt;Yes, LAST_ID should be used.&lt;/p&gt;</comment>
                            <comment id="56154" author="nedbass" created="Thu, 11 Apr 2013 23:09:17 +0000"  >&lt;p&gt;Okay, we&apos;ll schedule a time to try out this fix.  It will probably be sometime next week.&lt;/p&gt;</comment>
                            <comment id="56164" author="nedbass" created="Fri, 12 Apr 2013 00:15:25 +0000"  >&lt;p&gt;Niu, Alex, Di, can you think of any other on-disk format changes that may bite us after this one?  I don&apos;t want to get into a state where we can&apos;t mount the filesystem under any version of Lustre.&lt;/p&gt;</comment>
                            <comment id="56169" author="adilger" created="Fri, 12 Apr 2013 01:11:02 +0000"  >&lt;p&gt;Ned, there was a change with &lt;a href=&quot;http://review.whamcloud.com/5820&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5820&lt;/a&gt; (&lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2684&quot; title=&quot;convert ost_id to lu_fid for FID_SEQ_NORMAL objects&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2684&quot;&gt;&lt;del&gt;LU-2684&lt;/del&gt;&lt;/a&gt;) that affects the MDS FID storage in the LOV EA.  This &lt;em&gt;shouldn&apos;t&lt;/em&gt; affect normal Lustre operation, but there is a bit of churn in that code right now (e.g. &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3152&quot; title=&quot;test_27z did not consider fid on OST.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3152&quot;&gt;&lt;del&gt;LU-3152&lt;/del&gt;&lt;/a&gt;, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2888&quot; title=&quot;After downgrade from 2.4 to 2.1.4, hit (osd_handler.c:2343:osd_index_try()) ASSERTION( dt_object_exists(dt) ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2888&quot;&gt;&lt;del&gt;LU-2888&lt;/del&gt;&lt;/a&gt;) that may affect upgraded filesystems and it would probably be better to wait until that issue is resolved.&lt;/p&gt;</comment>
                            <comment id="56341" author="nedbass" created="Mon, 15 Apr 2013 19:54:29 +0000"  >&lt;p&gt;We went ahead with the proposed workaround for one affected filesystem (lscratchv, used by vulcan). We were able to bring it up under Lustre 2.3.63 without hitting this bug.&lt;/p&gt;

&lt;p&gt;We will do the same for the legacy Sequoia filesystem (lscratch1) tomorrow.  Sequoia is already mounting a new filesystem formatted using Lustre 2.3.63, but we want to mount the old one read-only to allow data migration.  I&apos;ll report back on how it goes tomorrow.&lt;/p&gt;</comment>
                            <comment id="56358" author="pjones" created="Mon, 15 Apr 2013 23:25:40 +0000"  >&lt;p&gt;Thanks for the update, Ned. I have dropped the priority slightly to reflect that this is still an important support issue but is not a general blocker for the release itself.&lt;/p&gt;</comment>
                            <comment id="56594" author="nedbass" created="Thu, 18 Apr 2013 23:23:48 +0000"  >&lt;p&gt;The update of the sequoia filesystem was successful.  If no work is planned for adding compatibility code, I think we can close this issue.&lt;/p&gt;</comment>
                            <comment id="56662" author="niu" created="Mon, 22 Apr 2013 01:50:59 +0000"  >&lt;p&gt;The sequoia system is fixed, and there is no plan to add compatibility code for 2.3.58 &amp;lt;-&amp;gt; 2.3.63.&lt;/p&gt;</comment>
                            <comment id="82424" author="simmonsja" created="Thu, 24 Apr 2014 18:22:43 +0000"  >&lt;p&gt;I just hit this same bug on the 2.5.1 branch. This happened on a newly formatted file system.&lt;/p&gt;</comment>
                            <comment id="82430" author="simmonsja" created="Thu, 24 Apr 2014 19:17:07 +0000"  >&lt;p&gt;Actually this is more likely &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-4653&quot; title=&quot;Hit LBUG ASSERTION( fid_seq(fid1) == fid_seq(fid2) ) failed after upgrade OST from 2.5.0 to 2.6&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-4653&quot;&gt;&lt;del&gt;LU-4653&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvnin:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7623</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>