<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:35:38 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
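As a hedged illustration of the note above, a field-restricted request could be built like this. The 'si/jira.issueviews:issue-xml' path is JIRA's usual XML issue-view endpoint, but treat the exact path as an assumption; the query string is the one described above.

```shell
# Sketch: build a URL that asks JIRA for only the issue key and summary.
# The endpoint path is assumed; only the 'field' query parameters are from the note above.
BASE='https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-3639/LU-3639.xml'
URL="${BASE}?field=key&field=summary"
echo "$URL"
```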
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3639] After downgrade from 2.5 to 2.3.0, hit (osd_handler.c:2720:osd_index_try()) ASSERTION( dt_object_exists(dt) ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-3639</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This is the same error described in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2888&quot; title=&quot;After downgrade from 2.4 to 2.1.4, hit (osd_handler.c:2343:osd_index_try()) ASSERTION( dt_object_exists(dt) ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2888&quot;&gt;&lt;del&gt;LU-2888&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MDS console:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LDISKFS-fs (sdb1): mounted filesystem with ordered data mode. quota=off. Opts: 
LDISKFS-fs (sdb1): mounted filesystem with ordered data mode. quota=off. Opts: 
LDISKFS-fs (sdb1): mounted filesystem with ordered data mode. quota=off. Opts: 
Lustre: MGC10.10.4.132@tcp: Reactivating import
Lustre: MGS: Logs for fs lustre were removed by user request.  All servers must be restarted in order to regenerate the logs.
Lustre: Setting parameter lustre-MDT0000-mdtlov.lov.stripesize in log lustre-MDT0000
Lustre: Setting parameter lustre-clilov.lov.stripesize in log lustre-client
LustreError: 8000:0:(osd_handler.c:2720:osd_index_try()) ASSERTION( dt_object_exists(dt) ) failed: 
LustreError: 8000:0:(osd_handler.c:2720:osd_index_try()) LBUG
Pid: 8000, comm: llog_process_th

Call Trace:
 [&amp;lt;ffffffffa0379905&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
 [&amp;lt;ffffffffa0379f17&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
 [&amp;lt;ffffffffa0e08735&amp;gt;] osd_index_try+0x175/0x620 [osd_ldiskfs]
 [&amp;lt;ffffffffa0842c08&amp;gt;] fld_index_init+0x88/0x4d0 [fld]
 [&amp;lt;ffffffffa084013d&amp;gt;] ? fld_cache_init+0x14d/0x430 [fld]
 [&amp;lt;ffffffffa083ba3e&amp;gt;] fld_server_init+0x29e/0x450 [fld]
 [&amp;lt;ffffffffa0d5c1b6&amp;gt;] mdt_fld_init+0x126/0x430 [mdt]
 [&amp;lt;ffffffffa0d61326&amp;gt;] mdt_init0+0x8c6/0x23f0 [mdt]
 [&amp;lt;ffffffffa0d5bf49&amp;gt;] ? mdt_key_init+0x59/0x1a0 [mdt]
 [&amp;lt;ffffffffa0d62f43&amp;gt;] mdt_device_alloc+0xf3/0x220 [mdt]
 [&amp;lt;ffffffffa04cb0d7&amp;gt;] obd_setup+0x1d7/0x2f0 [obdclass]
 [&amp;lt;ffffffffa04cb3f8&amp;gt;] class_setup+0x208/0x890 [obdclass]
 [&amp;lt;ffffffffa04d308c&amp;gt;] class_process_config+0xc0c/0x1c30 [obdclass]
 [&amp;lt;ffffffffa037abe0&amp;gt;] ? cfs_alloc+0x30/0x60 [libcfs]
 [&amp;lt;ffffffffa04cceb3&amp;gt;] ? lustre_cfg_new+0x353/0x7e0 [obdclass]
 [&amp;lt;ffffffffa04d515b&amp;gt;] class_config_llog_handler+0x9bb/0x1610 [obdclass]
 [&amp;lt;ffffffffa067530b&amp;gt;] ? llog_client_next_block+0x1db/0x4b0 [ptlrpc]
 [&amp;lt;ffffffffa049e1f8&amp;gt;] llog_process_thread+0x888/0xd00 [obdclass]
 [&amp;lt;ffffffffa049d970&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
 [&amp;lt;ffffffffa049d970&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffffa049d970&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20

Kernel panic - not syncing: LBUG
Pid: 8000, comm: llog_process_th Not tainted 2.6.32-279.5.1.el6_lustre.gb16fe80.x86_64 #1
Call Trace:
 [&amp;lt;ffffffff814fd58a&amp;gt;] ? panic+0xa0/0x168
 [&amp;lt;ffffffffa0379f6b&amp;gt;] ? lbug_with_loc+0x9b/0xb0 [libcfs]
 [&amp;lt;ffffffffa0e08735&amp;gt;] ? osd_index_try+0x175/0x620 [osd_ldiskfs]
 [&amp;lt;ffffffffa0842c08&amp;gt;] ? fld_index_init+0x88/0x4d0 [fld]
 [&amp;lt;ffffffffa084013d&amp;gt;] ? fld_cache_init+0x14d/0x430 [fld]
 [&amp;lt;ffffffffa083ba3e&amp;gt;] ? fld_server_init+0x29e/0x450 [fld]
 [&amp;lt;ffffffffa0d5c1b6&amp;gt;] ? mdt_fld_init+0x126/0x430 [mdt]
 [&amp;lt;ffffffffa0d61326&amp;gt;] ? mdt_init0+0x8c6/0x23f0 [mdt]
 [&amp;lt;ffffffffa0d5bf49&amp;gt;] ? mdt_key_init+0x59/0x1a0 [mdt]
 [&amp;lt;ffffffffa0d62f43&amp;gt;] ? mdt_device_alloc+0xf3/0x220 [mdt]
 [&amp;lt;ffffffffa04cb0d7&amp;gt;] ? obd_setup+0x1d7/0x2f0 [obdclass]
 [&amp;lt;ffffffffa04cb3f8&amp;gt;] ? class_setup+0x208/0x890 [obdclass]
 [&amp;lt;ffffffffa04d308c&amp;gt;] ? class_process_config+0xc0c/0x1c30 [obdclass]
 [&amp;lt;ffffffffa037abe0&amp;gt;] ? cfs_alloc+0x30/0x60 [libcfs]
 [&amp;lt;ffffffffa04cceb3&amp;gt;] ? lustre_cfg_new+0x353/0x7e0 [obdclass]
 [&amp;lt;ffffffffa04d515b&amp;gt;] ? class_config_llog_handler+0x9bb/0x1610 [obdclass]
 [&amp;lt;ffffffffa067530b&amp;gt;] ? llog_client_next_block+0x1db/0x4b0 [ptlrpc]
 [&amp;lt;ffffffffa049e1f8&amp;gt;] ? llog_process_thread+0x888/0xd00 [obdclass]
 [&amp;lt;ffffffffa049d970&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c14a&amp;gt;] ? child_rip+0xa/0x20
 [&amp;lt;ffffffffa049d970&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffffa049d970&amp;gt;] ? llog_process_thread+0x0/0xd00 [obdclass]
 [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>before upgrade, server and client: 2.3.0&lt;br/&gt;
after upgrade, server is 2.5, 2 clients are 2.5, 1 client is 2.3.0&lt;br/&gt;
after downgrade, server and client: 2.3.0</environment>
        <key id="20007">LU-3639</key>
            <summary>After downgrade from 2.5 to 2.3.0, hit (osd_handler.c:2720:osd_index_try()) ASSERTION( dt_object_exists(dt) ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
        <priority id="5" iconUrl="https://jira.whamcloud.com/images/icons/priorities/trivial.svg">Trivial</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="2">Won&apos;t Fix</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="sarah">Sarah Liu</reporter>
                        <labels>
                    </labels>
                <created>Thu, 25 Jul 2013 20:10:51 +0000</created>
                <updated>Fri, 23 Oct 2015 07:42:41 +0000</updated>
                            <resolved>Fri, 23 Oct 2015 07:42:41 +0000</resolved>
                                    <version>Lustre 2.3.0</version>
        <due></due>
                            <votes>0</votes>
                                    <watches>9</watches>
        <comments>
                            <comment id="63891" author="adilger" created="Thu, 8 Aug 2013 16:49:46 +0000"  >&lt;p&gt;Is this the 2.3.0 server or IEEL?&lt;/p&gt;</comment>
                            <comment id="64157" author="sarah" created="Tue, 13 Aug 2013 07:38:31 +0000"  >&lt;p&gt;Hi Andreas,&lt;br/&gt;
this is the 2.3.0 server.&lt;/p&gt;

&lt;p&gt;As Oleg suggested, I will rerun the test with an IEEL server to see if it hits the same problem, and will update this ticket when I have results.&lt;/p&gt;</comment>
                            <comment id="67408" author="adilger" created="Tue, 24 Sep 2013 17:57:17 +0000"  >&lt;p&gt;Sarah, any chance to run this test with IEEL?&lt;/p&gt;</comment>
                            <comment id="67844" author="sarah" created="Fri, 27 Sep 2013 17:40:46 +0000"  >&lt;p&gt;Hi Andreas,&lt;/p&gt;

&lt;p&gt;this is blocked by TEI-578&lt;/p&gt;</comment>
                            <comment id="71277" author="di.wang" created="Mon, 11 Nov 2013 23:12:05 +0000"  >&lt;p&gt;Hmm, it seems the FLD object (as a special FID) is not being inserted properly in 2.5, i.e. &quot;fld&quot; is not being inserted as a special name. So when downgrading to 2.3, it will use the name (&quot;fld&quot;) to locate the FLD, but since the name was never inserted, that caused the LBUG. Hmm, I saw that osd_oi_lookup ignores the LOCAL seq; Fan Yong, could you please comment?&lt;/p&gt;</comment>
                            <comment id="71285" author="yong.fan" created="Tue, 12 Nov 2013 00:51:15 +0000"  >&lt;p&gt;Hi Di, you mean that in Lustre-2.5 the local FLD file, with the name &quot;fld&quot;, is not correctly inserted as a special name, so that when downgrading to Lustre-2.3 the old osd_oi_lookup() tries to look up that special name and fails?&lt;/p&gt;

&lt;p&gt;But in fact, in Lustre-2.5 we add the local objects into the OI tables and also insert their special names, as follows:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;&lt;span class=&quot;code-object&quot;&gt;int&lt;/span&gt; osd_oi_insert(struct osd_thread_info *info, struct osd_device *osd,
                  &lt;span class=&quot;code-keyword&quot;&gt;const&lt;/span&gt; struct lu_fid *fid, &lt;span class=&quot;code-keyword&quot;&gt;const&lt;/span&gt; struct osd_inode_id *id,
                  handle_t *th, &lt;span class=&quot;code-keyword&quot;&gt;enum&lt;/span&gt; oi_check_flags flags)
{
&#8230;
        rc = osd_oi_iam_refresh(info, osd_fid2oi(osd, fid),
                               (&lt;span class=&quot;code-keyword&quot;&gt;const&lt;/span&gt; struct dt_rec *)oi_id,
                               (&lt;span class=&quot;code-keyword&quot;&gt;const&lt;/span&gt; struct dt_key *)oi_fid, th, &lt;span class=&quot;code-keyword&quot;&gt;true&lt;/span&gt;);
&#8230;
        &lt;span class=&quot;code-keyword&quot;&gt;if&lt;/span&gt; (unlikely(fid_seq(fid) == FID_SEQ_LOCAL_FILE))
                rc = osd_obj_spec_insert(info, osd, fid, id, th);
        &lt;span class=&quot;code-keyword&quot;&gt;return&lt;/span&gt; rc;
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here are the files on the ldiskfs partition created under Lustre-2.5:&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;# ls -a /mnt/mds1
./                O/                  changelog_catalog  lfsck_bookmark   lov_objseq  oi.16.12  oi.16.17  oi.16.21  oi.16.26  oi.16.30  oi.16.35  oi.16.4   oi.16.44  oi.16.49  oi.16.53  oi.16.58  oi.16.62  quota_master/
../               OI_scrub            changelog_users    lfsck_layout     oi.16.0     oi.16.13  oi.16.18  oi.16.22  oi.16.27  oi.16.31  oi.16.36  oi.16.40  oi.16.45  oi.16.5   oi.16.54  oi.16.59  oi.16.63  quota_slave/
CATALOGS          PENDING/            fld                lfsck_namespace  oi.16.1     oi.16.14  oi.16.19  oi.16.23  oi.16.28  oi.16.32  oi.16.37  oi.16.41  oi.16.46  oi.16.50  oi.16.55  oi.16.6   oi.16.7   seq_ctl
CONFIGS/          REMOTE_PARENT_DIR/  hsm_actions        lost+found/      oi.16.10    oi.16.15  oi.16.2   oi.16.24  oi.16.29  oi.16.33  oi.16.38  oi.16.42  oi.16.47  oi.16.51  oi.16.56  oi.16.60  oi.16.8   seq_srv
NIDTBL_VERSIONS/  ROOT/               last_rcvd          lov_objid        oi.16.11    oi.16.16  oi.16.20  oi.16.25  oi.16.3   oi.16.34  oi.16.39  oi.16.43  oi.16.48  oi.16.52  oi.16.57  oi.16.61  oi.16.9
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &quot;fld&quot; entry is there. I am not sure whether that is what you expected or not.&lt;/p&gt;</comment>
                            <comment id="71299" author="di.wang" created="Tue, 12 Nov 2013 05:18:33 +0000"  >&lt;p&gt;oh, I mean osd_oi_lookup does not do a special lookup for &lt;span class=&quot;error&quot;&gt;&amp;#91;FID_SEQ_LOCAL_FILE, FLD_INDEX_OID, 0&amp;#93;&lt;/span&gt; &quot;fld&quot;, and that might cause this problem.&lt;/p&gt;

&lt;p&gt;See this downgrade process for the filesystem:&lt;br/&gt;
1. the FS is formatted on 2.3 first, and fld is created with the name &quot;fld&quot;&lt;br/&gt;
2. it is then upgraded to 2.5, but 2.5 does not look up &quot;fld&quot; in osd_oi_lookup and instead creates a new one (this is obviously wrong)&lt;br/&gt;
3. it is then downgraded to 2.3, which cannot find the old one. So I guess the fld might have been deleted somehow because of step 2.&lt;/p&gt;</comment>
                            <comment id="83264" author="yong.fan" created="Tue, 6 May 2014 06:14:22 +0000"  >&lt;p&gt;Here is one patch against b2_3 to resolve the issue:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/10224&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10224&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="85777" author="yong.fan" created="Thu, 5 Jun 2014 03:50:50 +0000"  >&lt;p&gt;Do we still maintain b2_3? If not, I will abandon the patch &lt;a href=&quot;http://review.whamcloud.com/10224&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/10224&lt;/a&gt; and close the ticket.&lt;/p&gt;</comment>
                            <comment id="86397" author="yong.fan" created="Thu, 12 Jun 2014 02:08:32 +0000"  >&lt;p&gt;Downgrading the priority since we have no near-term plan to land more patches to b2_3.&lt;/p&gt;</comment>
                            <comment id="131336" author="yong.fan" created="Fri, 23 Oct 2015 07:42:41 +0000"  >&lt;p&gt;Since we will not land more patches to b2_3, closing this ticket.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvw87:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9373</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>