<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:24:11 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2316] Upgrade from 1.8.8 -&gt; master, mount MDS failed: unknown parameter quota_type=ug3</title>
                <link>https://jira.whamcloud.com/browse/LU-2316</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After upgrade system from 1.8.8-wc1 to master, hit this error when trying to mount MDS: &lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LDISKFS-fs (sdb1): mounted filesystem with ordered data mode. quota=off. Opts: 
LDISKFS-fs (sdb1): mounted filesystem with ordered data mode. quota=off. Opts: 
Lustre: MGC10.10.4.132@tcp: Reactivating import
Lustre: MGS: Logs for fs lustre were removed by user request.  All servers must be restarted in order to regenerate the logs.
Lustre: Setting parameter lustre-MDT0000-mdtlov.lov.stripesize in log lustre-MDT0000
Lustre: Setting parameter lustre-clilov.lov.stripesize in log lustre-client
LustreError: 31273:0:(mgc_request.c:248:do_config_log_add()) failed processing sptlrpc log: -2
Lustre: lustre-MDT0000: used disk, loading
Lustre: Mounting lustre-MDT0000 at first time on 1.8 FS, remove all clients for interop needs
LustreError: 31347:0:(sec_config.c:1024:sptlrpc_target_local_copy_conf()) missing llog context
Lustre: lustre-MDT0000: Migrate inode quota from old admin quota file(admin_quotafile_v2.usr) to new IAM quota index([0x200000006:0x10000:0x0]).
Lustre: lustre-MDT0000: Migrate inode quota from old admin quota file(admin_quotafile_v2.grp) to new IAM quota index([0x200000006:0x1010000:0x0]).
Lustre: 31347:0:(mdt_handler.c:5192:mdt_process_config()) For 1.8 interoperability, skip this mdt.group_upcall. It is obsolete.
LustreError: 31347:0:(obd_config.c:1299:class_process_proc_param()) lustre-MDT0000: unknown param quota_type=ug3
LustreError: 31347:0:(obd_config.c:1546:class_config_llog_handler()) MGC10.10.4.132@tcp: cfg command failed: rc = -38
Lustre:    cmd=cf00f 0:lustre-MDT0000  1:mdd.quota_type=ug3  
LustreError: 15c-8: MGC10.10.4.132@tcp: The configuration from log &apos;lustre-MDT0000&apos; failed (-38). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
LustreError: 31273:0:(obd_mount.c:1850:server_start_targets()) failed to start server lustre-MDT0000: -38
LustreError: 31273:0:(obd_mount.c:2399:server_fill_super()) Unable to start targets: -38
LustreError: 31273:0:(obd_mount.c:1350:lustre_disconnect_osp()) Can&apos;t end config log lustre
LustreError: 31273:0:(obd_mount.c:2112:server_put_super()) lustre-MDT0000: failed to disconnect osp-on-ost (rc=-2)!
Lustre: Failing over lustre-MDT0000
LustreError: 31273:0:(obd_mount.c:1418:lustre_stop_osp()) Can not find osp-on-ost lustre-MDT0000-osp-MDT0000
LustreError: 31273:0:(obd_mount.c:2157:server_put_super()) lustre-MDT0000: Fail to stop osp-on-ost!
LustreError: 31273:0:(ldlm_request.c:1183:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway
LustreError: 31273:0:(ldlm_request.c:1815:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108
Lustre: 31273:0:(client.c:1912:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: [sent 1352760135/real 1352760135]  req@ffff88011cba1c00 x1418471798734858/t0(0) o251-&amp;gt;MGC10.10.4.132@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1352760141 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: server umount lustre-MDT0000 complete
LustreError: 31273:0:(obd_mount.c:2987:lustre_fill_super()) Unable to mount  (-38)
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: DEBUG MARKER: upgrade-downgrade : @@@@@@ FAIL: NAME=ncli not mounted
LDISKFS-fs (sdb1): mounted filesystem with ordered data mode. quota=off. Opts: 
Lustre: MGC10.10.4.132@tcp: Reactivating import
Lustre: MGS: Logs for fs lustre were removed by user request.  All servers must be restarted in order to regenerate the logs.
Lustre: Setting parameter lustre-MDT0000-mdtlov.lov.stripesize in log lustre-MDT0000
Lustre: Skipped 4 previous similar messages
LustreError: 31689:0:(mgc_request.c:248:do_config_log_add()) failed processing sptlrpc log: -2
Lustre: lustre-MDT0000: used disk, loading
LustreError: 31757:0:(sec_config.c:1024:sptlrpc_target_local_copy_conf()) missing llog context
Lustre: 31757:0:(mdt_handler.c:5192:mdt_process_config()) For 1.8 interoperability, skip this mdt.group_upcall. It is obsolete.
LustreError: 31757:0:(obd_config.c:1299:class_process_proc_param()) lustre-MDT0000: unknown param quota_type=ug3
LustreError: 31757:0:(obd_config.c:1546:class_config_llog_handler()) MGC10.10.4.132@tcp: cfg command failed: rc = -38
Lustre:    cmd=cf00f 0:lustre-MDT0000  1:mdd.quota_type=ug3  
LustreError: 15c-8: MGC10.10.4.132@tcp: The configuration from log &apos;lustre-MDT0000&apos; failed (-38). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
LustreError: 31689:0:(obd_mount.c:1850:server_start_targets()) failed to start server lustre-MDT0000: -38
LustreError: 31689:0:(obd_mount.c:2399:server_fill_super()) Unable to start targets: -38
LustreError: 31689:0:(obd_mount.c:1350:lustre_disconnect_osp()) Can&apos;t end config log lustre
LustreError: 31689:0:(obd_mount.c:2112:server_put_super()) lustre-MDT0000: failed to disconnect osp-on-ost (rc=-2)!
Lustre: Failing over lustre-MDT0000
LustreError: 31689:0:(obd_mount.c:1418:lustre_stop_osp()) Can not find osp-on-ost lustre-MDT0000-osp-MDT0000
LustreError: 31689:0:(obd_mount.c:2157:server_put_super()) lustre-MDT0000: Fail to stop osp-on-ost!
LustreError: 31689:0:(ldlm_request.c:1183:ldlm_cli_cancel_req()) Got rc -108 from cancel RPC: canceling anyway
LustreError: 31689:0:(ldlm_request.c:1815:ldlm_cli_cancel_list()) ldlm_cli_cancel_list: -108
Lustre: 31689:0:(client.c:1912:ptlrpc_expire_one_request()) @@@ Request  sent has timed out for slow reply: [sent 1352760261/real 1352760261]  req@ffff88030e13d000 x1418471798734868/t0(0) o251-&amp;gt;MGC10.10.4.132@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1352760267 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
Lustre: server umount lustre-MDT0000 complete
LustreError: 31689:0:(obd_mount.c:2987:lustre_fill_super()) Unable to mount  (-38)
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: DEBUG MARKER: upgrade-downgrade : @@@@@@ FAIL: NAME=ncli not mounted
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>Before Upgrade:&lt;br/&gt;
server: 1.8.8-wc1 RHEL5&lt;br/&gt;
client: 1.8.8-wc1 RHEL5/RHEL6&lt;br/&gt;
&lt;br/&gt;
After Upgrade:&lt;br/&gt;
server: &lt;a href=&quot;http://review.whamcloud.com/#change,4509&quot;&gt;http://review.whamcloud.com/#change,4509&lt;/a&gt;&lt;br/&gt;
client: &lt;a href=&quot;http://review.whamcloud.com/#change,4509&quot;&gt;http://review.whamcloud.com/#change,4509&lt;/a&gt;</environment>
        <key id="16661">LU-2316</key>
            <summary>Upgrade from 1.8.8 -&gt; master, mount MDS failed: unknown parameter quota_type=ug3</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="sarah">Sarah Liu</reporter>
                        <labels>
                    </labels>
                <created>Tue, 13 Nov 2012 12:33:03 +0000</created>
                <updated>Fri, 16 Nov 2012 14:50:41 +0000</updated>
                            <resolved>Fri, 16 Nov 2012 14:50:41 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="47770" author="niu" created="Tue, 13 Nov 2012 22:37:20 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/4528&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4528&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sarah, could you try this patch? Thanks.&lt;/p&gt;</comment>
                            <comment id="47808" author="sarah" created="Wed, 14 Nov 2012 16:09:17 +0000"  >&lt;p&gt;Hi Niu, can you please add this patch to &lt;a href=&quot;http://review.whamcloud.com/#change,4509?&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4509?&lt;/a&gt; I think it will hit &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2310&quot; title=&quot;Kernel panic when trying to mount MDS after system upgrade from 1.8.8-wc1 to master.&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2310&quot;&gt;&lt;del&gt;LU-2310&lt;/del&gt;&lt;/a&gt; without that fix included&lt;/p&gt;</comment>
                            <comment id="47894" author="niu" created="Thu, 15 Nov 2012 21:03:44 +0000"  >&lt;p&gt;Sarah, both patches are landed, you can verify it with latest master build now. Thanks.&lt;/p&gt;</comment>
                            <comment id="47952" author="adilger" created="Fri, 16 Nov 2012 14:50:41 +0000"  >&lt;p&gt;Closing this bug, since both of the patches have landed.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvc5b:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>5538</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>