<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:26:38 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-2606] lustre 2.4 unable to start on 2.1 disks</title>
                <link>https://jira.whamcloud.com/browse/LU-2606</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;[  363.060205] LDISKFS-fs (loop0): mounted filesystem with ordered data mode. quota=off. Opts: &lt;br/&gt;
[  363.108765] Lustre: MGC192.168.69.5@tcp: Reactivating import&lt;br/&gt;
Jan 11 11:15:42 rhel6-64 kernel: [  363.108765] Lustre: MGC192.168.69.5@tcp: Reactivating import&lt;br/&gt;
[  364.599695] Lustre: lustre-MDT0000: used disk, loading&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  364.599695] Lustre: lustre-MDT0000: used disk, loading&lt;br/&gt;
[  364.611720] Lustre: 12041:0:(mdt_lproc.c:418:lprocfs_wr_identity_upcall()) lustre-MDT0000: identity upcall set to /Users/shadow/work/lustre/work/BUGS/MRP-509/lustre.13/lustre/utils/l_getidentity&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  364.611720] Lustre: 12041:0:(mdt_lproc.c:418:lprocfs_wr_identity_upcall()) lustre-MDT0000: identity upcall set to /Users/shadow/work/lustre/work/BUGS/MRP-509/lustre.13/lustre/utils/l_getidentity&lt;br/&gt;
[  364.643496] Lustre: lustre-MDT0000: Temporarily refusing client connection from 0@lo&lt;br/&gt;
[  364.647787] LustreError: 11-0: an error occurred while communicating with 0@lo. The mds_connect operation failed with -11&lt;br/&gt;
[  364.650814] Lustre: lustre-MDT0000: No usr space accounting support. Please consider running tunefs.lustre --quota on an unmounted filesystem to enable quota accounting.&lt;br/&gt;
[  364.656135] Lustre: lustre-MDT0000: No grp space accounting support. Please consider running tunefs.lustre --quota on an unmounted filesystem to enable quota accounting.&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  364.643496] Lustre: lustre-MDT0000: Temporarily refusing client connection from 0@lo&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  364.647787] LustreError: 11-0: an error occurred while communicating with 0@lo. The mds_connect operation failed with -11&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  364.650814] Lustre: lustre-MDT0000: No usr space accounting support. Please consider running tunefs.lustre --quota on an unmounted filesystem to enable quota accounting.&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  364.656135] Lustre: lustre-MDT0000: No grp space accounting support. Please consider running tunefs.lustre --quota on an unmounted filesystem to enable quota accounting.&lt;br/&gt;
[  364.875511] LDISKFS-fs (loop1): mounted filesystem with ordered data mode. quota=off. Opts: &lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  364.875511] LDISKFS-fs (loop1): mounted filesystem with ordered data mode. quota=off. Opts: &lt;br/&gt;
[  365.137956] LustreError: 12167:0:(ofd_fs.c:254:ofd_groups_init()) groups file is corrupted? size = 4&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.137956] LustreError: 12167:0:(ofd_fs.c:254:ofd_groups_init()) groups file is corrupted? size = 4&lt;br/&gt;
[  365.144003] LustreError: 12167:0:(obd_config.c:572:class_setup()) setup lustre-OST0000 failed (-5)&lt;br/&gt;
[  365.146446] LustreError: 12167:0:(obd_config.c:1546:class_config_llog_handler()) MGC192.168.69.5@tcp: cfg command failed: rc = -5&lt;br/&gt;
[  365.150286] Lustre:    cmd=cf003 0:lustre-OST0000  1:dev  2:0  3:f  &lt;br/&gt;
[  365.152806] LustreError: 15c-8: MGC192.168.69.5@tcp: The configuration from log &apos;lustre-OST0000&apos; failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.&lt;br/&gt;
[  365.162154] LustreError: 12130:0:(obd_mount.c:1848:server_start_targets()) failed to start server lustre-OST0000: -5&lt;br/&gt;
[  365.165806] LustreError: 12130:0:(obd_mount.c:2400:server_fill_super()) Unable to start targets: -5&lt;br/&gt;
[  365.168976] LustreError: 12130:0:(obd_mount.c:1352:lustre_disconnect_osp()) Can&apos;t end config log lustre&lt;br/&gt;
[  365.172533] LustreError: 12130:0:(obd_mount.c:2114:server_put_super()) lustre-OST0000: failed to disconnect osp-on-ost (rc=-2)!&lt;br/&gt;
[  365.177472] LustreError: 12130:0:(obd_config.c:619:class_cleanup()) Device 13 not setup&lt;br/&gt;
[  365.180974] LustreError: 12130:0:(obd_mount.c:1420:lustre_stop_osp()) Can not find osp-on-ost lustre-MDT0000-osp-OST0000&lt;br/&gt;
[  365.184942] LustreError: 12130:0:(obd_mount.c:2159:server_put_super()) lustre-OST0000: Fail to stop osp-on-ost!&lt;br/&gt;
[  365.191390] Lustre: server umount lustre-OST0000 complete&lt;br/&gt;
[  365.193673] LustreError: 12130:0:(obd_mount.c:2988:lustre_fill_super()) Unable to mount /dev/loop1 (-5)&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.144003] LustreError: 12167:0:(obd_config.c:572:class_setup()) setup lustre-OST0000 failed (-5)&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.146446] LustreError: 12167:0:(obd_config.c:1546:class_config_llog_handler()) MGC192.168.69.5@tcp: cfg command failed: rc = -5&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.150286] Lustre:    cmd=cf003 0:lustre-OST0000  1:dev  2:0  3:f  &lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.152806] LustreError: 15c-8: MGC192.168.69.5@tcp: The configuration from log &apos;lustre-OST0000&apos; failed (-5). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.162154] LustreError: 12130:0:(obd_mount.c:1848:server_start_targets()) failed to start server lustre-OST0000: -5&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.165806] LustreError: 12130:0:(obd_mount.c:2400:server_fill_super()) Unable to start targets: -5&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.168976] LustreError: 12130:0:(obd_mount.c:1352:lustre_disconnect_osp()) Can&apos;t end config log lustre&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.172533] LustreError: 12130:0:(obd_mount.c:2114:server_put_super()) lustre-OST0000: failed to disconnect osp-on-ost (rc=-2)!&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.177472] LustreError: 12130:0:(obd_config.c:619:class_cleanup()) Device 13 not setup&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.180974] LustreError: 12130:0:(obd_mount.c:1420:lustre_stop_osp()) Can not find osp-on-ost lustre-MDT0000-osp-OST0000&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.184942] LustreError: 12130:0:(obd_mount.c:2159:server_put_super()) lustre-OST0000: Fail to stop osp-on-ost!&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.191390] Lustre: server umount lustre-OST0000 complete&lt;br/&gt;
Jan 11 11:15:44 rhel6-64 kernel: [  365.193673] LustreError: 12130:0:(obd_mount.c:2988:lustre_fill_super()) Unable to mount /dev/loop1 (-5)&lt;/p&gt;</description>
                <environment></environment>
        <key id="17149">LU-2606</key>
            <summary>lustre 2.4 unable to start on 2.1 disks</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="di.wang">Di Wang</assignee>
                                    <reporter username="shadow">Alexey Lyashkov</reporter>
                        <labels>
                            <label>HB</label>
                    </labels>
                <created>Fri, 11 Jan 2013 10:57:03 +0000</created>
                <updated>Mon, 27 Jul 2015 08:19:18 +0000</updated>
                            <resolved>Thu, 7 Feb 2013 10:40:54 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.4.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="50340" author="adilger" created="Fri, 11 Jan 2013 11:48:25 +0000"  >&lt;p&gt;I think that this problem will be fixed as soon as the next patches in the DNE series land, since the group file is no longer used?&lt;/p&gt;

&lt;p&gt;If that isn&apos;t the case, Di, can you make the code accept a 4-byte file and just treat it as a __u32 instead of a __u64?&lt;/p&gt;</comment>
                            <comment id="50347" author="bzzz" created="Fri, 11 Jan 2013 13:02:15 +0000"  >&lt;p&gt;A bit unexpected, because we do have a test for this case in conf-sanity.sh.&lt;/p&gt;</comment>
                            <comment id="50356" author="di.wang" created="Fri, 11 Jan 2013 14:36:14 +0000"  >&lt;p&gt;Yes, we would not need the group file anymore after that patch is landed. I just checked our disk2_1-ldiskfs.tar.bz2 (under tests); it seems the &lt;br/&gt;
LAST_GROUP file size is zero, which is why conf-sanity.sh did not catch this error. &lt;/p&gt;


&lt;p&gt;&amp;#91;root@testnode tests&amp;#93;# mount -t ldiskfs -o loop ./ost /mnt/mds1&lt;br/&gt;
&amp;#91;root@testnode tests&amp;#93;# ls /mnt/mds1/&lt;br/&gt;
CONFIGS  health_check  LAST_GROUP  last_rcvd  lost+found  O&lt;br/&gt;
&amp;#91;root@testnode tests&amp;#93;# ls /mnt/mds1/LAST_GROUP -l&lt;br/&gt;
-rwx------ 1 root root 0 Mar 14  2012 /mnt/mds1/LAST_GROUP&lt;br/&gt;
&amp;#91;root@testnode tests&amp;#93;# od -x /mnt/mds1/LAST_GROUP &lt;br/&gt;
0000000&lt;br/&gt;
&amp;#91;root@testnode tests&amp;#93;# stat /mnt/mds1/LAST_GROUP &lt;br/&gt;
  File: `/mnt/mds1/LAST_GROUP&apos;&lt;br/&gt;
  Size: 0         	Blocks: 0          IO Block: 4096   regular empty file&lt;br/&gt;
Device: 700h/1792d	Inode: 17          Links: 1&lt;br/&gt;
Access: (0700/-rwx------)  Uid: (    0/    root)   Gid: (    0/    root)&lt;br/&gt;
Access: 2013-10-28 05:59:59.362696108 -0700&lt;br/&gt;
Modify: 2012-03-14 23:16:54.069859989 -0700&lt;br/&gt;
Change: 2012-03-14 23:16:54.069859989 -0700&lt;/p&gt;


&lt;p&gt;Apparently, we need to make a better disk image here.&lt;/p&gt;</comment>
                            <comment id="50437" author="adilger" created="Mon, 14 Jan 2013 15:19:18 +0000"  >&lt;p&gt;Di, could you please make a patch for this, and also fix the test image at the same time.&lt;/p&gt;</comment>
                            <comment id="51064" author="di.wang" created="Wed, 23 Jan 2013 18:46:49 +0000"  >&lt;p&gt;this patch &lt;a href=&quot;http://review.whamcloud.com/#change,4325&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4325&lt;/a&gt; (already merged) should fix this problem. I will update the test image later.&lt;/p&gt;</comment>
                            <comment id="51074" author="shadow" created="Thu, 24 Jan 2013 00:20:59 +0000"  >&lt;p&gt;I will retest today.&lt;/p&gt;</comment>
                            <comment id="51498" author="adilger" created="Wed, 30 Jan 2013 19:10:22 +0000"  >&lt;p&gt;Shadow, any update from your testing of the patch?&lt;/p&gt;</comment>
                            <comment id="51878" author="jlevi" created="Wed, 6 Feb 2013 12:55:08 +0000"  >&lt;p&gt;Are there any updates on the test of this patch?&lt;/p&gt;</comment>
                            <comment id="51972" author="adilger" created="Thu, 7 Feb 2013 10:40:54 +0000"  >&lt;p&gt;Closing bug per Di&apos;s comment that the fix has been merged.&lt;/p&gt;</comment>
                            <comment id="122238" author="gerrit" created="Mon, 27 Jul 2015 08:19:18 +0000"  >&lt;p&gt;Alexander Boyko (alexander.boyko@seagate.com) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/15731&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/15731&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2606&quot; title=&quot;lustre 2.4 unable to start on 2.1 disks&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2606&quot;&gt;&lt;del&gt;LU-2606&lt;/del&gt;&lt;/a&gt; osp: add procfs values for OST reserved size&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: a69eed85d902e2b15a960430e4652fbcc3c0bc33&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Related</name>
                                            <outwardlinks description="is related to ">
                                        <issuelink>
            <issuekey id="14615">LU-1445</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvf0v:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>6079</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>