<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:42:51 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4450] Not able to mount mdt device</title>
                <link>https://jira.whamcloud.com/browse/LU-4450</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;I ran &lt;br/&gt;
tune2fs -O ^quota /dev/nbp7-vg/mdt7&lt;br/&gt;
tune2fs -O quota /dev/nbp7-vg/mdt7&lt;/p&gt;

&lt;p&gt;After which the mdt device is not mounting.&lt;/p&gt;

&lt;p&gt;nbp7-mds1 login: LDISKFS-fs (dm-1): recovery complete&lt;br/&gt;
LDISKFS-fs (dm-1): mounted filesystem with ordered data mode. quota=on. Opts: &lt;br/&gt;
LDISKFS-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended&lt;br/&gt;
LDISKFS-fs (dm-2): mounted filesystem with ordered data mode. quota=on. Opts: &lt;br/&gt;
LustreError: 137-5: nbp7-MDT0000_UUID: not available for connect from 10.151.51.135@o2ib (no target)&lt;br/&gt;
LustreError: 137-5: nbp7-MDT0000_UUID: not available for connect from 10.151.32.217@o2ib (no target)&lt;br/&gt;
LustreError: Skipped 3 previous similar messages&lt;br/&gt;
Lustre: nbp7-MDT0000: Not available for connect from 10.151.56.154@o2ib (not set up)&lt;br/&gt;
Lustre: nbp7-MDT0000: Not available for connect from 10.151.29.224@o2ib (not set up)&lt;br/&gt;
Lustre: Skipped 9 previous similar messages&lt;br/&gt;
Lustre: nbp7-MDT0000: used disk, loading&lt;br/&gt;
Lustre: 5049:0:(mdt_handler.c:4960:mdt_process_config()) For interoperability, skip this mdt.group_upcall. It is obsolete.&lt;br/&gt;
Lustre: 5049:0:(mdt_handler.c:4960:mdt_process_config()) For interoperability, skip this mdt.quota_type. It is obsolete.&lt;br/&gt;
LDISKFS-fs error (device dm-2): ldiskfs_mb_check_ondisk_bitmap: on-disk bitmap for group 20 corrupted: 2107 blocks free in bitmap, 2105 - in gd&lt;/p&gt;

&lt;p&gt;Aborting journal on device dm-2-8.&lt;br/&gt;
LDISKFS-fs (dm-2): Remounting filesystem read-only&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: IO failure&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_free_blocks: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_reserve_inode_write: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_truncate: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_reserve_inode_write: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_orphan_del: Journal has aborted&lt;br/&gt;
LDISKFS-fs error (device dm-2) in ldiskfs_reserve_inode_write: Journal has aborted&lt;br/&gt;
LustreError: 5049:0:(llog.c:159:llog_cancel_rec()) nbp7-OST0020-osc-MDT0000: fail to write header for llog #0x2:1#00000000: rc = -30&lt;br/&gt;
LustreError: 4960:0:(osd_handler.c:738:osd_trans_commit_cb()) transaction @0xffff880ff9a32ac0 commit error: 2&lt;br/&gt;
LustreError: 5049:0:(osp_sync.c:1031:osp_sync_init()) nbp7-OST0020-osc-MDT0000: can&apos;t initialize llog: rc = -30&lt;br/&gt;
LustreError: 5049:0:(obd_config.c:572:class_setup()) setup nbp7-OST0020-osc-MDT0000 failed (-30)&lt;br/&gt;
LustreError: 5049:0:(obd_config.c:1550:class_config_llog_handler()) MGC10.151.27.38@o2ib: cfg command failed: rc = -30&lt;br/&gt;
Lustre:    cmd=cf003 0:nbp7-OST0020-osc-MDT0000  1:nbp7-OST0020_UUID  2:10.151.27.45@o2ib  &lt;br/&gt;
LustreError: 15c-8: MGC10.151.27.38@o2ib: The configuration from log &apos;nbp7-MDT0000&apos; failed (-30). This may be the result of communication errors between this node and the MGS, a b.&lt;br/&gt;
LustreError: 4950:0:(obd_mount_server.c:1253:server_start_targets()) failed to start server nbp7-MDT0000: -30&lt;br/&gt;
LustreError: 4950:0:(obd_mount_server.c:1695:server_fill_super()) Unable to start targets: -30&lt;br/&gt;
LustreError: 4950:0:(obd_mount_server.c:844:lustre_disconnect_lwp()) nbp7-MDT0000-lwp-MDT0000: Can&apos;t end config log nbp7-client.&lt;br/&gt;
LustreError: 4950:0:(obd_mount_server.c:1422:server_put_super()) nbp7-MDT0000: failed to disconnect lwp. (rc=-2)&lt;br/&gt;
Lustre: Failing over nbp7-MDT0000&lt;br/&gt;
Lustre: nbp7-MDT0000: Not available for connect from 10.151.43.177@o2ib (stopping)&lt;br/&gt;
Lustre: Skipped 151 previous similar messages&lt;br/&gt;
LustreError: 137-5: nbp7-MDT0000_UUID: not available for connect from 10.151.32.62@o2ib (no target)&lt;br/&gt;
LustreError: Skipped 81 previous similar messages&lt;br/&gt;
VFS: cannot write quota structure on device dm-2 (error -30). Quota may get out of sync!&lt;br/&gt;
VFS: cannot write quota structure on device dm-2 (error -30). Quota may get out of sync!&lt;br/&gt;
LDISKFS-fs error (device dm-2): ldiskfs_put_super: Couldn&apos;t clean up the journal&lt;br/&gt;
Lustre: server umount nbp7-MDT0000 complete&lt;/p&gt;

&lt;p&gt;Please advise on the next step.&lt;/p&gt;</description>
                <environment></environment>
        <key id="22654">LU-4450</key>
            <summary>Not able to mount mdt device</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="mhanafi">Mahmoud Hanafi</reporter>
                        <labels>
                    </labels>
                <created>Tue, 7 Jan 2014 17:52:01 +0000</created>
                <updated>Tue, 14 Jan 2014 02:14:28 +0000</updated>
                            <resolved>Tue, 14 Jan 2014 02:14:28 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="74486" author="mhanafi" created="Tue, 7 Jan 2014 18:10:08 +0000"  >&lt;p&gt;Should we go ahead and run fsck?&lt;/p&gt;

&lt;p&gt;Dry-run fsck output follows:&lt;br/&gt;
nbp7-mds1 ~ # e2fsck -v -n /dev/nbp7-vg/mdt7 &lt;br/&gt;
e2fsck 1.42.7.wc1 (12-Apr-2013)&lt;br/&gt;
Warning: skipping journal recovery because doing a read-only filesystem check.&lt;br/&gt;
nbp7-MDT0000 contains a file system with errors, check forced.&lt;br/&gt;
Pass 1: Checking inodes, blocks, and sizes&lt;br/&gt;
Deleted inode 223551113 has zero dtime.  Fix? no&lt;/p&gt;

&lt;p&gt;Pass 2: Checking directory structure&lt;br/&gt;
Pass 3: Checking directory connectivity&lt;br/&gt;
Pass 4: Checking reference counts&lt;br/&gt;
Pass 5: Checking group summary information&lt;br/&gt;
Block bitmap differences:  -714837 -255423491&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #0 (2880, counted=2888).&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #20 (2105, counted=2107).&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #15590 (16382, counted=16384).&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #15595 (16383, counted=16384).&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #15610 (16383, counted=16384).&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Free blocks count wrong for group #15612 (16383, counted=16384).&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Free blocks count wrong (197822759, counted=197822774).&lt;br/&gt;
Fix? no&lt;/p&gt;

&lt;p&gt;Inode bitmap differences:  -223551113&lt;br/&gt;
Fix? no&lt;/p&gt;


&lt;p&gt;nbp7-MDT0000: ********** WARNING: Filesystem still has errors **********&lt;/p&gt;


&lt;p&gt;    53143594 inodes used (9.90%, out of 536870912)&lt;br/&gt;
         542 non-contiguous files (0.0%)&lt;br/&gt;
       35450 non-contiguous directories (0.1%)&lt;/p&gt;
&lt;p&gt;# of inodes with ind/dind/tind blocks: 8529/37/0&lt;br/&gt;
    70612697 blocks used (26.31%, out of 268435456)&lt;br/&gt;
           0 bad blocks&lt;br/&gt;
        8072 large files&lt;/p&gt;


&lt;p&gt;    51906395 regular files&lt;br/&gt;
      437044 directories&lt;br/&gt;
           0 character device files&lt;br/&gt;
           0 block device files&lt;br/&gt;
           0 fifos&lt;br/&gt;
        1961 links&lt;br/&gt;
      800143 symbolic links (360530 fast symbolic links)&lt;br/&gt;
           2 sockets&lt;br/&gt;
------------&lt;br/&gt;
    53145545 files&lt;/p&gt;
</comment>
                            <comment id="74489" author="green" created="Tue, 7 Jan 2014 18:18:44 +0000"  >&lt;p&gt;Yes, safe to run e2fsck.&lt;/p&gt;</comment>
                            <comment id="74495" author="mhanafi" created="Tue, 7 Jan 2014 18:55:32 +0000"  >&lt;p&gt;Ran e2fsck; it fixed the issue. Close the case.&lt;/p&gt;</comment>
                            <comment id="74510" author="pjones" created="Tue, 7 Jan 2014 20:10:05 +0000"  >&lt;p&gt;Thanks Mahmoud. Before we close this ticket, we should probably assess whether there is some kind of bug in tune2fs that we should address.&lt;/p&gt;

&lt;p&gt;Niu, could you please comment on this?&lt;/p&gt;</comment>
                            <comment id="74540" author="niu" created="Wed, 8 Jan 2014 02:31:58 +0000"  >&lt;p&gt;The &quot;tune2fs -O quota&quot; command just uses standard interfaces to unlink, create, and write the quota files; it&apos;s unlikely that it could corrupt the block bitmap.&lt;/p&gt;</comment>
                            <comment id="74615" author="jfc" created="Wed, 8 Jan 2014 22:34:27 +0000"  >&lt;p&gt;Can I mark this as resolved? Thanks.&lt;/p&gt;</comment>
                            <comment id="74622" author="mhanafi" created="Wed, 8 Jan 2014 23:36:57 +0000"  >&lt;p&gt;I think so. The block bitmap errors were most likely there before we ran the quota options.&lt;/p&gt;</comment>
                            <comment id="74888" author="pjones" created="Tue, 14 Jan 2014 02:14:28 +0000"  >&lt;p&gt;ok thanks Mahmoud&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwcbr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>12200</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10023"><![CDATA[4]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>