<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:31:18 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3138] E2fsck sees some &quot;Deleted inode 14 has zero dtime.  Fix? no&quot; after upgrading from 1.8 to 2.4</title>
                <link>https://jira.whamcloud.com/browse/LU-3138</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;I found this problem while I was trying to fix a DNE problem, but it turns out this problem exists on a single MDT as well. It is easy to reproduce with this script:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;
LOAD=y sh llmount.sh
tar -jxvf disk1_8-ldiskfs.tar.bz2
cp ./mdt /tmp/lustre-mdt1
cp ./ost /tmp/lustre-ost1
../utils/tunefs.lustre --writeconf --mgsnode=testnode /tmp/lustre-mdt1
e2fsck -fnvd /tmp/lustre-mdt1
../utils/tunefs.lustre --writeconf --mgsnode=testnode /tmp/lustre-ost1
mount -t lustre -o loop /tmp/lustre-mdt1 /mnt/mds1
mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1
mount -t lustre testnode:/t32fs  /mnt/lustre
echo sleep 5 seconds
sleep 5
umount /mnt/lustre
umount /mnt/ost1
umount /mnt/mds1

e2fsck -fnvd /tmp/lustre-mdt1
sh llmountcleanup.sh
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here is the e2fsck result:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;e2fsck -fnvd /tmp/lustre-mdt1
+ e2fsck -fnvd /tmp/lustre-mdt1
e2fsck 1.42.3.wc3 (15-Aug-2012)
Pass 1: Checking inodes, blocks, and sizes
Deleted inode 14 has zero dtime.  Fix? no

Deleted inode 15 has zero dtime.  Fix? no

Deleted inode 16 has zero dtime.  Fix? no

Deleted inode 20 has zero dtime.  Fix? no

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  -(16104--16112) -(16115--16119)
Fix? no

Inode bitmap differences:  -(14--16) -20
Fix? no


t32fs-MDT0000: ********** WARNING: Filesystem still has errors **********


     756 inodes used (0.76%)
       6 non-contiguous files (0.8%)
       0 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
   16955 blocks used (33.91%)
       0 bad blocks
       1 large file

     181 regular files
      56 directories
       0 character device files
       0 block device files
       0 fifos
       6 links
     506 symbolic links (506 fast symbolic links)
       0 sockets
--------
     749 files
sh llmountcleanup.sh
+ sh llmountcleanup.sh
Stopping clients: testnode /mnt/lustre (opts:-f)
Stopping clients: testnode /mnt/lustre2 (opts:-f)
modules unloaded.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;These inodes turn out to be the old config logs:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;/mnt/mds1/CONFIGS:
13 mountdata  14 t32fs-client  15 t32fs-MDT0000  20 t32fs-OST0000  16 t32fs-params
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt; </description>
                <environment></environment>
        <key id="18305">LU-3138</key>
                <summary>E2fsck sees some &quot;Deleted inode 14 has zero dtime.  Fix? no&quot; after upgrading from 1.8 to 2.4</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="di.wang">Di Wang</assignee>
                                    <reporter username="di.wang">Di Wang</reporter>
                        <labels>
                            <label>LB</label>
                    </labels>
                <created>Tue, 9 Apr 2013 18:41:46 +0000</created>
                <updated>Mon, 22 Apr 2013 16:17:36 +0000</updated>
                            <resolved>Mon, 22 Apr 2013 16:17:36 +0000</resolved>
                                    <version>Lustre 2.4.0</version>
                                    <fixVersion>Lustre 2.4.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>8</watches>
                                                                            <comments>
                            <comment id="56027" author="adilger" created="Wed, 10 Apr 2013 17:26:52 +0000"  >&lt;p&gt;Is this a problem with nlinks on these files (i.e. they are accidentally being unlinked), or are they intentionally being unlinked and the only problem is that osd-ldiskfs is not setting dtime on the unlinked inodes for some reason?  I recall we had some problems with link counts for local objects, is there possibly already a patch for this?&lt;/p&gt;</comment>
                            <comment id="56044" author="pjones" created="Wed, 10 Apr 2013 18:44:36 +0000"  >&lt;p&gt;Emoly&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="56057" author="emoly.liu" created="Thu, 11 Apr 2013 02:15:53 +0000"  >&lt;p&gt;I will have a check.&lt;/p&gt;</comment>
                            <comment id="56067" author="emoly.liu" created="Thu, 11 Apr 2013 06:26:46 +0000"  >&lt;p&gt;I can&apos;t reproduce this problem with the test script. The test output is:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@centos6-3 tests]# sh 3138_tests.sh 
++ hostname
+ testnode=centos6-3
+ LOAD=y
+ sh llmount.sh
Loading modules from /root/master/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: &apos;cpu_npartitions=2&apos;
debug=vfstrace rpctrace dlmtrace neterror ha config ioctl super
subsystem_debug=all -lnet -lnd -pinger
gss/krb5 is not supported
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
+ tar -jxvf disk1_8-ldiskfs.tar.bz2
arch
bspace
commit
ispace
kernel
list
mdt
ost
sha1sums
+ cp -f ./mdt /tmp/lustre-mdt1
+ cp -f ./ost /tmp/lustre-ost1
+ ../utils/tunefs.lustre --writeconf --mgsnode=centos6-3 /tmp/lustre-mdt1
checking for existing Lustre data: found
Reading CONFIGS/mountdata

   Read previous values:
Target:     t32fs-MDT0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x5
              (MDT MGS )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr,acl
Parameters: sys.timeout=20 lov.stripesize=1048576 lov.stripecount=0


   Permanent disk data:
Target:     t32fs=MDT0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x105
              (MDT MGS writeconf )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr,acl
Parameters: sys.timeout=20 lov.stripesize=1048576 lov.stripecount=0 mgsnode=10.211.55.7@tcp

Writing CONFIGS/mountdata
+ e2fsck -fnvd /tmp/lustre-mdt1
e2fsck 1.42.6.wc2 (10-Dec-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

         625 inodes used (0.62%, out of 100000)
           6 non-contiguous files (1.0%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
       16741 blocks used (33.48%, out of 50000)
           0 bad blocks
           1 large file

          95 regular files
          15 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
         506 symbolic links (506 fast symbolic links)
           0 sockets
------------
         616 files
+ ../utils/tunefs.lustre --writeconf --mgsnode=centos6-3 /tmp/lustre-ost1
checking for existing Lustre data: found
Reading CONFIGS/mountdata

   Read previous values:
Target:     t32fs-OST0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x2
              (OST )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: sys.timeout=20 mgsnode=192.168.203.129@tcp


   Permanent disk data:
Target:     t32fs=OST0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x102
              (OST writeconf )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: sys.timeout=20 mgsnode=192.168.203.129@tcp mgsnode=10.211.55.7@tcp

Writing CONFIGS/mountdata
+ mount -t lustre -o loop /tmp/lustre-mdt1 /mnt/mds1
+ mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1
+ mount -t lustre centos6-3:/t32fs /mnt/lustre
+ echo sleep 5 seconds
sleep 5 seconds
+ sleep 5
+ umount /mnt/lustre
+ umount /mnt/ost1
+ umount /mnt/mds1
+ e2fsck -fnvd /tmp/lustre-mdt1
e2fsck 1.42.6.wc2 (10-Dec-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

         751 inodes used (0.75%, out of 100000)
           6 non-contiguous files (0.8%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
       16938 blocks used (33.88%, out of 50000)
           0 bad blocks
           1 large file

         180 regular files
          56 directories
           0 character device files
           0 block device files
           0 fifos
           6 links
         506 symbolic links (506 fast symbolic links)
           0 sockets
------------
         748 files
+ sh llmountcleanup.sh
Stopping clients: centos6-3 /mnt/lustre (opts:-f)
Stopping clients: centos6-3 /mnt/lustre2 (opts:-f)
modules unloaded.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The top commit log of my working branch is &quot;2fede8c &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3026&quot; title=&quot;Failure on test suite sanity-benchmark test_iozone&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3026&quot;&gt;&lt;del&gt;LU-3026&lt;/del&gt;&lt;/a&gt; llite: setattr to override permission check for owner&quot;. Perhaps, as Andreas said, a patch for this has already &quot;landed&quot;?&lt;/p&gt;</comment>
                            <comment id="56069" author="emoly.liu" created="Thu, 11 Apr 2013 07:10:57 +0000"  >&lt;p&gt;I fetch the latest commit &quot;9a01e2b &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3000&quot; title=&quot;sanity 27u: 1000 objects created on OST-0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3000&quot;&gt;&lt;del&gt;LU-3000&lt;/del&gt;&lt;/a&gt;&quot; and still can&apos;t reproduce this problem.&lt;/p&gt;

&lt;p&gt;Wangdi, could you please update your master branch and see if this problem still exists? Thanks.&lt;/p&gt;</comment>
                            <comment id="56284" author="di.wang" created="Mon, 15 Apr 2013 01:28:06 +0000"  >&lt;p&gt;Hmm, problem is still there in my local tests with current master, though it can not be reproduced every time, maybe you can try wait 10 seconds after mount client? &lt;br/&gt;
Andreas, yes, these logs are supposed to be removed during mount process, if we add --writeconf by tunefs or mount -o.  So Emoly, you probably needs to check anything wrong in mgs_erase_log, IMHO. Thanks.&lt;/p&gt;</comment>
                            <comment id="56372" author="emoly.liu" created="Tue, 16 Apr 2013 02:22:28 +0000"  >&lt;p&gt;As suggested, I used 2 MDTs, sleeped 10 seconds before umount, changed e2fsprogs to 1.42.3.wc3 (15-Aug-2012) and ran 10 times, but still can&apos;t reproduce it.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@centos6-3 tests]# sh 3138_tests.sh  
++ hostname
+ testnode=centos6-3
+ wait_sec=10
+ LOAD=y
+ sh llmount.sh
Loading modules from /root/master/lustre/tests/..
detected 2 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: &apos;cpu_npartitions=2&apos;
debug=vfstrace rpctrace dlmtrace neterror ha config ioctl super
subsystem_debug=all -lnet -lnd -pinger
gss/krb5 is not supported
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
+ tar -jxvf disk1_8-ldiskfs.tar.bz2
arch
bspace
commit
ispace
kernel
list
mdt
ost
sha1sums
+ cp -f ./mdt /tmp/lustre-mdt1
+ cp -f ./ost /tmp/lustre-ost1
+ ../utils/tunefs.lustre --writeconf --mgsnode=centos6-3 /tmp/lustre-mdt1
checking for existing Lustre data: found
Reading CONFIGS/mountdata

   Read previous values:
Target:     t32fs-MDT0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x5
              (MDT MGS )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr,acl
Parameters: sys.timeout=20 lov.stripesize=1048576 lov.stripecount=0


   Permanent disk data:
Target:     t32fs=MDT0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x105
              (MDT MGS writeconf )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr,acl
Parameters: sys.timeout=20 lov.stripesize=1048576 lov.stripecount=0 mgsnode=10.211.55.7@tcp

Writing CONFIGS/mountdata
+ e2fsck -fnvd /tmp/lustre-mdt1
e2fsck 1.42.3.wc3 (15-Aug-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

     625 inodes used (0.62%)
       6 non-contiguous files (1.0%)
       0 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
   16741 blocks used (33.48%)
       0 bad blocks
       1 large file

      95 regular files
      15 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
     506 symbolic links (506 fast symbolic links)
       0 sockets
--------
     616 files
+ ../utils/tunefs.lustre --writeconf --mgsnode=centos6-3 /tmp/lustre-ost1
checking for existing Lustre data: found
Reading CONFIGS/mountdata

   Read previous values:
Target:     t32fs-OST0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x2
              (OST )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: sys.timeout=20 mgsnode=192.168.203.129@tcp


   Permanent disk data:
Target:     t32fs=OST0000
Index:      0
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x102
              (OST writeconf )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: sys.timeout=20 mgsnode=192.168.203.129@tcp mgsnode=10.211.55.7@tcp

Writing CONFIGS/mountdata
+ ../utils/mkfs.lustre --reformat --mgsnode=centos6-3 --mdt --index 1 --fsname=t32fs --device-size=104800 /tmp/lustre-mdt2

   Permanent disk data:
Target:     t32fs:MDT0001
Index:      1
Lustre FS:  t32fs
Mount type: ldiskfs
Flags:      0x61
              (MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=10.211.55.7@tcp

formatting backing filesystem ldiskfs on /dev/loop0
	target name  t32fs:MDT0001
	4k blocks     26200
	options        -I 512 -i 2048 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F
mkfs_cmd = mke2fs -j -b 4096 -L t32fs:MDT0001  -I 512 -i 2048 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F /dev/loop0 26200
Writing CONFIGS/mountdata
+ mount -t lustre -o loop /tmp/lustre-mdt1 /mnt/mds1
+ mount -t lustre -o loop /tmp/lustre-mdt2 /mnt/mds2
+ mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1
+ mount -t lustre centos6-3:/t32fs /mnt/lustre
+ echo sleep 10 seconds
sleep 10 seconds
+ sleep 10
+ umount /mnt/lustre
+ umount /mnt/ost1
+ umount /mnt/mds1
+ umount /mnt/mds2
+ e2fsck -fnvd /tmp/lustre-mdt1
e2fsck 1.42.3.wc3 (15-Aug-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

     752 inodes used (0.75%)
       6 non-contiguous files (0.8%)
       0 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
   16942 blocks used (33.88%)
       0 bad blocks
       1 large file

     180 regular files
      57 directories
       0 character device files
       0 block device files
       0 fifos
       7 links
     506 symbolic links (506 fast symbolic links)
       0 sockets
--------
     750 files
+ sh llmountcleanup.sh
Stopping clients: centos6-3 /mnt/lustre (opts:-f)
Stopping clients: centos6-3 /mnt/lustre2 (opts:-f)
modules unloaded.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="56440" author="di.wang" created="Wed, 17 Apr 2013 05:18:47 +0000"  >&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/6072&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/6072&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="56701" author="pjones" created="Mon, 22 Apr 2013 16:17:36 +0000"  >&lt;p&gt;Landed for 2.43&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvnhr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7618</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>