Lustre / LU-6096

sanity test_17m: e2fsck Inode 32775, i_size is 0, should be 4096

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Blocker
    • Fix Version: Lustre 2.8.0
    • Affects Version: Lustre 2.7.0
    • 3
    • 16971

    Description

      This issue was created by maloo for Bob Glossman <bob.glossman@intel.com>

      This issue relates to the following test suite run of review-ldiskfs: https://testing.hpdd.intel.com/test_sets/277e606e-976d-11e4-bafa-5254006e85c2.

      I note that several other recent similar failures have been marked as LU-3534.
      As I'm unsure of the reasoning for that, and this one is seen on EL7, I've raised it as a new issue.
      Someone more expert may decide it's a duplicate after looking it over.

      The sub-test test_17m failed with the following error:

      e2fsck -fnvd /dev/lvm-Role_MDS/P1
      e2fsck 1.42.12.wc1 (15-Sep-2014)
      shadow-26vm8: check_blocks:2814: increase inode 32775 badness 0 to 1
      shadow-26vm8: check_blocks:2814: increase inode 32776 badness 0 to 1
      Pass 1: Checking inodes, blocks, and sizes
      Inode 32775, i_size is 0, should be 4096.  Fix? no
      
      Inode 32776, i_size is 0, should be 4096.  Fix? no
      

      Please provide additional information about the failure here.

      Info required for matching: sanity 17m
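
      The e2fsck complaint above means a directory inode has an allocated 4 KiB data block but an on-disk i_size of 0; Pass 1 cross-checks the recorded size against the allocated blocks and expects them to agree for directories. The sketch below is an illustrative model of that check (assumptions: it is not e2fsck's actual source, and it hardcodes the 4096-byte block size used on these test filesystems):

```python
# Illustrative model (not e2fsck source) of the Pass 1 size check that
# produces "i_size is 0, should be 4096": for a directory, the recorded
# i_size is expected to cover every allocated data block, so a directory
# with one 4 KiB block and i_size == 0 is flagged.

BLOCK_SIZE = 4096  # assumed ldiskfs block size on these test filesystems


def expected_dir_size(allocated_blocks: int) -> int:
    """Expected i_size for a directory with this many data blocks."""
    return allocated_blocks * BLOCK_SIZE


def check_dir_inode(ino: int, i_size: int, allocated_blocks: int):
    """Return an e2fsck-style complaint string if i_size disagrees."""
    want = expected_dir_size(allocated_blocks)
    if i_size != want:
        return f"Inode {ino}, i_size is {i_size}, should be {want}."
    return None


# The failing inodes from this report: one data block, i_size left at 0.
print(check_dir_inode(32775, 0, 1))     # Inode 32775, i_size is 0, should be 4096.
print(check_dir_inode(32776, 4096, 1))  # None (a consistent inode passes)
```

      Running e2fsck with -n, as the test harness does, only reports the mismatch without fixing it, which is why each prompt shows "Fix? no".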

      Attachments

      Issue Links

      Activity

            pjones Peter Jones added a comment -

            Landed for 2.8. Resolving ticket. If this ever crops up on SLES12 we can always open a ticket for it then.

            gerrit Gerrit Updater added a comment -

            Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/15581/
            Subject: LU-6096 ldiskfs: mark dir's inode dirty
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: bad49e39e301d4367eaead5ee566f5dcacfde8f6

            bogl Bob Glossman (Inactive) added a comment -

            James, I have never seen it on SLES12, only on EL7. I can't promise it's not in SLES12 too.

            simmonsja James A Simmons added a comment -

            Bob, is this also a problem for SLES12?

            gerrit Gerrit Updater added a comment -

            Alex Zhuravlev (alexey.zhuravlev@intel.com) uploaded a new patch: http://review.whamcloud.com/15581
            Subject: LU-6096 osd: mark dir's inode dirty
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: d62ecb63fb898caa8c83f99ea325d5ad3fb96e00
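
            The patch subject, "mark dir's inode dirty", suggests the failure mode: the directory's i_size was updated in memory but the inode was never flagged dirty, so write-back skipped it and the on-disk copy kept i_size 0. The toy model below illustrates that pattern (assumptions: this is a simplified sketch, not the Lustre/ldiskfs code; the classes and names are invented for illustration):

```python
# Toy model of dirty-flag-driven write-back: only inodes marked dirty are
# flushed, so an in-memory i_size update that skips the flag never reaches
# the on-disk inode table. All names here are illustrative, not kernel APIs.

class Inode:
    def __init__(self, ino: int, i_size: int = 0):
        self.ino = ino
        self.i_size = i_size
        self.dirty = False


class DiskImage:
    """Stands in for the on-disk inode table."""
    def __init__(self):
        self.on_disk = {}

    def writeback(self, inodes):
        # Mirrors kernel write-back: only dirty inodes get flushed.
        for inode in inodes:
            if inode.dirty:
                self.on_disk[inode.ino] = inode.i_size
                inode.dirty = False


def grow_dir_buggy(inode, new_size):
    inode.i_size = new_size  # updated in memory only
    # BUG: no mark_inode_dirty() equivalent here


def grow_dir_fixed(inode, new_size):
    inode.i_size = new_size
    inode.dirty = True       # the fix: mark the inode dirty


disk = DiskImage()
a, b = Inode(32775), Inode(32776)
disk.on_disk = {a.ino: 0, b.ino: 0}

grow_dir_buggy(a, 4096)
grow_dir_fixed(b, 4096)
disk.writeback([a, b])
print(disk.on_disk[32775])  # 0 -> e2fsck later sees "i_size is 0, should be 4096"
print(disk.on_disk[32776])  # 4096 -> consistent after the fix
```

            This would also explain why only an offline e2fsck notices: the in-memory inode looks correct while the filesystem is mounted.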

            bzzz Alex Zhuravlev added a comment -

            I'm trying to reproduce this locally; it will take some time.

            adilger Andreas Dilger added a comment -

            Alex, any comment on this? Is it possible the filesystem is being unmounted without flushing those updates to disk?

            adilger Andreas Dilger added a comment -

            It looks like this is still being hit. The symptom is slightly different: instead of only the PENDING and .lustre/fid directories being affected, a larger number of directories are having problems:

            onyx-41vm7: e2fsck 1.42.12.wc1 (15-Sep-2014)
            onyx-41vm7: check_blocks:2814: increase inode 139 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 140 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 142 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 143 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 145 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 146 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 148 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524394 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524397 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524402 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524405 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524408 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524411 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524416 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524421 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524424 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524429 badness 0 to 1
            onyx-41vm7: check_blocks:2814: increase inode 524433 badness 0 to 1
            Pass 1: Checking inodes, blocks, and sizes
            Inode 139, i_size is 0, should be 4096.  Fix? no
            Inode 140, i_size is 0, should be 4096.  Fix? no
            Inode 142, i_size is 0, should be 4096.  Fix? no
            Inode 143, i_size is 0, should be 4096.  Fix? no
            Inode 145, i_size is 0, should be 4096.  Fix? no
            Inode 146, i_size is 0, should be 4096.  Fix? no
            Inode 148, i_size is 0, should be 4096.  Fix? no
            Inode 524394, i_size is 0, should be 4096.  Fix? no
            Inode 524397, i_size is 0, should be 4096.  Fix? no
            Inode 524402, i_size is 0, should be 4096.  Fix? no
            Inode 524405, i_size is 0, should be 4096.  Fix? no
            Inode 524408, i_size is 0, should be 4096.  Fix? no
            Inode 524411, i_size is 0, should be 4096.  Fix? no
            Inode 524416, i_size is 0, should be 4096.  Fix? no
            Inode 524421, i_size is 0, should be 4096.  Fix? no
            Inode 524424, i_size is 0, should be 4096.  Fix? no
            Inode 524429, i_size is 0, should be 4096.  Fix? no
            Inode 524433, i_size is 0, should be 4096.  Fix? no
            

            It still always seems related to EL7 though.

            sarah Sarah Liu added a comment -

            Hit this error again on build #3071 in EL7 DNE mode: https://testing.hpdd.intel.com/test_sets/683aadc2-1311-11e5-8d21-5254006e85c2

            adilger Andreas Dilger added a comment -

            Patch landed to master for 2.8.0.

            People

              Assignee: bzzz Alex Zhuravlev
              Reporter: maloo Maloo
              Votes: 0
              Watchers: 13
