[LU-11578] ldiskfs_map_blocks: comm mdt00_100: lblock 0 mapped to illegal pblock

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Affects Version: Lustre 2.10.4
    • Environment: CentOS 7.4
    • Severity: 3

    Description

      We hit the following issue today on Oak's MDT0. We added another MDT (MDT1) a few days ago; it was still empty, so today I started to lfs migrate a test directory from MDT0 to MDT1, and that is when this happened on MDT0. lfs migrate actually worked for a while (about 40k inodes were migrated) until MDT0 did this:

      Oct 26 17:26:13 oak-md1-s2 kernel: LDISKFS-fs error (device dm-0): ldiskfs_map_blocks:594: inode #659619751: block 774843950: comm mdt00_100: lblock 0 mapped to illegal pblock (length 1)
      Oct 26 17:26:13 oak-md1-s2 kernel: Aborting journal on device dm-0-8.
      Oct 26 17:26:13 oak-md1-s2 kernel: LustreError: 3844:0:(osd_handler.c:1586:osd_trans_commit_cb()) transaction @0xffff881b024c6a80 commit error: 2
      Oct 26 17:26:13 oak-md1-s2 kernel: LDISKFS-fs (dm-0): Remounting filesystem read-only
      
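      For reference, a directory migration between MDTs is typically started with a command like the sketch below; the directory path is a placeholder, and this is not necessarily the exact invocation used here:

      # run on a Lustre client: -m 1 requests that the directory and the
      # entries beneath it be migrated to the MDT with index 1
      client# lfs migrate -m 1 /oak/path/to/test_dir
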

      I performed an fsck with the new e2fsprogs-1.44.3.wc1-0.el7.x86_64, but I'm not sure it fixed our issue, as I don't see any reference to inode 659619751 in its output. I'm attaching the full fsck logs.
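
      For reference, a typical offline check with the wc-patched e2fsprogs looks like the sketch below; the device name is the one shown in the debugfs session later in this ticket, and the MDT target is assumed to be stopped/unmounted first:

      # force a full check of the ldiskfs MDT device even if it is marked
      # clean (-f); a read-only dry run can be done first with -n
      mds# e2fsck -f /dev/mapper/md1-rbod1-mdt0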

      Fortunately, MDT0 then restarted without problems, but I haven't touched the partially migrated directory for now, to avoid further issues on a Friday night on this production system. My feeling is that lfs migrate hit some migrated inode that had somehow become corrupted on MDT0. The system had been working fine for weeks (we only have the changelog_clear errors reported in LU-11205), so I assume this is due to my running lfs migrate. I can do more troubleshooting next week. Any recommendation on how to avoid this in the future is welcome, thanks much!

      Stephane
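
      A low-risk way to check the state of the partially migrated entries before resuming would be to ask which MDT a given file's metadata lives on. A minimal sketch, with a hypothetical path:

      # print the index of the MDT holding this file's inode
      client# lfs getstripe -m /oak/path/to/test_dir/somefile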

    Attachments

    Issue Links

    Activity

            ys Yang Sheng added a comment -

            Closing as a duplicate of LU-10581.

            ys Yang Sheng added a comment -

            Hi, Stephane,

            Looks like it is similar to LU-10581: some code decrements nlink and then fails partway through but leaves the inode as-is, so the cached inode is inconsistent with the on-disk data. I'll try to dig out where it occurred. Thanks for your help.

            Thanks,
            YangSheng

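            One way to see this kind of inconsistency is to compare the on-disk link count with the cached one: debugfs prints the Links: field straight from disk, while the crash session later in this ticket shows i_nlink of the in-memory inode. A minimal read-only sketch, reusing the inode number and device that appear elsewhere in this ticket:

            # dump the on-disk inode by number (angle brackets select an inode
            # rather than a path) and pick out its link count
            mds# debugfs -R 'stat <564154962>' /dev/mapper/md1-rbod1-mdt0 | grep -i links
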

            sthiell Stephane Thiell added a comment -

            Hi YangSheng!

            Results:

            [root@oak-md1-s2 ~]# debugfs /dev/mapper/md1-rbod1-mdt0
            debugfs 1.44.3.wc1 (23-July-2018)
            debugfs:  ncheck 564154962
            Inode	Pathname
            564154962	/REMOTE_PARENT_DIR/0x200002b64:0x65a2:0x0/openssl-1.0.2l-0/ssl/cacert.pem
            debugfs:  stat /REMOTE_PARENT_DIR/0x200002b64:0x65a2:0x0/openssl-1.0.2l-0/ssl/cacert.pem
            Inode: 564154962   Type: regular    Mode:  0640   Flags: 0x0
            Generation: 529132813    Version: 0x00000005:44289388
            User: 282232   Group:  3199   Size: 0
            File ACL: 0
            Links: 1   Blockcount: 0
            Fragment:  Address: 0    Number: 0    Size: 0
             ctime: 0x59790e8c:00000000 -- Wed Jul 26 14:50:04 2017
             atime: 0x597945cf:00000000 -- Wed Jul 26 18:45:51 2017
             mtime: 0x59270fc5:00000000 -- Thu May 25 10:09:25 2017
            crtime: 0x59790e35:e633c9d8 -- Wed Jul 26 14:48:37 2017
            Size of extra inode fields: 28
            Extended attributes:
              system.posix_acl_access (28) = 01 00 00 00 01 00 06 00 04 00 07 00 08 00 07 00 7f 0c 00 00 10 00 04 00 20 00 00 00 
              trusted.lma (24) = 00 00 00 00 00 00 00 00 64 2b 00 00 02 00 00 00 11 ce 00 00 00 00 00 00 
              lma: fid=[0x200002b64:0xce11:0x0] compat=0 incompat=0
              trusted.lov (56)
              trusted.link (80)
            BLOCKS:
            
            debugfs:
            

            On a client:

            [root@oak-gw02 ~]# lfs fid2path /oak [0x200002b64:0xce11:0x0]
            /oak/stanford/groups/ruthm/sthiell/anaconda2.off/pkgs/openssl-1.0.2l-0/ssl/cacert.pem
            /oak/stanford/groups/ruthm/sthiell/anaconda2.off/ssl/cacert.pem
            [root@oak-gw02 ~]# 
            


            ys Yang Sheng added a comment -

            Hi, Stephane,

            I have extracted some data from the vmcore, shown below:

            crash> struct osd_object ffff88202b660400 -x
            struct osd_object {
              oo_dt = {
                do_lu = {
                  lo_header = 0xffff881c939377b0,
                  lo_dev = 0xffff882027e22000,
                  lo_ops = 0xffffffffc1058060 <osd_lu_obj_ops>,
                  lo_linkage = {
                    next = 0xffff881c939377f0,
                    prev = 0xffff881c87ad52c8
                  },
                  lo_dev_ref = {<No data fields>}
                },
                do_ops = 0xffffffffc1057f60 <osd_obj_ops>,
                do_body_ops = 0xffffffffc1059b20 <osd_body_ops>,
                do_index_ops = 0x0
              },
              oo_inode = 0xffff88125f9d9a38,
              oo_hl_head = 0xffff881c87497200,
              oo_ext_idx_sem = {
                {
                  count = {
                    counter = 0x0
                  },
                  __UNIQUE_ID_rh_kabi_hide3 = {
                    count = 0x0
                  },
                  {<No data fields>}
                },
                wait_lock = {
                  raw_lock = {
                    val = {
                      counter = 0x0
                    }
                  }
                },
                osq = {
                  tail = {
                    counter = 0x0
                  }
                },
                wait_list = {
                  next = 0xffff88202b660460
                },
                owner = 0x0
              },
              oo_sem = {
                {
                  count = {
                    counter = 0x0
                  },
                  __UNIQUE_ID_rh_kabi_hide3 = {
                    count = 0x0
                  },
                  {<No data fields>}
                },
                wait_lock = {
                  raw_lock = {
                    val = {
                      counter = 0x0
                    }
                  }
                },
                osq = {
                  tail = {
                    counter = 0x0
                  }
                },
                wait_list = {
                  next = 0xffff88202b660480
                },
                owner = 0x0
              },
              oo_dir = 0x0,
              oo_guard = {
                {
                  rlock = {
                    raw_lock = {
                      val = {
                        counter = 0x0
                      }
                    }
                  }
                }
              },
              oo_destroyed = 0x0,
              oo_lma_flags = 0x0,
              oo_compat_dot_created = 0x1,
              oo_compat_dotdot_created = 0x1,
              oo_owner = 0x0,
              oo_xattr_list = {
                next = 0xffff88202b6604b8,
                prev = 0xffff88202b6604b8
              }
            }
            .....
            crash> struct inode 0xffff88125f9d9a38
            struct inode {
              i_mode = 33184,
              i_opflags = 0,
              i_uid = {
                val = 282232
              },
              i_gid = {
                val = 3199
              },
              i_flags = 128,
              i_acl = 0xffffffffffffffff,
              i_default_acl = 0xffffffffffffffff,
              i_op = 0xffffffffc0f5afc0 <ldiskfs_file_inode_operations>,
              i_sb = 0xffff8802d8e7b000,
              i_mapping = 0xffff88125f9d9b88,
              i_security = 0x0,
              i_ino = 564154962,
              {
                i_nlink = 0,
                __i_nlink = 0
              },
              i_rdev = 0,
              i_size = 0,
              i_atime = {
                tv_sec = 1501119951,
                tv_nsec = 0
              },
            ...........
            crash> struct lu_object_header 0xffff881c939377b0 -x
            struct lu_object_header {
              loh_fid = {
                f_seq = 0x200002b64,
                f_oid = 0xce11,
                f_ver = 0x0
              },
              loh_flags = 0x0,
              loh_ref = {
                counter = 0x0
              },
              loh_attr = 0x8001,
              loh_hash = {
                next = 0x0,
                pprev = 0xffffc90016edd928
              },
              loh_lru = {
                next = 0xffff881c939377e0,
                prev = 0xffff881c939377e0
              },
              loh_layers = {
                next = 0xffff881c93937818,
                prev = 0xffff88202b660418
              },
              loh_reference = {<No data fields>}
            }
            ..........
            

            So could you please help collect the info below:

            on MDS:
            # debugfs [your mds device]
            debugfs: ncheck 564154962
            Inode	Pathname
            564154962   [pathname]
            debugfs: stat [pathname]
            paste output in here
            ......
            On client:
            # lfs fid2path <your mount point> [0x200002b64:0xce11:0x0]
            
            

            Thanks,
            YangSheng

            ys Yang Sheng added a comment -

            Hi, Stephane,

            Sorry for the delay. I have downloaded the packages and will start analyzing them later. Thanks for your patience.

            Thanks,
            YangSheng


            sthiell Stephane Thiell added a comment -

            Hey YangSheng,

            Have you been able to take a look at this? Let me know if you need anything else from me.

            Thanks!
            Stephane


            People

              Assignee: Yang Sheng
              Reporter: Stephane Thiell
              Votes: 0
              Watchers: 5