  Lustre / LU-14511

oak-MDT0000: deleted inode referenced and aborted journal


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.12.6
    • Labels: None
    • Environment: CentOS 7.9
    • Severity: 1
    • Rank: 9223372036854775807

    Description

      Hello, I think an old problem of ours has come back to haunt us, one I originally described in LU-11578. It was caused by running lfs migrate -m on a directory with an early 2.10.x release (we then stopped, and lfs migrate -m was later disabled). Since then, we have upgraded Oak to Lustre 2.12, and just yesterday we tried to upgrade from 2.12.5 to 2.12.6 by failing over the MDT, but we cannot mount it anymore.

      It is also very likely relevant that, before the last umount, I had deleted some files in the bad directory in question, /oak/stanford/groups/ruthm/sthiell/anaconda2.off, as part of a cleanup.
      The MDT worked fine before unmounting.

      So at the moment, we can't mount oak-MDT0000 anymore as we hit the following problem:

      [2021-03-10T15:31:29-08:00] [29905.173864] LDISKFS-fs (dm-0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
      [2021-03-10T15:31:29-08:00] [29905.322318] LustreError: 166-1: MGC10.0.2.51@o2ib5: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
      [2021-03-10T15:31:29-08:00] [29905.336575] Lustre: Evicted from MGS (at MGC10.0.2.51@o2ib5_0) after server handle changed from 0xd5a7458f859bc1ba to 0xd5a7458fad8ca5e1
      [2021-03-10T15:31:38-08:00] [29914.525782] LDISKFS-fs (dm-4): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,acl,no_mbcache,nodelalloc
      [2021-03-10T15:31:38-08:00] [29914.709510] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 459882777
      [2021-03-10T15:31:38-08:00] [29914.722944] Aborting journal on device dm-4-8.
      [2021-03-10T15:31:38-08:00] [29914.728143] LDISKFS-fs (dm-4): Remounting filesystem read-only
      [2021-03-10T15:31:38-08:00] [29914.734658] LustreError: 8949:0:(osd_scrub.c:1758:osd_ios_lookup_one_len()) Fail to find #459882777 in lost+found (11/0): rc = -5
      [2021-03-10T15:31:38-08:00] [29914.752101] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 627060535
      [2021-03-10T15:31:38-08:00] [29914.766588] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 627060536
      [2021-03-10T15:31:38-08:00] [29914.779958] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 627060537
      [2021-03-10T15:31:39-08:00] [29914.793611] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 627060539
      [2021-03-10T15:31:39-08:00] [29914.806965] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 627060540
      [2021-03-10T15:31:39-08:00] [29914.820521] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 627060542
      [2021-03-10T15:31:39-08:00] [29914.834339] LDISKFS-fs error (device dm-4): ldiskfs_lookup:1817: inode #11: comm mount.lustre: deleted inode referenced: 627060544
      [2021-03-10T15:31:39-08:00] [29914.978593] Lustre: 8962:0:(obd_config.c:1641:class_config_llog_handler()) Skip config outside markers, (inst: 0000000000000000, uuid: , flags: 0x0)
      [2021-03-10T15:31:39-08:00] [29914.993510] LustreError: 8962:0:(genops.c:556:class_register_device()) oak-OST0133-osc-MDT0001: already exists, won't add
      [2021-03-10T15:31:39-08:00] [29915.005744] LustreError: 8962:0:(obd_config.c:1835:class_config_llog_handler()) MGC10.0.2.51@o2ib5: cfg command failed: rc = -17
      [2021-03-10T15:31:39-08:00] [29915.018654] Lustre:    cmd=cf001 0:oak-OST0133-osc-MDT0001  1:osp  2:oak-MDT0001-mdtlov_UUID  
      [2021-03-10T15:31:39-08:00] [29915.018654] 
      [2021-03-10T15:31:39-08:00] [29915.029929] LustreError: 3800:0:(mgc_request.c:599:do_requeue()) failed processing log: -17
      [2021-03-10T15:31:39-08:00] [29915.036492] LustreError: 8949:0:(llog.c:1398:llog_backup()) MGC10.0.2.51@o2ib5: failed to open backup logfile oak-MDT0000T: rc = -30
      [2021-03-10T15:31:39-08:00] [29915.036495] LustreError: 8949:0:(mgc_request.c:1879:mgc_llog_local_copy()) MGC10.0.2.51@o2ib5: failed to copy remote log oak-MDT0000: rc = -30
      [2021-03-10T15:31:39-08:00] [29915.046574] Lustre: oak-MDT0000: Not available for connect from 10.51.3.5@o2ib3 (not set up)
      [2021-03-10T15:31:39-08:00] [29915.076247] LustreError: 8963:0:(tgt_lastrcvd.c:1133:tgt_client_del()) oak-MDT0000: failed to update server data, skip client 9925e6e6-5de6-4 zeroing, rc -30
      [2021-03-10T15:31:39-08:00] [29915.127660] LustreError: 8963:0:(obd_config.c:559:class_setup()) setup oak-MDT0000 failed (-30)
      [2021-03-10T15:31:39-08:00] [29915.137373] Lustre:    cmd=cf003 0:oak-MDT0000  1:oak-MDT0000_UUID  2:0  3:oak-MDT0000-mdtlov  4:f  
      [2021-03-10T15:31:39-08:00] [29915.137373] 
      [2021-03-10T15:31:39-08:00] [29915.149242] LustreError: 15c-8: MGC10.0.2.51@o2ib5: The configuration from log 'oak-MDT0000' failed (-30). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
      [2021-03-10T15:31:39-08:00] [29915.174853] LustreError: 8949:0:(obd_mount_server.c:1397:server_start_targets()) failed to start server oak-MDT0000: -30
      [2021-03-10T15:31:39-08:00] [29915.187010] LustreError: 8949:0:(obd_mount_server.c:1992:server_fill_super()) Unable to start targets: -30
      [2021-03-10T15:31:39-08:00] [29915.197797] LustreError: 8949:0:(obd_config.c:610:class_cleanup()) Device 316 not setup
      [2021-03-10T15:31:39-08:00] [29915.315800] Lustre: server umount oak-MDT0000 complete
      [2021-03-10T15:31:39-08:00] [29915.321537] LustreError: 8949:0:(obd_mount.c:1608:lustre_fill_super()) Unable to mount  (-30)
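
      In case it helps triage: all of the "deleted inode referenced" errors above are against inode #11, which is lost+found on this filesystem (the osd_scrub message confirms it is looking there). As a sketch of what we could do next while the device stays unmounted, read-only debugfs lookups like the following should show which lost+found entries still point at the reported inode numbers (we have not run these yet, so treat them as a suggestion only):

      # open the device read-only (-c, catastrophic mode) and list the entries of inode #11 (lost+found)
      debugfs -c -R 'ls -l <11>' /dev/mapper/md1-rbod1-ssd-mdt0
      # inspect one of the inode numbers reported in the console log above
      debugfs -c -R 'stat <459882777>' /dev/mapper/md1-rbod1-ssd-mdt0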
      

      We have been running e2fsck -m 16 -f /dev/mapper/md1-rbod1-ssd-mdt0 for about 7 hours and pass 1 has still not finished (it is still running, though). This MDT has 857,751,831 used inodes.

      [root@oak-md1-s2 ~]# e2fsck -m 16 -y  /dev/mapper/md1-rbod1-ssd-mdt0
      e2fsck 1.45.6.wc5 (09-Feb-2021)
      oak-MDT0000 contains a file system with errors, check forced.
      Pass 1: Checking inodes, blocks, and sizes
      [Thread 0] Scan group range [0, 3552)
      [Thread 1] Scan group range [3552, 7104)
      [Thread 2] Scan group range [7104, 10656)
      [Thread 3] Scan group range [10656, 14208)
      [Thread 4] Scan group range [14208, 17760)
      [Thread 5] Scan group range [17760, 21312)
      [Thread 6] Scan group range [21312, 24864)
      [Thread 7] Scan group range [24864, 28416)
      [Thread 8] Scan group range [28416, 31968)
      [Thread 9] Scan group range [31968, 35520)
      [Thread 10] Scan group range [35520, 39072)
      [Thread 11] Scan group range [39072, 42624)
      [Thread 12] Scan group range [42624, 46176)
      [Thread 13] Scan group range [46176, 49728)
      [Thread 14] Scan group range [49728, 53280)
      [Thread 15] Scan group range [53280, 57056)
      [Thread 15] Scanned group range [53280, 57056), inodes 9005176
      [Thread 3] Inode 459888758, i_size is 13547843328140, should be 0.  [Thread 3] Fix? yes
      
      [Thread 14] Scanned group range [49728, 53280), inodes 59487037
      [Thread 12] Scanned group range [42624, 46176), inodes 53210338
      [Thread 2] Scanned group range [7104, 10656), inodes 61985386
      [Thread 1] Scanned group range [3552, 7104), inodes 61737338
      [Thread 11] Scanned group range [39072, 42624), inodes 62681960
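
      For reference, the used-inode figure above is simply the superblock inode count minus free inodes from the dumpe2fs output below: 1,869,611,008 - 1,011,859,177 = 857,751,831.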
      

      These are the ldiskfs filesystem details for oak-MDT0000:

      [root@oak-md1-s2 ~]# dumpe2fs -h /dev/mapper/md1-rbod1-ssd-mdt0 
      dumpe2fs 1.45.6.wc5 (09-Feb-2021)
      Filesystem volume name:   oak-MDT0000
      Last mounted on:          /
      Filesystem UUID:          0ed1cfdd-8e25-4b6b-9cb9-7be1e89d70ad
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype mmp flex_bg dirdata sparse_super large_file huge_file uninit_bg dir_nlink quota project
      Filesystem flags:         signed_directory_hash 
      Default mount options:    user_xattr acl
      Filesystem state:         clean with errors
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              1869611008
      Block count:              934803456
      Reserved block count:     46740172
      Free blocks:              354807380
      Free inodes:              1011859177
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      787
      Blocks per group:         16384
      Fragments per group:      16384
      Inodes per group:         32768
      Inode blocks per group:   4096
      Flex block group size:    16
      Filesystem created:       Mon Feb 13 12:36:07 2017
      Last mount time:          Wed Mar 10 15:33:05 2021
      Last write time:          Wed Mar 10 17:13:04 2021
      Mount count:              17
      Maximum mount count:      -1
      Last checked:             Tue Sep 10 06:37:13 2019
      Check interval:           0 (<none>)
      Lifetime writes:          249 TB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:	          512
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      be3bd996-8da4-4d22-80e4-e7a4c8ce22a0
      Journal backup:           inode blocks
      FS Error count:           16
      First error time:         Wed Mar 10 15:31:38 2021
      First error function:     ldiskfs_lookup
      First error line #:       1817
      First error inode #:      11
      First error block #:      0
      Last error time:          Wed Mar 10 15:32:03 2021
      Last error function:      ldiskfs_lookup
      Last error line #:        1817
      Last error inode #:       11
      Last error block #:       0
      MMP block number:         13560
      MMP update interval:      5
      User quota inode:         3
      Group quota inode:        4
      Project quota inode:      325
      Journal features:         journal_incompat_revoke
      Journal size:             4096M
      Journal length:           1048576
      Journal sequence:         0x4343e81b
      Journal start:            0
      MMP_block:
          mmp_magic: 0x4d4d50
          mmp_check_interval: 5
          mmp_sequence: 0xe24d4d50
          mmp_update_date: Thu Mar 11 02:02:50 2021
          mmp_update_time: 1615456970
          mmp_node_name: oak-md1-s2
          mmp_device_name: /dev/mapper/md1-rbod1-ssd-mdt0
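
      One more data point, unless I am misreading it: the mmp_sequence value 0xe24d4d50 is EXT4_MMP_SEQ_FSCK, i.e. the MMP block is currently marked as held by e2fsck, which is consistent with the fsck still running on oak-md1-s2.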
      

            People

              Assignee: Alex Zhuravlev (bzzz)
              Reporter: Stephane Thiell (sthiell)
              Votes: 0
              Watchers: 4
