LU-13452: MDT is 100% full, cannot delete files

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Affects Version: Lustre 2.10.7
    • Environment: RHEL 7.2.1511, Lustre version 2.10.7-1
    • Severity: 3

    Description

      The MDS filesystem is 100% full and we cannot free any space on it: the server crashes (kernel panic) whenever we try to delete files.

      Apr 13 16:01:50 emds1 kernel: LDISKFS-fs (md0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
      Apr 13 16:01:50 emds1 kernel: LustreError: 11368:0:(osd_handler.c:7131:osd_mount()) echo-MDT0000-osd: failed to set lma on /dev/md0 root inode
      Apr 13 16:01:50 emds1 kernel: LustreError: 11368:0:(obd_config.c:558:class_setup()) setup echo-MDT0000-osd failed (-30)
      Apr 13 16:01:50 emds1 kernel: LustreError: 11368:0:(obd_mount.c:203:lustre_start_simple()) echo-MDT0000-osd setup error -30
      Apr 13 16:01:50 emds1 kernel: LustreError: 11368:0:(obd_mount_server.c:1848:server_fill_super()) Unable to start osd on /dev/md0: -30
      Apr 13 16:01:50 emds1 kernel: LustreError: 11368:0:(obd_mount.c:1582:lustre_fill_super()) Unable to mount  (-30)
      Apr 13 16:02:01 emds1 kernel: LDISKFS-fs (md0): mounted filesystem with ordered data mode. Opts: user_xattr,errors=remount-ro,no_mbcache,nodelalloc
      Apr 13 16:02:01 emds1 kernel: Lustre: MGS: Connection restored to 8f792be4-fada-1d75-0dbd-ec8601cdce7f (at 0@lo)
      Apr 13 16:02:01 emds1 kernel: LustreError: 11438:0:(genops.c:478:class_register_device()) echo-OST0000-osc-MDT0000: already exists, won't add
      Apr 13 16:02:01 emds1 kernel: LustreError: 11438:0:(obd_config.c:1682:class_config_llog_handler()) MGC10.23.22.104@tcp: cfg command failed: rc = -17
      Apr 13 16:02:01 emds1 kernel: Lustre:    cmd=cf001 0:echo-OST0000-osc-MDT0000  1:osp  2:echo-MDT0000-mdtlov_UUID  
      Apr 13 16:02:01 emds1 kernel: LustreError: 15c-8: MGC10.23.22.104@tcp: The configuration from log 'echo-MDT0000' failed (-17). This may be the result of communication errors between this node and the MGS, a bad configuration, or other errors. See the syslog for more information.
      Apr 13 16:02:01 emds1 kernel: LustreError: 11380:0:(obd_mount_server.c:1389:server_start_targets()) failed to start server echo-MDT0000: -17
      Apr 13 16:02:01 emds1 kernel: LustreError: 11380:0:(obd_mount_server.c:1882:server_fill_super()) Unable to start targets: -17
      Apr 13 16:02:01 emds1 kernel: Lustre: Failing over echo-MDT0000
      Apr 13 16:02:07 emds1 kernel: Lustre: 11380:0:(client.c:2116:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1586818921/real 1586818921]  req@ffff8d748ab38000 x1663898946110400/t0(0) o251->MGC10.23.22.104@tcp@0@lo:26/25 lens 224/224 e 0 to 1 dl 1586818927 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
      Apr 13 16:02:08 emds1 kernel: Lustre: server umount echo-MDT0000 complete
      Apr 13 16:02:08 emds1 kernel: LustreError: 11380:0:(obd_mount.c:1582:lustre_fill_super()) Unable to mount  (-17)
      

      Attachments

        1. df.png (21 kB)
        2. lustre-log.1586896595.11759.gz (1.74 MB)
        3. lustre-log.1586896595.11759.txt.gz (2.29 MB)
        4. screenlog.0.gz (64 kB)


          Activity


            adilger Andreas Dilger added a comment -

            I ran some tests on a local filesystem with master, filling up the MDT with DOM files and directories, and while there were some -28 (-ENOSPC) errors printed on the console, I didn't have any problems with deleting the files afterward.
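            (For reference, this kind of fill test can be approximated with Data-on-MDT layouts; the mount point, directory name, and file count below are hypothetical, and not necessarily how the test above was actually run:)

            # hypothetical sketch: fill the MDT with small Data-on-MDT files
            mkdir /mnt/lustre/domtest
            lfs setstripe -E 64K -L mdt -E eof -c 1 /mnt/lustre/domtest
            # each small file consumes MDT inodes and data blocks directly
            for i in $(seq 1 1000000); do
                dd if=/dev/zero of=/mnt/lustre/domtest/f$i bs=4k count=1 2>/dev/null
            done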
            pjones Peter Jones added a comment -

            Any update, cmcl?

            cmcl Campbell Mcleay (Inactive) added a comment -

            Will do. The priority of this ticket can be dropped if you like, since the filesystem is now up and running. I'll continue to report back on the e2fsck progress.

            Thank you for your help getting things working so quickly.

            adilger Andreas Dilger added a comment -

            If you run e2fsck on the MDT to repair any problems in the local MDT filesystem, then running LFSCK is not strictly required, as it mostly does garbage collection and handles cases where there is some inconsistency between the MDT and OSTs. Generally, LFSCK has been getting better with newer releases of Lustre, so if you want to run it, it is probably better to wait until after the upgrade, and unless there are visible problems with the filesystem you may want to wait until there is a good time to run it (e.g. a planned system outage).
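            (For reference, the sequence would look roughly like the following; the device and fsname are taken from earlier in this ticket, and e2fsck must only be run with the MDT unmounted:)

            # with the MDT unmounted, repair the local ldiskfs filesystem
            e2fsck -fy /dev/md0
            # after remounting, optionally start a full LFSCK from the MDS
            lctl lfsck_start -M echo-MDT0000 -t all
            # check LFSCK progress
            lctl lfsck_query -M echo-MDT0000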

            cmcl Campbell Mcleay (Inactive) added a comment -

            Our directory trees are ridiculously deep and overused for structuring data, so this doesn't surprise me. I'm still not sure what changed at the end of February, though, so we're going to have to watch this carefully.

            Checking this morning, backups are still running and seem to be somewhat stable, so I'll let a good backup complete and then try to take the filesystem offline to run a new e2fsck.

            After we've had a successful e2fsck, I'd like to upgrade to 2.12.4, but would it be sensible to run an LFSCK prior to doing that, or after, to get all the updates/bug fixes?

            adilger Andreas Dilger added a comment -

            As for the debugfs stat output, it definitely shows that the "link" xattr is large in at least some cases, and would consume an extra block for each such inode. Also, based on the previous e2fsck issue, it seems that there are a very large number of directories compared to regular files, and each directory will also consume at least one block. Based on LU-13197, the filesystem must have at least 180M directories for only 850M inodes, so only about 5 files per directory (although this doesn't take into account the number of hard links).
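            (As a rough back-of-the-envelope check of that overhead, assuming 4 KB blocks and the figures above:)

            850M inodes / 180M directories ≈ 4.7 entries per directory
            180M directories × 4 KB per directory block ≈ 0.7 TB of MDT space for directory blocks alone
            plus one extra 4 KB xattr block for every inode whose "link" xattr does not fit inside the inode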

            adilger Andreas Dilger added a comment -

            The dir_info e2fsck error appears to be the same as LU-13197, which has a patch to fix it. There is a RHEL7 build of e2fsprogs that is known to fix this specific issue:

            https://build.whamcloud.com/job/e2fsprogs-reviews/arch=x86_64,distro=el7/862/artifact/_topdir/RPMS/x86_64/

            This e2fsck bug was hit at another site that has a very large number of directories (over 180M), which is unusual in most cases, but in the case of your symlink trees there are lots of directories with relatively few files each. The updated e2fsck was confirmed to fix the problem on their filesystem.
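            (Applying that fix would look something like the following; the RPM file names are illustrative, as the exact names depend on the build:)

            # install the patched e2fsprogs from the build URL above
            rpm -Uvh e2fsprogs-*.el7.x86_64.rpm e2fsprogs-libs-*.el7.x86_64.rpm
            # re-run the full check on the unmounted MDT
            e2fsck -fy -C 0 /dev/md0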

            cmcl Campbell Mcleay (Inactive) added a comment -

            I am, however, now able to delete files without a panic. I'm going to try to clear space and see if we can get a backup through overnight, then check the logs tomorrow. It looks like another fsck is going to be required...

            cmcl Campbell Mcleay (Inactive) added a comment -

            A couple of debugfs examples:

            debugfs -c -R 'stat /ROOT/ARCHIVE/dirvish/filers/vannfs31/20200328/tree/user_data/CRB_RESTORES/IO-82556/CRB/ldev_2d_elements/SCAN/S_ldev_2d_elements_blood_element_squib_large_flat_a_002_s01/2156x1806/s_ldev_2d_elements_blood_element_squib_large_flat_a_002_s01.1117.exr' /dev/md0
            debugfs 1.45.2.wc1 (27-May-2019)
            /dev/md0: catastrophic mode - not reading inode or group bitmaps
            Inode: 1734097604   Type: regular    Mode:  0444   Flags: 0x0
            Generation: 294602363    Version: 0x00000025:b9345598
            User:  4014   Group:    20   Project:     0   Size: 0
            File ACL: 1084109575
            Links: 22   Blockcount: 8
            Fragment:  Address: 0    Number: 0    Size: 0
             ctime: 0x5e901af0:00000000 -- Fri Apr 10 00:06:24 2020
             atime: 0x5e6b3965:00000000 -- Fri Mar 13 00:42:29 2020
             mtime: 0x5a21f22f:00000000 -- Fri Dec  1 16:22:07 2017
            crtime: 0x5e6b3965:c27fd8ec -- Fri Mar 13 00:42:29 2020
            Size of extra inode fields: 32
            Extended attributes:
              trusted.lma (24) = 00 00 00 00 00 00 00 00 68 82 00 00 02 00 00 00 83 ac 01 00 00 00 00 00 
              lma: fid=[0x200008268:0x1ac83:0x0] compat=0 incompat=0
              trusted.lov (56)
              trusted.link (1916)
            BLOCKS:
            debugfs -c -R 'stat /ROOT/ARCHIVE/dirvish/filers/gungnir-vol/20200404/tree/vol/builds/usd/0.7.0/e2f93f71e4/lib/python/pxr/Pcp/__init__.py' /dev/md0
            debugfs 1.45.2.wc1 (27-May-2019)
            /dev/md0: catastrophic mode - not reading inode or group bitmaps
            Inode: 820727609   Type: regular    Mode:  0644   Flags: 0x0
            Generation: 3289393243    Version: 0x00000025:9a66a677
            User:   518   Group:    20   Project:     0   Size: 0
            File ACL: 0
            Links: 6   Blockcount: 0
            Fragment:  Address: 0    Number: 0    Size: 0
             ctime: 0x5e95e738:6ca5e29c -- Tue Apr 14 09:39:20 2020
             atime: 0x5d63cb50:00000000 -- Mon Aug 26 05:06:40 2019
             mtime: 0x579ffa2d:00000000 -- Mon Aug  1 18:41:01 2016
            crtime: 0x5d63cb50:820f5640 -- Mon Aug 26 05:06:40 2019
            Size of extra inode fields: 32
            Extended attributes:
              trusted.lma (24) = 00 00 00 00 00 00 00 00 f5 16 00 00 02 00 00 00 15 67 01 00 00 00 00 00 
              lma: fid=[0x2000016f5:0x16715:0x0] compat=0 incompat=0
              trusted.lov (56)
              trusted.link (285)
            BLOCKS:
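            (As an aside, the contents of a large "link" xattr like the ones above can be dumped for inspection with debugfs's ea_get command, available in e2fsprogs 1.44 and later; the path placeholder below is illustrative:)

            debugfs -c -R 'ea_get /ROOT/<path-to-file> trusted.link' /dev/md0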

            cmcl Campbell Mcleay (Inactive) added a comment -

            Unfortunately:

            emds1 /root # e2fsck -fvy -C 0 /dev/md0
            e2fsck 1.45.2.wc1 (27-May-2019)
            Pass 1: Checking inodes, blocks, and sizes
            Pass 2: Checking directory structure                                           
            Internal error: couldn't find dir_info for 2391487120.
            e2fsck: aborted

            cmcl Campbell Mcleay (Inactive) added a comment (edited) -

            Very annoyingly, we noticed that we didn't have inode stats turned on in collectd. We've corrected this, but only as of today, unfortunately.
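            (For reference, enabling inode reporting in collectd's df plugin looks roughly like this; the mount point is hypothetical:)

            # /etc/collectd.conf: enable inode statistics for the MDT mount
            LoadPlugin df
            <Plugin df>
              MountPoint "/mnt/mdt"   # hypothetical MDT mount point
              ReportInodes true
            </Plugin>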

            People

              Assignee: adilger Andreas Dilger
              Reporter: cmcl Campbell Mcleay (Inactive)
              Votes: 0
              Watchers: 5
