
MDS not creating files on OSTs properly

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version: Lustre 2.7.0
    • Affects Version: Lustre 2.5.2
    • Environment: CentOS 6.5, kernel 2.6.32-431.17.1.el6_lustre.x86_64
    • Severity: 3
    • 16216

    Description

      One of our Stampede filesystems running Lustre 2.5.2 has an OST offline due to a different problem described in another ticket. While the OST has been offline, the MDS crashed with an LBUG and was restarted last Friday. Since the restart, the MDS no longer automatically creates files on any OST beyond the offline one. In our case OST0010 is offline, so the MDS will only create files on the first 16 OSTs unless we manually specify the stripe offset with lfs setstripe. This is overloading the servers backing those OSTs while the others sit idle. If we deactivate the first 16 OSTs on the MDS, then all files are created with the first stripe on the lowest-numbered active OST.

      Can you suggest any way to force the MDS to use all the other OSTs, perhaps through some lctl set_param options? Getting the offline OST back online is not currently an option: it cannot be mounted due to corruption and an ongoing e2fsck. Manually setting the stripe is also not an option; we need it to work automatically as it should. Could we set some qos options to have it balance file creation across the OSTs?

      Attachments

        1. lctl_state.out
          44 kB
        2. lctl_target_obd.out
          11 kB
        3. LU-5778_file_create_getstripe.out.gz
          12 kB
        4. LU-5778.debug_filtered.bz2
          30 kB
        5. mds5_prealloc.out
          128 kB

        Issue Links

          Activity

            [LU-5778] MDS not creating files on OSTs properly

            We faced the same issue at CEA. We analyzed the problem and tracked it down to lod_qos_statfs_update(). In this function, each OST is checked in turn. If an OST has active=0, lod_statfs_and_check() returns ENOTCONN and the for-loop breaks, so the following OSTs are never checked: their statfs data is not updated (it stays zero) and they are not used when allocating file objects.

                    for (i = 0; i < osts->op_count; i++) {
                            idx = osts->op_array[i];
                            avail = OST_TGT(lod,idx)->ltd_statfs.os_bavail;
                            rc = lod_statfs_and_check(env, lod, idx,
                                                      &OST_TGT(lod,idx)->ltd_statfs);
                            if (rc)
                                    break;
                            if (OST_TGT(lod,idx)->ltd_statfs.os_bavail != avail)
                                    /* recalculate weights */
                                    lod->lod_qos.lq_dirty = 1;
                    }
            

            A simple workaround for this bug is to disable QOS allocation by setting qos_threshold_rr to 100, which falls back to simple round-robin allocation.

            A fix appears to be as simple as replacing the break with continue, as sketched below.
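
            For illustration, here is the loop quoted above with that single change applied (break replaced with continue), so one inactive OST no longer prevents the statfs data of the remaining OSTs from being refreshed. This is a sketch of the suggested change, not necessarily the patch that eventually landed:

                    for (i = 0; i < osts->op_count; i++) {
                            idx = osts->op_array[i];
                            avail = OST_TGT(lod,idx)->ltd_statfs.os_bavail;
                            rc = lod_statfs_and_check(env, lod, idx,
                                                      &OST_TGT(lod,idx)->ltd_statfs);
                            if (rc)
                                    /* skip this OST but keep checking the rest */
                                    continue;
                            if (OST_TGT(lod,idx)->ltd_statfs.os_bavail != avail)
                                    /* recalculate weights */
                                    lod->lod_qos.lq_dirty = 1;
                    }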

            adegremont Aurelien Degremont (Inactive) added a comment

            Oleg,
            In regard to the file creation, we did have some directories set with a manual stripe offset rather than -1, so stripe creation was being forced onto specific OSTs. I ran lfs getstripe on the 500 files created and they all landed on obdidx values less than 16; I've attached the output from that file-create run. As you noted, a large number of OSTs were inactive because we were trying to distribute files across the other OSTs, but in this case all files still had their stripes on the first 16 OSTs.

            This ticket is no longer critical for us, since we were able to recreate OST0010 over the weekend; once it was mounted and had checked in with the MDS, file creation resumed across all the active OSTs as it should. This specific issue must have been due to the OST being offline and unable to check in with the MDS when the MDS had to be restarted after the LBUG crash. We certainly do not get into this state often, as usually all OSTs are available and not in a state where they cannot be mounted.

            minyard Tommy Minyard added a comment
            green Oleg Drokin added a comment -

            QOS_DEBUG is compiled out by default, which is really unfortunate:

            #if 0
            #define QOS_DEBUG(fmt, ...)     CDEBUG(D_OTHER, fmt, ## __VA_ARGS__)
            #define QOS_CONSOLE(fmt, ...)   LCONSOLE(D_OTHER, fmt, ## __VA_ARGS__)
            #else
            #define QOS_DEBUG(fmt, ...)
            #define QOS_CONSOLE(fmt, ...)
            #endif
            

            Whoever thought that was a great idea was not right.

            Niu, can you please open a separate ticket for this and perhaps add a patch?
            Also, D_OTHER might not be a great mask for it either, so please see if something else would be better here.
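
            For reference, a minimal way to get those messages back while debugging would be to flip the preprocessor guard shown above. This is just a local debugging sketch; whether D_OTHER is the right mask is exactly the open question raised here:

            #if 1   /* was: #if 0 -- enable QOS_DEBUG/QOS_CONSOLE locally */
            #define QOS_DEBUG(fmt, ...)     CDEBUG(D_OTHER, fmt, ## __VA_ARGS__)
            #define QOS_CONSOLE(fmt, ...)   LCONSOLE(D_OTHER, fmt, ## __VA_ARGS__)
            #else
            #define QOS_DEBUG(fmt, ...)
            #define QOS_CONSOLE(fmt, ...)
            #endif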


            I'm wondering why the QOS_DEBUG() messages in lod_alloc_qos() weren't printed?

            niu Niu Yawei (Inactive) added a comment
            green Oleg Drokin added a comment - edited

            Looking at the log:
            It contains 157 create requests (I would have thought it had reached the internal log size limit, but with the default at 1M per CPU and your total log size at 1.1M, that is certainly not the case).

            Other observations: the log says there are 348 OSTs, of which 248 are not really connected (the check status returns -107).
            The first 47 OSTs in the list are connected, with only one in the middle of those 47 being down, namely OST 16.
            OSTs 47 to 292 are down, then OSTs 293, 341 and 342 are also down.

            Of the 157 creates, only 16 went into the alloc_qos path and had any chance of random allocation. The other 141 took the alloc_specific path, which is chosen when the starting offset is below the number of OSTs in the system (in other words, it was set to something other than -1).
            Did you have other processes on the system creating files with forced striping while this was run?
            Sadly, our level of debug does not allow us to see which OSTs were chosen in the end in the alloc_qos case, but if you have retained those 500 files, it would be interesting to see lfs getstripe output on them.


            Attached is the debug log with the data filtered for the lov and ost subsystems. There is no osp subsystem on our current /scratch MDS. I cleared the logs prior to running the test, set a mark, and then 500 files were created on the filesystem; hopefully you can find them in the attached log.

            minyard Tommy Minyard added a comment
            green Oleg Drokin added a comment -

            Hm, thanks for this extra bit of info.
            lwp really should only be used for quota and some FLD traffic that should not impact allocations, and certainly not only on some OSTs. The lwp code does live in the OSP codebase, so it should be caught by the osp mask, except I just checked and it somehow registers itself under the ost mask instead, which is weird.
            You might wish to revise that debug subsystem line to echo "osp ost lod" > /..../debug_subsystem

            Old servers did not really have an LWP config record, but I think we tried to connect anyway (there was even a compatibility issue about that in the past that we have since fixed, but we'll need to go back and check how this was implemented).

            I think catching that bit of debug would be useful, just in case.


            Thanks Oleg, we're going to try to quiet down the system a bit (the system is a little drained while waiting to schedule a 32K-core job) and collect the full MDS trace with the OSTs active again to see if that can provide some more debug information.

            We compared our test filesystem with the current /scratch filesystem and did find one significant difference: the test filesystem has an lwp service running on it that /scratch does not (there are additional lwp entries in lctl dl on the MDS and OSSes). The test filesystem went through the same 2.1.5 -> 2.5.2 upgrade process; however, we also ran tunefs.lustre --writeconf on the test filesystem, and we did not run tunefs.lustre --writeconf on /scratch in case we encountered a major issue and needed to roll back to the previous version. That appears to be the only difference in the upgrade process between our test filesystem and /scratch. I didn't find much information about what lwp does from a quick search and some googling, so I'm not sure whether having this lwp service running would affect file layout and creation on the OSTs.

            In answer to your question, we use a default stripe count of 2, a stripe size of 1MB, and an offset of -1. We have not enabled OST pools.

            minyard Tommy Minyard added a comment
            green Oleg Drokin added a comment -

            Just a quick note.
            Even if there's a lot of noise from other users, you can grab some traces, and since users are likely to be allocating files during that time,
            we can still gather some useful info.
            Sadly, it looks like the QOS_DEBUG stuff is compiled out by default, but perhaps you can still limit the scope of the traces to just lod with
            echo "lod osp" >/proc/sys/lnet/debug_subsystem
            echo -1 >/proc/sys/lnet/debug
            lctl dk >/dev/null #to clear the log

            on your MDS (please cat those files first, remember the values, and restore the contents after gathering the info).

            Let it run for some brief time in normal offset -1 mode, and definitely do a couple of creations manually too that expose the problem, then gather the log with lctl dk >/tmp/lustre.log (just run it long enough for your creates to run their course).

            Also, what is your default striping like with regard to default stripe count? How about OST pools, do you use those?


            Andreas,
            I also tried to reproduce the problem on some test hardware by creating a filesystem with the exact same 2.5.2 version of Lustre installed on our /scratch filesystem and was unable to reproduce it either. There must be something else going on with our /scratch filesystem, either the large scale with 348 OSTs or the upgrade from the 2.1.5 version we were running, so I'm going to compare the setup of the two filesystems and see if I can find any differences.

            In regard to the debug output, we could not wait to put the system back into production, so we developed a manual process to distribute files by setting the stripe offset to a random OST for active user directories. We are cycling through the first two active OSTs so that files created in directories where the stripe offset is still set to -1 get distributed as well. It is not efficient or as good for performance, but at least it lets us run jobs for users and distribute files across all the OSTs. I can certainly generate the debug output, but I'm afraid it would be polluted with activity from all the users. On top of that, we had to deactivate the first 16 OSTs since they reached >93% capacity. We have a maintenance window scheduled for next Tuesday and can collect data on a quiet system then. I've attached the output from the prealloc information in case it might be useful. I noticed two OSTs had -5 as the prealloc_status; those OSTs are in the list of inactive OSTs, which is in the attached file as well. Note that in looking through the prealloc output, I found these three sets of messages corresponding to those OSTs:

            Oct 22 00:43:29 mds5 kernel: Lustre: setting import scratch-OST001d_UUID INACTIVE by administrator request
            Oct 22 00:43:29 mds5 kernel: Lustre: Skipped 8 previous similar messages
            Oct 22 00:43:29 mds5 kernel: LustreError: 22062:0:(osp_precreate.c:464:osp_precreate_send()) scratch-OST001d-osc-MDT0000: can't precreate: rc = -5
            Oct 22 00:43:29 mds5 kernel: LustreError: 22062:0:(osp_precreate.c:968:osp_precreate_thread()) scratch-OST001d-osc-MDT0000: cannot precreate objects: rc = -5

            Oct 22 01:04:06 mds5 kernel: Lustre: setting import scratch-OST0021_UUID INACTIVE by administrator request
            Oct 22 01:04:06 mds5 kernel: LustreError: 22070:0:(osp_precreate.c:464:osp_precreate_send()) scratch-OST0021-osc-MDT0000: can't precreate: rc = -5
            Oct 22 01:04:06 mds5 kernel: LustreError: 22070:0:(osp_precreate.c:968:osp_precreate_thread()) scratch-OST0021-osc-MDT0000: cannot precreate objects: rc = -5

            Oct 22 15:07:21 mds5 kernel: Lustre: setting import scratch-OST0024_UUID INACTIVE by administrator request
            Oct 22 15:07:21 mds5 kernel: Lustre: Skipped 5 previous similar messages
            Oct 22 15:07:21 mds5 kernel: LustreError: 22084:0:(osp_precreate.c:464:osp_precreate_send()) scratch-OST0026-osc-MDT0000: can't precreate: rc = -5
            Oct 22 15:07:21 mds5 kernel: LustreError: 22084:0:(osp_precreate.c:968:osp_precreate_thread()) scratch-OST0026-osc-MDT0000: cannot precreate objects: rc = -5

            minyard Tommy Minyard added a comment

            I'm unable to reproduce the problem with 2.5.2 using "-c 2 -i -1". It does imbalance the object allocations somewhat - with OST0002 disabled on the MDS, OST0000 and OST0003 seem to get chosen as the starting OST index much less often than the others (out of 1000 files, 2000 objects) - but it still chooses OSTs beyond the deactivated OST, and the total number of objects allocated on each OST isn't as imbalanced:

            OST_idx     #start   #total
               0           36       251
               1          283       319
               3           39       322
               4          175       214
               5          147       322
               6          105       252
               7          215       320
            

            Tommy, can you please enable full debugging via lctl set_param debug=-1 debug_mb=512 on the MDS, then create maybe 50 files (enough that some of them should land beyond OST0010), then dump the debug log with lctl dk /tmp/LU-5779.debug; bzip2 -9 /tmp/LU-5779.debug and attach that log to the ticket here. Getting the lctl get_param osp.*.prealloc* info would also be useful (sorry, the Jira markup turned my * into bold in my first comment).

            adilger Andreas Dilger added a comment

            People

              Assignee: niu Niu Yawei (Inactive)
              Reporter: minyard Tommy Minyard
              Votes: 0
              Watchers: 15
