[LU-5778] MDS not creating files on OSTs properly Created: 21/Oct/14  Updated: 25/Dec/17  Resolved: 04/Dec/14

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.5.2
Fix Version/s: Lustre 2.7.0

Type: Bug Priority: Critical
Reporter: Tommy Minyard Assignee: Niu Yawei (Inactive)
Resolution: Fixed Votes: 0
Labels: patch
Environment:

CentOS 6.5, kernel 2.6.32-431.17.1.el6_lustre.x86_64


Attachments: File LU-5778.debug_filtered.bz2     File LU-5778_file_create_getstripe.out.gz     File lctl_state.out     File lctl_target_obd.out     File mds5_prealloc.out    
Issue Links:
Duplicate
is duplicated by LU-10414 An unbalancing Lustre fs write the fi... Resolved
Related
is related to LU-5780 Corrupted OST and very long running e... Resolved
is related to LU-5807 enable QOS_DEBUG Resolved
Severity: 3
Rank (Obsolete): 16216

 Description   

One of our Stampede filesystems running Lustre 2.5.2 has an OST offline due to a different problem described in another ticket. While that OST has been offline, the MDS server crashed with an LBUG and was restarted last Friday. After the restart, the MDS no longer automatically creates files on any OST after the offline one. In our case, OST0010 is offline, so the MDS will only create files on the first 16 OSTs unless we manually specify the stripe offset in lfs setstripe. This is overloading the servers holding these OSTs while the others are doing nothing. If we deactivate the first 16 OSTs on the MDS, then all files are created with the first stripe on the lowest-numbered active OST.

Can you suggest any way to force the MDS to use all the other OSTs through lctl set_param options? Getting the offline OST back online is not currently an option: due to corruption and an ongoing e2fsck, it can't be mounted. Manually setting the stripe is also not an option; we need it to work automatically like it should. Could we set some QOS options to try to have it balance the OST file creation?



 Comments   
Comment by Andreas Dilger [ 21/Oct/14 ]

Does the MDS think all the OSTs are online? What do lctl get_param lod.*.target_obd and lctl get_param osp.*.state return on the MDS? Do the OSCs have enough precreated objects (lctl get_param osp.*.prealloc* on the MDS)?

If the OSTs are space-imbalanced and going into QOS mode, "lfs df" would report significantly different space usage, but this may be a symptom and not a cause. QOS mode shouldn't really prevent the OSTs beyond the offline one from being used, and in fact should start avoiding the full OSTs. You might try setting lctl set_param lod.*.qos_threshold_rr=95 on the MDS to set the threshold between "least full" and "most full" OSTs to 95% of the filesystem size and essentially disable QOS mode.

Have you restarted the MDS since the initial failure?

Comment by Tommy Minyard [ 21/Oct/14 ]

Andreas,

The MDS does think all the OSTs are online and active, and the state of all the OSTs is FULL except for the one that is offline, OST0010 (the offline OST described in LU-5780). Attached is the output from the two commands. I tried the prealloc command but that exact option is not listed; prealloc_status and prealloc_reserved are reporting 0 for all active OSTs.

We are already starting to see a substantial difference in usage: the first 16 OSTs are approaching 90% of space used while the remaining ones are at 75%.

The MDS was restarted a few minutes ago and it still refuses to create any files on any OST after OST0010 when the stripe offset is -1. I'm pretty sure the problem is that the MDS won't create new files above the offline OST's index because that OST is offline and inactive and never checks in with the MDS. My suspicion is that the obdindex numbers it chooses when the stripe offset is -1 get incremented as OSTs check in, but since OST0010 never reconnects to the MDS, it won't go above that one.

Comment by Tommy Minyard [ 21/Oct/14 ]

These files contain the output from the commands Andreas asked us to run.

Comment by Tommy Minyard [ 21/Oct/14 ]

We tried adjusting qos_threshold_rr to 95 and 100%, but it had no effect on the distribution of files on OSTs; they always get created on the first 16 OSTs. We also tried reducing qos_threshold_rr to 5% and still saw no change in OST file creation distribution. Something in the MDS file creation algorithm is preventing it from using any OST with an index greater than the offline OST unless the stripe offset is set manually.

THIS IS NOW URGENT and BLOCKING: we cannot put the system back into production in this state, and we need some way to ensure that files get distributed across all the OSTs in the filesystem. The first 16 are above 90% full and still growing while most of the rest are below 72% and not changing.

Comment by Niu Yawei (Inactive) [ 22/Oct/14 ]

Could you try to capture a debug log on the MDT with D_TRACE enabled while creating files? That might help me see why the MDT only picks the first 16 OSTs. (It would be best to add a marker before and after the test with "lctl mark".)

BTW: what commands did you use to take the OST offline? I tried to reproduce it locally (deactivate an OST and see if the MDT would create objects on all other active OSTs), but failed.

Comment by Oleg Drokin [ 22/Oct/14 ]

Hm, a stripe offset of -1 means that QOS should not really be in play here - it will still try to allocate everywhere due to the requested stripe count.

So if you disable, say, the OST with index 3, does your setstripe -1 create only on the first three OSTs then? (This is incorrect behavior that we cannot observe ourselves, btw.)

Do you have any patches applied on top of lustre 2.5.2?

Comment by Tommy Minyard [ 22/Oct/14 ]

Niu,
The sequence of events to reproduce the problem is as follows. Deactivate the OST on the MDS, then unmount the deactivated OST on the OSS (ours can't be mounted due to corruption), then restart the MDS. We are currently in that state: the OST is unmounted and offline, so it can't check in with the MDS when the MDS is restarted. It was only after the MDS restart that we started to see file creates only on the first 16 OSTs.

Comment by Tommy Minyard [ 22/Oct/14 ]

Oleg,
We have one cherry-picked patch applied to resolve crashes we were experiencing shortly after our upgrade to 2.5.2. We used patch 0020cc44.diff from LU-5040 to resolve those crashes.

In the current situation, the MDS will create files on any of the first 16 OSTs as expected with a stripe_offset of -1 when those OSTs are active on the MDS. If we deactivate those OSTs on the MDS, then the first active OST index is used for all file creates; if we then deactivate that one, it moves to the next index.

Comment by Andreas Dilger [ 22/Oct/14 ]

I don't want to restate the obvious, but just to be sure that we don't have a simple workaround here, have you actually deactivated the failed OST on the MDS? Something like the following:

mds# lctl --device  %scratch-OST0010-osc-MDT0000 deactivate

This should produce a message in the MDS console logs like:

Lustre: setting import scratch-OST0010_UUID INACTIVE by administrator request

I've done local testing with master and am unable to reproduce this (lfs setstripe -c -1 creates objects on all available stripes when one in the middle is deactivated). I'm going to build and run with 2.5.2 + patch to see if that shows a similar problem (maybe it has already been fixed in later releases).

Comment by Tommy Minyard [ 22/Oct/14 ]

Andreas,
The device is definitely deactivated on the MDS; since the OST is offline and the MDS has been restarted, it could never activate anyway, but I have deactivated it again for good measure. The MDS is still choosing to create files on only the first 16 OSTs.

Also, it is not the stripe_count that is the problem; as I have been saying, it is the stripe_offset set to -1, where the MDS should choose OSTs for a new file in a semi-random fashion. Can you try your test again with -c set to 2 (that is our default stripe count), -s set to 1MB and -i set to -1?

Comment by Oleg Drokin [ 22/Oct/14 ]

It should be noted that the deactivated state, as indicated by Andreas, is different from disconnected - the state the system would enter if it could not connect to an OST on its own (and it'll keep retrying all the time too).

It will be interesting to see what Andreas's testing of 2.5.2 shows, I guess.

Comment by Andreas Dilger [ 22/Oct/14 ]

I'm unable to reproduce the problem with 2.5.2 using "-c 2 -i -1". It does imbalance the object allocations somewhat - with OST0002 disabled on the MDS, OST0000 and OST0003 seem to get chosen as the starting OST index much less often than others (of 1000 files, 2000 objects), but it still chooses OSTs beyond the deactivated OST, and the total number of objects allocated on each OST isn't as imbalanced:

OST_idx     #start   #total
   0           36       251
   1          283       319
   3           39       322
   4          175       214
   5          147       322
   6          105       252
   7          215       320

Tommy, can you please enable full debugging via lctl set_param debug=-1 debug_mb=512 on the MDS, then create maybe 50 files (enough that some of them should be beyond OST0010), then dump the debug log with lctl dk /tmp/LU-5779.debug; bzip2 -9 /tmp/LU-5779.debug and attach that log to the ticket here? Getting the lctl get_param osp.*.prealloc* info would also be useful (sorry, the Jira markup turned my * into bold in my first comment).

Comment by Tommy Minyard [ 23/Oct/14 ]

Andreas,
I also tried to reproduce the problem on some test hardware by creating a filesystem with the exact same 2.5.2 version of Lustre installed on our /scratch filesystem, and was unable to reproduce it as well. There must be something else going on with our /scratch filesystem, either due to its large scale (348 OSTs) or the upgrade from the 2.1.5 version we were running, so I'm going to compare the setup of the two filesystems and see if I can find any differences.

In regard to the debug output, we could not wait to put the system back into production, so we developed a manual process to distribute files by setting the stripe offset to a random OST for active user directories. We are cycling through the first two active OSTs so that files created in directories where the stripe_offset is still set to -1 get distributed as well. It is neither efficient nor as good for performance, but at least it lets us run jobs for users and distribute files across all the OSTs. I can certainly generate the debug output, but I'm afraid it would be polluted with activity from all the users. On top of that, we had to deactivate the first 16 OSTs since they reached > 93% capacity. We have a maintenance scheduled for next Tuesday and can collect data on a quiet system then. I've included the output from the prealloc information in case it might be useful. I noticed two OSTs had -5 as the prealloc_status; those OSTs are in the list of inactive OSTs, which is in the attached file as well. While looking through the prealloc output, I found these three sets of messages corresponding to those OSTs:

Oct 22 00:43:29 mds5 kernel: Lustre: setting import scratch-OST001d_UUID INACTIVE by administrator request
Oct 22 00:43:29 mds5 kernel: Lustre: Skipped 8 previous similar messages
Oct 22 00:43:29 mds5 kernel: LustreError: 22062:0:(osp_precreate.c:464:osp_precreate_send()) scratch-OST001d-osc-MDT0000: can't precreate: rc = -5
Oct 22 00:43:29 mds5 kernel: LustreError: 22062:0:(osp_precreate.c:968:osp_precreate_thread()) scratch-OST001d-osc-MDT0000: cannot precreate objects: rc = -5

Oct 22 01:04:06 mds5 kernel: Lustre: setting import scratch-OST0021_UUID INACTIVE by administrator request
Oct 22 01:04:06 mds5 kernel: LustreError: 22070:0:(osp_precreate.c:464:osp_precreate_send()) scratch-OST0021-osc-MDT0000: can't precreate: rc = -5
Oct 22 01:04:06 mds5 kernel: LustreError: 22070:0:(osp_precreate.c:968:osp_precreate_thread()) scratch-OST0021-osc-MDT0000: cannot precreate objects: rc = -5

Oct 22 15:07:21 mds5 kernel: Lustre: setting import scratch-OST0024_UUID INACTIVE by administrator request
Oct 22 15:07:21 mds5 kernel: Lustre: Skipped 5 previous similar messages
Oct 22 15:07:21 mds5 kernel: LustreError: 22084:0:(osp_precreate.c:464:osp_precreate_send()) scratch-OST0026-osc-MDT0000: can't precreate: rc = -5
Oct 22 15:07:21 mds5 kernel: LustreError: 22084:0:(osp_precreate.c:968:osp_precreate_thread()) scratch-OST0026-osc-MDT0000: cannot precreate objects: rc = -5

Comment by Oleg Drokin [ 23/Oct/14 ]

Just a quick note.
Even if there's a lot of noise from other users, you can grab some traces, and since users are likely to allocate files in that time we can still gather some useful info.
Sadly, it looks like the QOS_DEBUG stuff is compiled out by default, but perhaps you can still limit the scope of the traces to just lod with:
echo "lod osp" >/proc/sys/lnet/debug_subsystem
echo -1 >/proc/sys/lnet/debug
lctl dk >/dev/null #to clear the log

on your MDS (please cat those files first, note the values, and restore the contents after gathering the info).

Let it run for a brief time in normal offset -1 mode, and definitely do a couple of creations manually too that expose the problem, then gather the log with lctl dk >/tmp/lustre.log. (Just run it long enough for your creates to run their course.)

Also, what's your default striping like with respect to the default stripe count? How about OST pools - do you use those?

Comment by Tommy Minyard [ 24/Oct/14 ]

Thanks Oleg, we're going to try to quiet down the system a bit (the system is a little drained waiting to schedule a 32K-core job) and collect the MDS trace with all the OSTs active again to see if that can provide some more debug information.

We compared our test filesystem with the current /scratch filesystem and did find one significant difference: the test filesystem has an lwp service running on it that /scratch does not (there are additional lwp entries in lctl dl on the MDS and OSSes). The test filesystem went through the same 2.1.5 -> 2.5.2 upgrade process; however, we ran tunefs.lustre --writeconf on the test filesystem but not on /scratch, in case we encountered a major issue and needed to roll back to the previous version. That appears to be the only difference in the upgrade process between our test filesystem and /scratch. I didn't find much information about what lwp does from a quick search and some googling. Not sure if having this lwp service running would impact file layout and creation on the OSTs.

In answer to your question, we use a default stripe count of 2, a size of 1MB and offset -1. We have not enabled ost pools.

Comment by Oleg Drokin [ 24/Oct/14 ]

Hm, thanks for this extra bit of info.
LWP really should only be used for quota and some FLD stuff that should not impact allocations, certainly not on only some OSTs. The lwp code does live in the OSP codebase, so it should be caught with the osp mask, except I just checked and it has somehow registered itself on the ost mask instead, which is weird.
You might wish to revise that debug_subsystem echo line to echo "osp ost lod" > /..../debug_subsystem

Old servers did not really have an LWP config record, but I think we tried to connect anyway (there was even a compat issue about that in the past that we have since fixed, but we'll need to go back and check how this was implemented).

I think catching that bit of debug should be useful just in case.

Comment by Tommy Minyard [ 24/Oct/14 ]

Attached is the debug log with the data filtered for the lov and ost subsystems. There is no osp subsystem on our current /scratch MDS. I cleared the logs prior to running the test, set a mark, and then 500 files were created on the filesystem; hopefully you can find them in the attached log.

Comment by Oleg Drokin [ 26/Oct/14 ]

Looking at the log:
It contains 157 create requests (I would have thought it had reached the internal log size limit, but with the default at 1M per CPU and your total log size at 1.1M, that is certainly not the case).

Other observations: the log says there are 348 OSTs, of which 248 are not really connected (the check status returns -107).
The first 47 OSTs in the list are connected (with only one in the middle of these 47 being down, namely OST 16).
OSTs 47 to 292 are down, then OSTs 293, 341 and 342 are also down.

Of the 157 creates, only 16 went into the alloc_qos path and had any chance of random allocation. The other 141 took the alloc_specific path, which is chosen when the starting offset is below the number of OSTs in the system (in other words, it was not set to -1).
Did you have other processes in the system creating files with forced striping while this was run?
Sadly our level of debug does not allow us to see which OSTs were chosen in the end in the alloc_qos case, but if you have retained those 500 files, it might be interesting to see the lfs getstripe output for them.
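For context, here is a rough sketch of the path selection Oleg describes in the LOD allocator's create-preparation code. This is from memory of the 2.5-era lod_qos_prep_create(); the function and field names (ldo_def_stripe_offset, ld_tgt_count, flag) are assumptions and may differ slightly in the actual tree:

        /* a default stripe offset of -1 is stored as an unsigned value, so it
         * compares >= the OST count and takes the QOS (or round-robin) path;
         * any explicit starting offset takes lod_alloc_specific() instead */
        if (lo->ldo_def_stripe_offset >= d->lod_desc.ld_tgt_count) {
                rc = lod_alloc_qos(env, lo, stripe, flag, th);
                if (rc == -EAGAIN)
                        rc = lod_alloc_rr(env, lo, stripe, flag, th);
        } else {
                rc = lod_alloc_specific(env, lo, stripe, flag, th);
        }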

Comment by Niu Yawei (Inactive) [ 27/Oct/14 ]

I'm wondering why the QOS_DEBUG() messages in lod_alloc_qos() weren't printed.

Comment by Oleg Drokin [ 27/Oct/14 ]

QOS_DEBUG is compiled out by default, which is really unfortunate:

#if 0
#define QOS_DEBUG(fmt, ...)     CDEBUG(D_OTHER, fmt, ## __VA_ARGS__)
#define QOS_CONSOLE(fmt, ...)   LCONSOLE(D_OTHER, fmt, ## __VA_ARGS__)
#else
#define QOS_DEBUG(fmt, ...)
#define QOS_CONSOLE(fmt, ...)
#endif

Whoever thought that was a great idea was not right.

Niu, can you please open a separate ticket for this and perhaps add a patch?
Also, D_OTHER might not be a great mask for it either, so please see if something else would be better here.

Comment by Tommy Minyard [ 27/Oct/14 ]

Oleg,
In regard to the file creation, we did have some directories set with a manual stripe offset rather than -1, so stripe creation was being forced onto specific OSTs. I ran lfs getstripe on the 500 created files and they all landed on obdidx values less than 16. I've attached the output from that file create. As you noted, a large number of OSTs were inactive due to us trying to distribute files across the other OSTs, but in this case all files still had their stripes on the first 16 OSTs.

This ticket is no longer critical for us since we were able to recreate OST0010 over the weekend; once it was mounted and had checked in with the MDS, file creation started across all the active OSTs once again like it should. This specific issue must have been due to the OST being offline and unable to check in with the MDS when the MDS had to be restarted after an LBUG crash. We certainly do not get into this state often, as usually all OSTs are available and not in a state where they can't be mounted.

Comment by Aurelien Degremont (Inactive) [ 06/Nov/14 ]

We faced the same issue at CEA. We analyzed the problem and tracked it down to lod_qos_statfs_update(). In this function, each OST is checked successively. If an OST has active=0, lod_statfs_and_check() will return ENOTCONN and the for-loop will break. The following OSTs won't be checked, their metadata will not be updated (it stays zero), and they will not be used when allocating file objects.

        for (i = 0; i < osts->op_count; i++) {
                idx = osts->op_array[i];
                avail = OST_TGT(lod,idx)->ltd_statfs.os_bavail;
                rc = lod_statfs_and_check(env, lod, idx,
                                          &OST_TGT(lod,idx)->ltd_statfs);
                if (rc)
                        break;
                if (OST_TGT(lod,idx)->ltd_statfs.os_bavail != avail)
                        /* recalculate weigths */
                        lod->lod_qos.lq_dirty = 1;
        }

A simple workaround for this bug is to disable QOS allocation by setting qos_threshold_rr to 100, falling back to simple round-robin allocation.

A fix seems to be to simply replace the break with a continue, as sketched below.
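For reference, a minimal sketch of what the patched loop would look like, assuming the change is exactly the break-to-continue swap described above (the actual patch may differ in details):

        for (i = 0; i < osts->op_count; i++) {
                idx = osts->op_array[i];
                avail = OST_TGT(lod,idx)->ltd_statfs.os_bavail;
                rc = lod_statfs_and_check(env, lod, idx,
                                          &OST_TGT(lod,idx)->ltd_statfs);
                if (rc)
                        /* this OST is unreachable (e.g. deactivated);
                         * skip it but keep refreshing the remaining OSTs */
                        continue;
                if (OST_TGT(lod,idx)->ltd_statfs.os_bavail != avail)
                        /* recalculate weights */
                        lod->lod_qos.lq_dirty = 1;
        }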

Comment by Tommy Minyard [ 06/Nov/14 ]

Thanks for the workaround, Aurelien; we will keep this in mind if we encounter this situation again. Do you know if this problem is still in the main branch of the source tree, or is it just in the 2.5.x versions like we are running? If so, it should be a simple fix as you noted.

Niu, can you take a look and confirm?

Comment by Alex Zhuravlev [ 06/Nov/14 ]

IIRC, originally osp_statfs() didn't return an error; instead it reported an "empty" OST if the connection was down.

Comment by Aurelien Degremont (Inactive) [ 06/Nov/14 ]

As far as I can see in the current master branch, the problem is still there.

Comment by Niu Yawei (Inactive) [ 07/Nov/14 ]

Niu, can you take a look and confirm?

I agree that the 'break' here is inappropriate, but I don't see why this can lead to objects always being allocated on the OSTs before the deactivated one.

In lod_alloc_qos(), statfs will be performed on all OSPs again:

        /* Find all the OSTs that are valid stripe candidates */
        for (i = 0; i < osts->op_count; i++) {
                if (!cfs_bitmap_check(m->lod_ost_bitmap, osts->op_array[i]))
                        continue;

                rc = lod_statfs_and_check(env, m, osts->op_array[i], sfs);
                if (rc) {
                        /* this OSP doesn't feel well */
                        continue;
                }

Comment by Antoine Percher [ 07/Nov/14 ]

Because in this case lod_statfs_and_check() puts the result into sfs, which is &lod_env_info(env)->lti_osfs,
and not into &OST_TGT(lod,idx)->ltd_statfs.
The question could be: why do we have two structures holding the same OST counters?

Comment by Niu Yawei (Inactive) [ 07/Nov/14 ]

Because in this case lod_statfs_and_check() puts the result into sfs, which is &lod_env_info(env)->lti_osfs,
and not into &OST_TGT(lod,idx)->ltd_statfs.

Indeed, I think you are right: lod_alloc_qos() does its check against the per-thread buffer, while the cached ltd_statfs for the OSTs after the deactivated one is never refreshed, so they end up with zero weight and are effectively never chosen.

The question could be: why do we have two structures holding the same OST counters?

I think it is because we don't want to take lq_rw_sem in lod_alloc_qos(); that would introduce a lot of contention.

Hi, Aurelien
Would you mind posting a patch to fix this (replace the 'break' with 'continue' in lod_qos_statfs_update())? Thanks.

Comment by Aurelien Degremont (Inactive) [ 07/Nov/14 ]

I will try to do that

Comment by Aurelien Degremont (Inactive) [ 07/Nov/14 ]

I've pushed: http://review.whamcloud.com/12617

Comment by Aurelien Degremont (Inactive) [ 12/Nov/14 ]

Patch for b2_5: http://review.whamcloud.com/12685

Comment by Aurelien Degremont (Inactive) [ 14/Nov/14 ]

I've finally found time for a reproducer.

cd /usr/lib64/lustre/tests
OSTCOUNT=5 MGSDEV=/tmp/lustre-mgs ./llmount.sh

# Fill the first OST to unbalance disk space and activate qos algorithm
lfs setstripe -c1 -i0 /mnt/lustre/fill.0
dd if=/dev/zero of=/mnt/lustre/fill.0 bs=1M count=100

sync
lfs df -h /mnt/lustre

# Stop OST #2 to avoid MDS getting stats from it
umount /mnt/ost3
# Deactivate this OST
lctl conf_param lustre-OST0002.osc.active=0

# Re-start MDT to clear cached data
umount /mnt/mds1
mount -t lustre /tmp/lustre-mdt1 /mnt/mds1 -o loop

# Now create a lot of files and check how they are striped
for i in {1..50}; do lfs setstripe -c1 /mnt/lustre/file.$i; done
lfs getstripe /mnt/lustre/file.* | awk '/0x/ {print $1}' | sort | uniq -c

Files are striped only on OSTs #0 and #1, with nothing on #3 and #4, even though #0 is almost full and #1, #3 and #4 are empty. (#2 is deactivated and stopped.)

Maybe I will find time to create a test case after SC14.

Comment by Gerrit Updater [ 04/Dec/14 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/12685/
Subject: LU-5778 lod: Fix lod_qos_statfs_update()
Project: fs/lustre-release
Branch: b2_5
Current Patch Set:
Commit: cebb8fd03635f2f4e8f17c3a902eeba8008b07c4

Comment by Peter Jones [ 04/Dec/14 ]

Fix landed for 2.5.4 and 2.7
