[LU-11663] corrupt data after page-unaligned write with zfs backend lustre 2.10 Created: 14/Nov/18  Updated: 17/Dec/18  Resolved: 30/Nov/18

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.12.0, Lustre 2.10.5, Lustre 2.10.6
Fix Version/s: Lustre 2.12.0, Lustre 2.10.6

Type: Bug Priority: Blocker
Reporter: Olaf Faaland Assignee: Alex Zhuravlev
Resolution: Fixed Votes: 0
Labels: llnl
Environment:

client catalyst: lustre-2.8.2_5.chaos-1.ch6.x86_64
server: porter lustre-2.10.5_2.chaos-3.ch6.x86_64
kernel-3.10.0-862.14.4.1chaos.ch6.x86_64 (RHEL 7.5 derivative)


Attachments: for-upload-lu-11663.tar.bz2, lu-11663-2018-11-26.tgz
Issue Links:
Related
is related to LU-11697 BAD WRITE CHECKSUM with t10ip4K and t... Resolved
is related to LU-11729 ARM: sanity test_810: BAD WRITE CHECK... Resolved
is related to LU-10683 write checksum errors Resolved
is related to LU-11798 cur_grant goes to 0 and never increas... Resolved
Severity: 2
Rank (Obsolete): 9223372036854775807

 Description   

The apparent contents of a file change after dropping caches:

[root@catalyst110:toss-4371.umm1t]# ./proc6.olaf
+ dd if=/dev/urandom of=testfile20K.in bs=10240 count=2
2+0 records in
2+0 records out
20480 bytes (20 kB) copied, 0.024565 s, 834 kB/s
+ dd if=testfile20K.in of=testfile20K.out bs=10240 count=2
2+0 records in
2+0 records out
20480 bytes (20 kB) copied, 0.0451045 s, 454 kB/s
++ md5sum testfile20K.out
+ original_md5sum='1060a4c01a415d7c38bdd00dcf09dd22  testfile20K.out'
+ echo 3
++ md5sum testfile20K.out
+ echo after drop_caches 1060a4c01a415d7c38bdd00dcf09dd22 testfile20K.out 717122f4dd25f2e75834a8b21c79ce50 testfile20K.out
after drop_caches 1060a4c01a415d7c38bdd00dcf09dd22 testfile20K.out 717122f4dd25f2e75834a8b21c79ce50 testfile20K.out                                                                        

[root@catalyst110:toss-4371.umm1t]# cat proc6.olaf
#!/bin/bash

set -x

dd if=/dev/urandom of=testfile.in bs=10240 count=2
dd if=testfile.in of=testfile.out bs=10240 count=2

#dd if=/dev/urandom of=testfile.in bs=102400 count=2
#dd if=testfile.in of=testfile.out bs=102400 count=2
original_md5sum=$(md5sum testfile.out)
echo 3 >/proc/sys/vm/drop_caches

echo after drop_caches $original_md5sum $(md5sum testfile.out)


 Comments   
Comment by Olaf Faaland [ 14/Nov/18 ]

Console log reports no errors.  Only that one lustre file system is mounted, and there are no issues with it or the network in between.  No servers were in recovery, starting, or stopping at the time of the example above.

With this file, the symptoms are 100% reproducible, so I can gather debug logs as required.   What would you like - rpctrace? vfstrace?  dlmtrace?

Comment by Olaf Faaland [ 14/Nov/18 ]

This is a production file system.

Comment by Andreas Dilger [ 14/Nov/18 ]

My first suggestion would be to check the strace output of "cp" to see if it is over-optimizing the file copy based on the stat() or FIEMAP output? There was a bug in cp that it wouldn't try to copy data if stat reported blocks = 0. We fixed that in Lustre by always reporting blocks = 1 if the file had dirty data, but maybe that patch is not in your version?

Next might be that the peer client doing the cp is not getting any block count from the glimpse request, so the workaround was working on the local node that originally wrote the file but not the other clients. We should be returning some block count estimate from the original writing client to the peer doing the cp, but it is possible that is missing/broken?
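
For concreteness, a sketch of those checks (file names borrowed from the reproducer script; filefrag is one way to exercise FIEMAP):

strace -f -o cp.strace cp testfile.in testfile.out
grep -iE 'fiemap|ioctl' cp.strace | head          # did cp consult FIEMAP at all?
stat -c 'size=%s blocks=%b' testfile.in           # blocks=0 would point at the old cp optimization
filefrag -v testfile.in                           # the extent map as the client reports it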

Comment by Andreas Dilger [ 14/Nov/18 ]

Also, what version of coreutils are you running?

Comment by Olaf Faaland [ 14/Nov/18 ]

I assume this was the product of a spell checker:

Also, what version of Corey told are you running?

But if not, tell me what it is. I added the kernel version to the environment section above.

Comment by Olaf Faaland [ 14/Nov/18 ]

more versions:

tar-1.26-34.el7.x86_64
coreutils-8.22-21.el7.x86_64
(aka Corey told)

Comment by Olaf Faaland [ 14/Nov/18 ]

It looks like the cp in the description is a red herring; I'll update the description with a simpler reproducer. tar and md5sum are enough to see the issue, but it does take two nodes. tar does not issue a fiemap according to strace.

Comment by Olaf Faaland [ 14/Nov/18 ]

It depends on the block size used when writing. bs=10240 triggers the problem and the checksums do not match, but bs=102400 does not trigger the problem.
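
A quick way to sweep request sizes is a loop built on the reproducer above (a sketch; needs root for drop_caches):

for bs in 4096 8192 10240 102400; do
    dd if=/dev/urandom of=testfile.in bs=$bs count=2 2>/dev/null
    dd if=testfile.in of=testfile.out bs=$bs count=2 2>/dev/null
    before=$(md5sum testfile.out | awk '{print $1}')
    echo 3 > /proc/sys/vm/drop_caches
    after=$(md5sum testfile.out | awk '{print $1}')
    [ "$before" = "$after" ] && echo "bs=$bs ok" || echo "bs=$bs CORRUPT"
done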

Comment by Oleg Drokin [ 14/Nov/18 ]

I see you are not calculating the checksum of the out file before dropping caches? Why?

Also, I do wonder whether inserting a sync between the two dds would make any difference (this is mostly playing into the same idea Andreas has about the fiemap). Also, please consider capturing both files so we can examine what's different about them.

Comment by Olaf Faaland [ 14/Nov/18 ]

Before the drop caches, the md5sums of testfile.in and testfile.out are the same. That check isn't shown in that particular example, but it has been verified. We've tried the sync you proposed, and that did not alter the behavior.

I have altered my test to create the file on NFS originally, which is not exhibiting this behavior. I checksum it there, and create a hexdump of it, and then use dd to copy its data to a file on the lustre 2.10 file system, and hexdump and checksum it there.

Before the drop_caches, the md5sum and hexdump match that of the version on NFS. After the drop caches, they do not.

Looking at the diffs of the hexdumps, the differences are not consistent with respect to their location in the file or their contents. Sometimes the damaged region is all 0s, sometimes it has visible structure, and sometimes the new data has no visible structure.

Comment by Olaf Faaland [ 14/Nov/18 ]

My earlier comment is not quite right. There is a pattern to the location when I test with the same file size and read/write request size. Using bs=10240 and count=2, the first difference always appears at offset 0x2800 (10240), i.e. at the boundary of the requests.

The data that is found at offset 0x2800 is different every time I issue dd to create a new file, but that offset is where the difference starts.
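
One way to locate the first differing offset directly (a sketch; the file names stand for the known-good copy and the suspect copy):

cmp testfile20K.in testfile20K.out                                        # first differing byte, decimal, 1-based
diff <(hexdump -C testfile20K.in) <(hexdump -C testfile20K.out) | head    # the differing region, in hex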

Comment by Olaf Faaland [ 14/Nov/18 ]

I have altered my test to create the file on NFS originally, which is not exhibiting this behavior. I checksum it there, and create a hexdump of it, and then use dd to copy its data to a file on the lustre 2.10 file system, and hexdump and checksum it there.

Before the drop_caches, the md5sum and hexdump match that of the version on NFS. After the drop caches, they do not.

This means the data cached when the writes occurred is the good data, but what was sent back by the OST is bad, correct? I'll go look at the data on the OST to see what it looks like.

Comment by Oleg Drokin [ 15/Nov/18 ]

btw the script you are providing appears to be single node, but in the comment you say this requires two nodes. What's the second node for?

Comment by Olaf Faaland [ 15/Nov/18 ]

btw the script you are providing appears to be single node, but in the comment you say this requires two nodes. What's the second node for?

Originally we reproduced the problem using two nodes; one to write the data and another to read and checksum it, to detect the problem.   Once we started dropping caches, we did not need a second node.

Comment by Olaf Faaland [ 15/Nov/18 ]

I haven't found the objects on disk; I'm going back to that in a minute. But from the client, with a sample 100k test file, copies made via dd with bs=10240 always have damage in the following extents (offsets, in hex). The actual content of the damaged areas is different every time.
0002800 - 0002fff
0007800 - 0007fff
000c800 - 000cfff
0011800 - 0011fff
0016800 - 0016fff

The rest of the file is correct.
PAGESIZE is 4096
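
For what it's worth, those extents are exactly the page-unaligned leading portions of the 10240-byte writes; a small sketch (not from the ticket) that derives the same ranges:

bs=10240; pagesize=4096; filesize=102400
for ((off = 0; off < filesize; off += bs)); do
    if (( off % pagesize != 0 )); then
        end=$(( (off / pagesize + 1) * pagesize - 1 ))   # up to the next page boundary
        printf '%07x - %07x\n' "$off" "$end"
    fi
done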

Comment by Olaf Faaland [ 15/Nov/18 ]

I mounted the file system from one of the OSS nodes (porter), so that the client is the same version (lustre-2.10.5_2.chaos-3.ch6.x86_64) as all the servers and the client communicates directly with the servers, not through routers.
On catalyst, the lustre 2.8 compute cluster, I created a file using dd and bs=10240 as described above.

When I read the file from the client mounted on the OSS, I see the corrupted data.

This seems to me to indicate that the problem is occurring in the write path, not the read path. Does that make sense?

Comment by Sarah Liu [ 15/Nov/18 ]

Cannot reproduce it with a tip-of-master server (build 3826, el7.5, kernel-3.10.0-862.14.4.el7_lustre.x86_64) and a 2.8.0 client
2 MDS with 1 MDT on each; 1 OSS with 2 OSTs, ldiskfs
1 client

[root@trevis-60vm4 lustre]# ./rp.sh 
+ dd if=/dev/urandom of=testfile.in bs=10240 count=2
2+0 records in
2+0 records out
20480 bytes (20 kB) copied, 0.00276562 s, 7.4 MB/s
+ dd if=testfile.in of=testfile.out bs=10240 count=2
2+0 records in
2+0 records out
20480 bytes (20 kB) copied, 0.00142726 s, 14.3 MB/s
++ md5sum testfile.out
+ original_md5sum='f6bcdb9f1b674d29cd313a46a1c0cedb  testfile.out'
+ echo 3
[ 1748.385888] rp.sh (21490): drop_caches: 3
++ md5sum testfile.out
+ echo after drop_caches f6bcdb9f1b674d29cd313a46a1c0cedb testfile.out f6bcdb9f1b674d29cd313a46a1c0cedb testfile.out
after drop_caches f6bcdb9f1b674d29cd313a46a1c0cedb testfile.out f6bcdb9f1b674d29cd313a46a1c0cedb testfile.out
[root@trevis-60vm4 lustre]# ls

 

Comment by Olaf Faaland [ 15/Nov/18 ]

Sarah,
If there's any information I can provide let me know. Thanks.

Comment by Andreas Dilger [ 16/Nov/18 ]

Olaf, as Sarah is having trouble reproducing this, can you please run a test with -1 debug on the client? My first guess is that this is somehow related to the client IO stack. Given that there would only be a handful of operations in the log, it shouldn't be too bad to look through.
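
A sketch of one way to do that capture around a single run (the buffer size is just a suggestion):

lctl set_param debug=-1 debug_mb=1024     # full debug with a larger trace buffer
lctl dk > /dev/null                       # flush whatever is already buffered
./proc6.olaf                              # run the reproducer
lctl dk > /tmp/lu-11663.$(hostname).dk    # dump the log covering just this run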

Comment by Andreas Dilger [ 16/Nov/18 ]

I guess the other question is whether you have tried running the reproducer with some previous version of the client? Is it possible that this is a newly introduced problem? It seems a bit strange that a problem like this would have gone unnoticed since 2.8 was released.

Comment by Olaf Faaland [ 16/Nov/18 ]

In testing since yesterday I'm sometimes finding the corruption does not occur - that is, if I run the same reproducer 60 times in a row on the same client, for example, it may show corruption 50 times in a row and then show no corruption for the last 10.

I attached for-upload-lu-11663.tar.bz2, which has -1 debug logs for 3 attempts, along with the terminal output from when I ran the reproducer and an index matching the results to the log files.  I run lctl dk before and after each attempt, so there are 6 log files.

After the first attempt, which shows the corruption, I umount all the lustre file systems and then mount them again.  I then run the same reproducer twice and no corruption occurs.  I'm not sure whether that's due to the umount/remount or not.

Comment by Olaf Faaland [ 16/Nov/18 ]

I guess the other question is whether you have tried running the reproducer with some previous version of the client? Is it possible that this is a newly introduced problem? It seems a bit strange that a problem like this would have gone unnoticed since 2.8 was released.

I agree.  I'll try that.

Comment by Olaf Faaland [ 19/Nov/18 ]

I'm still working on trying previous client versions. I should have at least one other version tested today.

For context, this issue has been observed on client cluster catalyst, which mounts three lustre file systems.

  • lustre3 hosted on porter. This is lustre 2.10.5 based.
  • lustre1 hosted on copper. This is lustre 2.8.2 based.
  • lscratchh hosted on zinc. This is lustre 2.8.2 based.

Connections are through routers. The routers in catalyst are the same version as the clients. All nodes are x86_64. I don't recall the IB-to-IP router nodes' Lustre or kernel versions but can find out.

catalyst-compute <> catalyst-router <> lustre3
catalyst-compute <> catalyst-router <> IB-to-IP-router <> IP-to-IB-router <> (lustre1 and lscratchh)

We have observed this issue only on lustre3 so far.

During testing this weekend I ran two 1000-iteration test sets on 20 dedicated catalyst nodes. During both sets:

  • one node, catalyst110, reproduced the problem > 95% of the time
  • a different node reproduced the problem about 15% of the time
  • fifteen nodes never reproduced the problem

In the first test set, I only ran the reproducer against lustre3, where the issue was first identified last week.
In the second test set, I ran the reproducer first against lustre3 and then against lustre1. The problem was reproduced only with lustre3, the 2.10 file system. It was never reproduced with lustre1.

Comment by Sarah Liu [ 20/Nov/18 ]

I downloaded lustre-2.10.5_2.chaos.tar.gz and lustre-2.8.2_1.chaos.tar.gz (cannot find 2.8.2_5) from https://github.com/LLNL/lustre/releases, compiled them, and cannot reproduce.

Server: compiled lustre-2.10.5_2.chaos on kernel 3.10.0-862.14.4.el7_lustre.x86_64
1 MDT, 1 OST on single node

[root@trevis-60vm1 utils]# uname -a
Linux trevis-60vm1.trevis.whamcloud.com 3.10.0-862.14.4.el7_lustre.x86_64 #1 SMP Thu Nov 8 07:41:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@trevis-60vm1 utils]#
[root@trevis-60vm1 ~]# lu-11663/lustre-2.10.5_2.chaos/lustre/utils/lctl get_param version
version=2.10.5
[root@trevis-60vm1 ~]#

[root@trevis-60vm1 ~]# rpm -qa|grep zfs
libzfs2-0.7.9-1.el7.x86_64
kmod-lustre-osd-zfs-2.11.56_140_g2339e1b-1.el7.x86_64
libzfs2-devel-0.7.9-1.el7.x86_64
lustre-osd-zfs-mount-2.11.56_140_g2339e1b-1.el7.x86_64
zfs-0.7.9-1.el7.x86_64
...

Single client: compiled lustre-2.8.2_1.chaos on kernel 3.10.0-327.3.1.el7.x86_64 (2.8.0 kernel)

[root@trevis-60vm4 ~]# lu-11663/lustre-2.8.2_1.chaos/lustre/utils/lctl get_param version
version=lustre: 2.8.2
kernel: patchless_client
build:  2.8.2
[root@trevis-60vm4 ~]#
[root@trevis-60vm4 ~]# uname -a
Linux trevis-60vm4.trevis.whamcloud.com 3.10.0-327.3.1.el7.x86_64 #1 SMP Fri Nov 20 05:40:26 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@trevis-60vm4 ~]#

result

[root@trevis-60vm4 ~]# cd /mnt/lustre/
[root@trevis-60vm4 lustre]# sh foo.sh 
+ dd if=/dev/urandom of=testfile.in bs=10240 count=2
2+0 records in
2+0 records out
20480 bytes (20 kB) copied, 0.00236147 s, 8.7 MB/s
+ dd if=testfile.in of=testfile.out bs=10240 count=2
2+0 records in
2+0 records out
20480 bytes (20 kB) copied, 0.00111802 s, 18.3 MB/s
++ md5sum testfile.out
+ original_md5sum='20dd24fb015feb7de67bbdc12f2c16bf  testfile.out'
+ echo 3
++ md5sum testfile.out
+ echo after drop_caches 20dd24fb015feb7de67bbdc12f2c16bf testfile.out 20dd24fb015feb7de67bbdc12f2c16bf testfile.out
after drop_caches 20dd24fb015feb7de67bbdc12f2c16bf testfile.out 20dd24fb015feb7de67bbdc12f2c16bf testfile.out
[root@trevis-60vm4 lustre]#

Also tried client with branch: origin/2.8.2-llnl from fs/lustre-release-fe-llnl
top commit 8356dd88e2e59edd1462bb4647f61d5a210d4262
Ran the reproducer 10 times; cannot reproduce.

Comment by Olaf Faaland [ 20/Nov/18 ]

Thanks for trying those, Sarah. Can you suggest information to capture from my clients where the problem is reproducing? As I mentioned, even on my cluster during testing over the weekend, some nodes reproduced reliably, but many never did.

Comment by Olaf Faaland [ 21/Nov/18 ]

I've had issues getting other versions of the client to work due to changes in IB with map_on_demand, peer_credits, etc. in recent versions, but I think I'm past that.

Today I reproduced the issue with client version 2.8.2_2.chaos.  I'll try with earlier and later clients tomorrow.

Comment by Peter Jones [ 25/Nov/18 ]

Olaf

Is there some pattern around which nodes can hit this issue vs those that don't?

Peter

Comment by Olaf Faaland [ 25/Nov/18 ]

Is there some pattern around which nodes can hit this issue vs those that don't?

Not that I've been able to find.

Comment by Olaf Faaland [ 26/Nov/18 ]

Can you suggest information to capture from my clients where the problem is reproducing? As I mentioned, even on my cluster during testing over the weekend, some nodes reproduced reliably, but many never did.

Poke

Comment by Oleg Drokin [ 26/Nov/18 ]

I reviewed the -1 logs.

Interesting observations I have:
1. Before the unmount, you had some strange grant shortage, which means every write was actually synchronous. You can track this by messages like the following:

00000008:00000020:37.0:1542388236.271202:0:6433:0:(osc_cache.c:1608:osc_enter_cache()) lustre3-OST002a-osc-ffff8a1c6ed62800: grant { dirty: 0/8192 dirty_pages: 0/16449536 dropped: 0 avail: 0, reserved: 0, flight: 0 }lru {in list: 0, left: 3, waiters: 0 }no grant space, fall back to sync i/o

2. So this leads to a sync write in the middle of a page, twice.
3. This actually happened to both the .in and .out files, but only the .out file is somehow damaged, huh?
4. We know that we are writing the correct data to the server, because we can observe both write requests, to the .in and .out files, and the checksums come out the same; see the "checksum at write origin" message repeated twice for the same request.
We cannot see whether that is also what was read back, though, because the final read goes through readahead, so all 4 pages are read in one go and the checksum is not comparable (an interesting experiment would be to disable readahead or do direct I/O reads or some such to see if the bad data comes straight from the server, which I think it does, but we cannot be 100% sure; a sketch of this experiment follows the list).
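
Something along these lines (a sketch, untested here):

lctl set_param llite.*.max_read_ahead_mb=0                      # disable client readahead
echo 3 > /proc/sys/vm/drop_caches
md5sum testfile.out                                             # pages now come in without readahead
dd if=testfile.out bs=4096 iflag=direct 2>/dev/null | md5sum    # or bypass the client cache entirely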

Now looking at the successful iterations after remounts, there are two major differences there:
1. There's plenty of grant so no sync writes are happening.
2. The drop_caches does nothing; there are NO write RPCs in those logs (grep for 'o4-' to confirm). There are no reads either (grep for 'checksum .* confirmed'; you see only two requests with fffffff checksum, which is the empty read at EOF).

These two things combined mean that whatever corruption you had, even if it is happening, would not be seen.
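
Collected in one place, the greps referred to above (the log file name is a placeholder):

grep 'no grant space, fall back to sync i/o' client.dk   # sync writes caused by the grant shortage
grep 'checksum at write origin' client.dk                # data checksummed as it left the client
grep 'o4-' client.dk                                     # were any write RPCs sent at all?
grep 'checksum .* confirmed' client.dk                   # read-side checksum confirmations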

Anyway, my current conclusion is that the corruption is actually happening on the server. It could be the disk or Lustre somewhere, I don't know about that, but the client seems to be doing everything OK.

As such, I suspect we would need client+server logs of a reproduced case. Also, please include both the .in and .out files so we can compare them. To facilitate reproducing, it looks like you might want to dramatically shrink grant availability somehow (is the fs more prone to this when mostly full? are there quotas in place that are getting low?). I do wonder whether the same thing happens when you use direct I/O straight from dd, but since these writes are not page-aligned, that cannot happen, and we have no easy way of triggering the sync I/O otherwise, huh.

I'll see if I can find a way to trigger sync io deliberately.

Comment by Olaf Faaland [ 27/Nov/18 ]

Oleg, I've uploaded lu-11663-2018-11-26.tgz which contains the test files and debug logs on both client and server during two tests; one iteration that reproduces the issue, on client catalyst101, and one where the corruption does not occur, on client catalyst106. There's a typescript file that shows the output of the test as it ran. In both cases the stripe index of the files is 0.

The node which fails the test takes much longer to write the data, consistent with the sync writes you saw in the last debug logs.

The file system where this is occurring is 28% full, with individual OSTs ranging from 25% full to 31% full.
The amount of data I personally have stored on each OST ranges from 23M to 308M; there are 80 OSTs. My total usage is 5.37G and total quota is 18T. lfs quota says total allocated block limit is 5T, and each OST reports a limit of 64G.
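
Figures like those can be gathered with commands along these lines (the user name and mount point are placeholders):

lfs df /p/lustre3                   # per-OST usage / fill level
lfs quota -v -u <user> /p/lustre3   # per-OST allocated block limits for the user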

Comment by Andreas Dilger [ 27/Nov/18 ]

Olaf, I think Oleg was referring to the space grant, which can be seen on the OSS with "lctl get_param obdfilter.*.tot_granted", and the amount granted to the client with "lctl get_param osc.*.cur_grant_bytes" (probably only needed for the OST the file was striped over). Also useful would be "lctl get_param osc.*.max_dirty_mb".
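
In one place (the first command runs on the OSS, the other two on the client):

lctl get_param obdfilter.*.tot_granted   # OSS: total grant outstanding per OST
lctl get_param osc.*.cur_grant_bytes     # client: grant currently held per OSC
lctl get_param osc.*.max_dirty_mb        # client: dirty-data cap per OSC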

Comment by Oleg Drokin [ 27/Nov/18 ]

OK, I can reproduce this on master now too. There are two requirements: sync writes (due to lack of grant/quota) and a ZFS backend; ldiskfs works fine.

In order to force the lack-of-grant/quota codepath, we can use the 0x411 fail_loc on the client like this: lctl set_param fail_loc=0x411

Then run this script, inspired by the original, in a Lustre dir:

dd if=/dev/urandom of=testfile.in bs=10240 count=2 
dd if=testfile.in of=testfile.out bs=10240 count=2
original_md5sum=$(md5sum testfile.in)
echo 3 | sudo tee /proc/sys/vm/drop_caches ; sleep 2
md5sum=$(md5sum testfile.out)
echo after drop_caches $md5sum before $original_md5sum

Set this and you'll see the problem 100% of the time. What's interesting is that passing oflag=sync to dd does not help, as it still results in full-page writes in the RPC for partial-page writes on the VFS side.

It appears that the problem is either in ZFS or, more likely, in osd-zfs: when a partial-page write happens, the previous content of the page is not read from disk, so we write out the partial content we got in the RPC but clobber whatever was supposed to be there in the part of the page we are not overwriting.

Comparing osd_write_prep, we can see it's a noop in osd-zfs, but osd-ldiskfs actually prereads all partial pages there. On the other hand, osd_write in osd-zfs uses dmu_write(_by_dnode) with an offset, so perhaps ZFS is expected to handle this?

Either way at least it's clear what's going on now, hence this update.

Comment by Oleg Drokin [ 27/Nov/18 ]

Shortest reproducer:

lctl set_param fail_loc=0x411                        # force the no-grant path so writes go out synchronously
dd if=/dev/urandom of=testfile.in bs=10240 count=2   # page-unaligned writes
md5sum testfile.in
lctl set_param ldlm.namespaces.*osc*.lru_size=clear  # drop the DLM locks, invalidating cached pages
md5sum testfile.in                                   # re-read from the OST; differs on affected systems

Comment by Peter Jones [ 27/Nov/18 ]

Alex

Can you please investigate?

Peter

Comment by Oleg Drokin [ 27/Nov/18 ]

btw, since we are concentrating this ticket on the data corruption, if you want to pursue why some nodes are stuck with no grant and do not appear to be getting any more grant until remount, you probably should open another ticket for this.

Comment by Patrick Farrell (Inactive) [ 27/Nov/18 ]

Olaf,

If a bug is opened for the grant issue, could you tag me on it?  Thx.

Comment by Peter Jones [ 29/Nov/18 ]

Strange. Alex's patch did not get an auto comment: https://review.whamcloud.com/#/c/33726/. As I understand it, this patch seems to be holding up well against the reproducer, but the test cases need some refinement. Are we now at the point where LLNL can use a b2_10 port of this patch on their affected filesystem?

Comment by Gerrit Updater [ 29/Nov/18 ]

Oleg Drokin (green@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33748
Subject: LU-11663 osd-zfs: write partial pages with correct offset
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: 6f9a0292eacb0d603b14cc03290a574cb7f0c846
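
For anyone who wants to test this b2_10 port locally, one way to fetch it (assuming Gerrit's usual refs/changes layout and the patch set 1 listed above):

git fetch https://review.whamcloud.com/fs/lustre-release refs/changes/48/33748/1
git cherry-pick FETCH_HEAD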

Comment by Alex Zhuravlev [ 29/Nov/18 ]

There are two options here: 1) revert LU-10683 (but potentially get the bad RPC checksum messages back); 2) apply the https://review.whamcloud.com/#/c/33726/ patch, which is still under testing.
Both options have worked against our reproducer (see the reproducer in the option #2 patch) on b2_10.
We are still investigating the root cause of LU-10683 (bad checksums).

Comment by Li Xi [ 30/Nov/18 ]

Of the two options that Alex pointed out, I feel that reverting the LU-10683 patch is not a good one. The lnb_page_offset should be the same as the client-side page offset in 'struct brw_page', shouldn't it? It doesn't feel right to move the data to offset 0 of a page when the data has a non-zero offset within the page.

Comment by Alex Zhuravlev [ 30/Nov/18 ]

Well, from the filesystem's point of view, there is no requirement to use the same page offset. Moreover, the client and server may have different page sizes, which makes it impossible to always match offsets, right?

Comment by Li Xi [ 30/Nov/18 ]

As commented in LU-11697, the correct page offset in lnb_page_offset is the reason why the LU-10683 patch fixed the RPC checksum error. Both osc_checksum_bulk() and tgt_checksum_niobuf() assume the page offsets are properly initialized and equal to each other.

Comment by Gerrit Updater [ 30/Nov/18 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33726/
Subject: LU-11663 osd-zfs: write partial pages with correct offset
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: c038909fbc2b3b14763dd731e9c877d11d338f04

Comment by Gerrit Updater [ 30/Nov/18 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33748/
Subject: LU-11663 osd-zfs: write partial pages with correct offset
Project: fs/lustre-release
Branch: b2_10
Current Patch Set:
Commit: 18d6b8fb2c359431a6da57b178ec0845925ea89c

Comment by Peter Jones [ 30/Nov/18 ]

Fix landed for 2.12 and 2.10.6. Checksum issues for master will be covered under LU-11697

Comment by Olaf Faaland [ 17/Dec/18 ]

Patrick, I opened a ticket re: grant going to 0, it is https://jira.whamcloud.com/browse/LU-11798

Generated at Sat Feb 10 02:45:51 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.