[LU-11187] MMP updates sometimes fail T10PI checks Created: 27/Jul/18  Updated: 22/Jan/19  Resolved: 06/Oct/18

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.3
Fix Version/s: Lustre 2.12.0, Lustre 2.10.7

Type: Bug Priority: Critical
Reporter: Mahmoud Hanafi Assignee: Dongyang Li
Resolution: Fixed Votes: 0
Labels: None

Attachments: File dm20.hexdump     File trace.dat    
Issue Links:
Duplicate
is duplicated by LU-5481 mmp updates can some times fail T10PI... Resolved
Related
Severity: 2
Rank (Obsolete): 9223372036854775807

 Description   

We had seen this before in LU-5481. At the time we just removed MMP from the OSTs, because we didn't use host failover. But our new filesystem does use host failover. We are seeing the same error on iSER+T10PI connected storage. This error can happen at mount time and at random times during IO.

 [ 3520.840977] mlx5_3:mlx5_poll_one:657:(pid 0): CQN: 0xc05 Got SIGERR on key: 0x80007b0b err_type 0 err_offset 207 expected 9b3c actual a13c
[ 3520.878451] PI error found type 0 at sector 1337928 expected 953c vs actual 9b3c
[ 3520.900800] PI error found type 0 at sector 1337928 expected 9b3c vs actual a13c
[ 3520.923968] blk_update_request: I/O error, dev sdai, sector 20150568
[ 3520.943377] blk_update_request: I/O error, dev sdae, sector 20150568
[ 3520.963067] blk_update_request: I/O error, dev dm-15, sector 20150568
[ 3520.982436] Buffer I/O error on dev dm-15, logical block 2518821, lost async page write
[ 3521.006511] Buffer I/O error on dev dm-15, logical block 2518822, lost async page write
[ 3521.006558] blk_update_request: I/O error, dev dm-13, sector 20150568
[ 3521.006559] Buffer I/O error on dev dm-13, logical block 2518821, lost async page write
[ 3521.006563] Buffer I/O error on dev dm-13, logical block 2518822, lost async 

device /dev/dm-15 mounted by lustre
Filesystem volume name:   nbp10-OST001d
Last mounted on:          /
Filesystem UUID:          08b337bb-b3b1-48b0-925b-0bf5d3ba7253
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr dir_index filetype needs_recovery extent 64bit mmp flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize quota
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              9337344
Block count:              19122880512
Reserved block count:     0
Free blocks:              19120188065
Free inodes:              9337011
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16
Inode blocks per group:   2
Flex block group size:    64
Filesystem created:       Fri Jul 27 10:21:56 2018
Last mount time:          Fri Jul 27 10:44:14 2018
Last write time:          Fri Jul 27 10:44:15 2018
Mount count:              4
Maximum mount count:      -1
Last checked:             Fri Jul 27 10:21:56 2018
Check interval:           0 (<none>)
Lifetime writes:          7774 kB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               512
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      2ebd542d-9757-456f-b597-43fae5c542c0
Journal backup:           inode blocks
MMP block number:         2518821
MMP update interval:      5
User quota inode:         3
Group quota inode:        4

Note that the block with the error is the MMP block.



 Comments   
Comment by Andreas Dilger [ 27/Jul/18 ]

This is likely an artifact of how MMP writes are being submitted to the block device.

The write_mmp_block() IO submission likely needs to be modified to use the right interface if T10PI is enabled.

If that is already done properly, there is some chance that on a heavily-loaded system the multiple writes to the MMP block are racing with the integrity calculation, and this is resulting in the buffer being modified during/after the checksum but before the data is written to disk. Blocking or skipping the next write if the previous IO is incomplete is an option in this case, but may result in slower updates of the MMP block.

A possibly better option would be to use the extend-integrity-processing-fn-rhel7.patch from LU-10472 and then submit the MMP block with a pre-computed PI checksum, to avoid the race in computing it.

Comment by Peter Jones [ 30/Jul/18 ]

Dongyang

Do you have any advice to offer here?

Peter

Comment by Dongyang Li [ 01/Aug/18 ]

I'm not sure if extend-integrity-processing-fn-rhel7.patch from LU-10472 could help here.

The patch allows us to override the integrity generate/verify functions for a given bio, but write_mmp_block() is just using a buffer_head and submit_bh(). I'm also confused: even if we can pass a pre-computed PI with the MMP block, if the buffer gets modified again before the I/O is done, we could still end up with mismatching data and PI, right?

Comment by Li Xi [ 01/Aug/18 ]

I hit the same problem when using iSER + T10PI disks, and the environment was never stable. A lot of I/O errors happened when I was doing very small I/Os, and I think I saw almost the same error messages in that environment too. Thus, I believe there might be some bugs at the driver level. I agree with Dongyang: most likely the LU-10472 patch won't be very useful for this problem.

Comment by Andreas Dilger [ 09/Aug/18 ]

Looking at the write_mmp_block() code more closely, I see that it is not doing a "fire and forget" as I thought (which might result in two MMP block updates modifying the same buffer and causing PI errors). The kmmpd thread submits the MMP writes synchronously and waits for the IO to complete before submitting a new MMP block update. This means there should only ever be a single MMP write in flight at a time, from the only thread that should be updating this buffer.

If there are problems with only the MMP block, this implies that either the block layer is returning from a sync write before the buffer is actually persistent on disk, or that the block device is potentially reordering the blocks in an internal queue?

What kernel version is in use here? One option you could try is adding REQ_FUA into the flags passed to submit_bh() to force the buffer through the block device cache. However, this may impact storage device performance.
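
For context, a minimal sketch approximating the pre-fix write_mmp_block() submission path being discussed (based on the 3.10-era ext4/ldiskfs code; exact flags and helpers vary by kernel version, and the REQ_FUA line only marks where the suggestion above would go, it is not existing code):

static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
{
	struct mmp_struct *mmp = (struct mmp_struct *)(bh->b_data);

	ext4_mmp_csum_set(sb, mmp);        /* only with metadata_csum enabled */
	mark_buffer_dirty(bh);             /* note: makes the buffer visible to writeback */
	lock_buffer(bh);
	bh->b_end_io = end_buffer_write_sync;
	get_bh(bh);
	/* suggestion above: WRITE_SYNC | REQ_META | REQ_PRIO | REQ_FUA */
	submit_bh(WRITE_SYNC | REQ_META | REQ_PRIO, bh);
	wait_on_buffer(bh);                /* kmmpd blocks here, so only one MMP write in flight */
	if (unlikely(!buffer_uptodate(bh)))
		return 1;
	return 0;
}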

Comment by Mahmoud Hanafi [ 09/Aug/18 ]

 We are running

3.10.0-693.21.1.el7.20180508.x86_64.lustre2103

We have only seen the issue with the MMP block. What could be changing the buffer before it is synced to disk?

 

Comment by Andreas Dilger [ 10/Aug/18 ]

One option would be to compute the checksum of the MMP block before submission, and then again afterward. That would tell us if the block is actually being modified, and if it changed, the before and after MMP blocks can be printed out to the console.

Comment by Andreas Dilger [ 17/Aug/18 ]

I think there are two possible approaches to debugging this - either save a checksum of the MMP block before/after the write and use that to compare whether the block is modified, or just save a copy of the whole MMP structure in memory for later comparison once the write completes. With newer kernels having support for EXT4_FEATURE_RO_COMPAT_METADATA_CSUM (added in the 3.4 kernel) it would be straightforward to use the MMP checksum, but that is not quite as useful for debugging since we wouldn't know more than just "the MMP block was modified".

I think a better approach would be to save the whole MMP structure before each write, and then do a memcmp() after the timeout to see if the block has been modified. Use dump_mmp_msg() to print out the saved and current MMP block contents for comparison (e.g. is it random garbage, was it updated by another node or e2fsck, is it a stale copy of the data, etc).

While looking at this code, I thought it might be possible that the mmp_check_interval update at the end of the loop was modifying the buffer while it is being written, but write_mmp_block() should not return before the write is complete (it uses REQ_SYNC and b_end_io = end_buffer_write_sync()). Also, there is no mention of the "Error writing to MMP block" message in the logs, so it doesn't seem like there is a problem with the IO submission itself, or we would be seeing an error up at the filesystem level.
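
A rough sketch of the second approach, as it might look inside the kmmpd() loop (illustrative only; saved_mmp and the exact dump_mmp_msg() arguments are assumptions, not the actual debug patch):

/* hypothetical instrumentation in kmmpd(), around each MMP update */
struct mmp_struct saved_mmp;

memcpy(&saved_mmp, mmp, sizeof(saved_mmp));          /* snapshot before the write */
retval = write_mmp_block(sb, bh);                    /* returns after the sync write completes */
if (memcmp(&saved_mmp, bh->b_data, sizeof(saved_mmp))) {
	dump_mmp_msg(sb, (struct mmp_struct *)bh->b_data,
		     "MMP block mismatch.");
	dump_mmp_msg(sb, &saved_mmp, "copy of MMP block.");
}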

Comment by Gerrit Updater [ 21/Aug/18 ]

Li Dongyang (dongyangli@ddn.com) uploaded a new patch: https://review.whamcloud.com/33038
Subject: LU-11187 ldiskfs: add debug patches to show mmp block contents
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: daef03d32567d61fab0db16d254412a6d818c1f1

Comment by Dongyang Li [ 21/Aug/18 ]

I've added the debug patch as Andreas suggested. I also noticed that in the log we have

[ 3520.982436] Buffer I/O error on dev dm-15, logical block 2518821, lost async page write

The "lost async page write" message is from end_buffer_async_write(), but in write_mmp_block(),

we use end_buffer_write_sync() as the completion handler, which would have a different message.

Also, there's no "Error writing to MMP block" in the log. I suspect the write to the mmp block is not from the mmp code path, so I've added a dump_stack() just above where the async message would appear; hopefully we can find out who was writing to the mmp block.

Comment by Mahmoud Hanafi [ 25/Aug/18 ]

Here is console output for PI error


[ 4105.807355] LDISKFS-fs warning (device dm-4): kmmpd:208: MMP block mismatch.
[ 4105.828551] LDISKFS-fs warning (device dm-4): kmmpd:208: MMP failure info: last update time: 1535161818, last update node: nbp16-srv1, last update device: dm-4
[ 4105.828551] magic: 4d4d50, seq: 215, check_interval: 10, checksum: 0
[ 4105.828551] 
[ 4105.894981] LDISKFS-fs warning (device dm-4): kmmpd:209: copy of MMP block.
[ 4105.915920] LDISKFS-fs warning (device dm-4): kmmpd:209: MMP failure info: last update time: 1535161818, last update node: nbp16-srv1, last update device: dm-4
[ 4105.915920] magic: 4d4d50, seq: 215, check_interval: 10, checksum: 0
[ 4105.915920] 
[ 4106.201033] mlx5_3:mlx5_poll_one:671:(pid 0): CQN: 0xc05 Got SIGERR on key: 0x800068a4 err_type 0 err_offset 207 expected 2480 actual 2a80
[ 4106.240867] PI error found type 0 at sector 1337928 expected 2480 vs actual 2a80
[ 4106.263174] blk_update_request: I/O error, dev sdl, sector 20150568
[ 4106.282557] blk_update_request: I/O error, dev dm-4, sector 20150568
[ 4106.301660] CPU: 19 PID: 141 Comm: ksoftirqd/19 Tainted: G           OE  ------------   3.10.0-693.21.1.el7.20180508.x86_64.lustre211 #1
[ 4106.338539] Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 02/15/2018
[ 4106.348118] LDISKFS-fs warning (device dm-7): kmmpd:208: MMP block mismatch.
[ 4106.348120] LDISKFS-fs warning (device dm-7): kmmpd:208: MMP failure info: last update time: 1535161818, last update node: nbp16-srv1, last update device: dm-7
[ 4106.348120] magic: 4d4d50, seq: 21c, check_interval: 10, checksum: 0
[ 4106.348120] 
[ 4106.348121] LDISKFS-fs warning (device dm-7): kmmpd:209: copy of MMP block.
[ 4106.348122] LDISKFS-fs warning (device dm-7): kmmpd:209: MMP failure info: last update time: 1535161818, last update node: nbp16-srv1, last update device: dm-7
[ 4106.348122] magic: 4d4d50, seq: 21c, check_interval: 10, checksum: 0
[ 4106.348122] 
[ 4106.472717] Call Trace:
[ 4106.472723]  [<ffffffff8168f4b8>] dump_stack+0x19/0x1b
[ 4106.472727]  [<ffffffff81235550>] end_buffer_async_write+0xf0/0x120
[ 4106.472728]  [<ffffffff81233dcf>] end_bio_bh_io_sync+0x2f/0x60
[ 4106.472730]  [<ffffffff8123aec7>] bio_endio+0x67/0xb0
[ 4106.472731]  [<ffffffff8123913d>] ? bio_advance+0x1d/0xd0
[ 4106.472734]  [<ffffffff812f2ce0>] blk_update_request+0x90/0x370
[ 4106.472735]  [<ffffffff812f2fdc>] blk_update_bidi_request+0x1c/0x80
[ 4106.472736]  [<ffffffff812f32ef>] blk_end_bidi_request+0x1f/0x60
[ 4106.472737]  [<ffffffff812f340f>] blk_end_request_all+0x1f/0x30
[ 4106.472754]  [<ffffffffa03b5755>] dm_softirq_done+0x255/0x2d0 [dm_mod]
[ 4106.472756]  [<ffffffff812fa0d6>] blk_done_softirq+0x96/0xc0
[ 4106.472759]  [<ffffffff81091135>] __do_softirq+0xf5/0x280
[ 4106.472761]  [<ffffffff810912f8>] run_ksoftirqd+0x38/0x50
[ 4106.472763]  [<ffffffff810b9854>] smpboot_thread_fn+0x144/0x1a0
[ 4106.472764]  [<ffffffff810b9710>] ? lg_double_unlock+0x40/0x40
[ 4106.472766]  [<ffffffff810b1131>] kthread+0xd1/0xe0
[ 4106.472767]  [<ffffffff810b1060>] ? insert_kthread_work+0x40/0x40
[ 4106.472769]  [<ffffffff816a14dd>] ret_from_fork+0x5d/0xb0
[ 4106.472770]  [<ffffffff810b1060>] ? insert_kthread_work+0x40/0x40
[ 4106.472772] Buffer I/O error on dev dm-4, logical block 2518821, lost async page write
[ 4106.801045] LDISKFS-fs warning (device dm-11): kmmpd:208: MMP block mismatch.
[ 4106.801047] LDISKFS-fs warning (device dm-11): kmmpd:208: MMP failure info: last update time: 1535161819, last update node: nbp16-srv1, last update device: dm-11
[ 4106.801047] magic: 4d4d50, seq: 222, check_interval: 10, checksum: 0
[ 4106.801047] 
[ 4106.801047] LDISKFS-fs warning (device dm-11): kmmpd:209: copy of MMP block.
[ 4106.801049] LDISKFS-fs warning (device dm-11): kmmpd:209: MMP failure info: last update time: 1535161819, last update node: nbp16-srv1, last update device: dm-11
[ 4106.801049] magic: 4d4d50, seq: 222, check_interval: 10, checksum: 0

 

Comment by Andreas Dilger [ 25/Aug/18 ]

The debug information shows that it is not the MMP block data itself that is changing, but something else in the 1KB buffer that is not part of the MMP structure? The MMP memcmp() that was added is detecting the difference in the buffer at least, so we do have confirmation that the buffer is actually being changed, and it isn't just a transient issue in the block layer. One option to debug further is to dump the rest of the buffer as hex values before and after to see which byte(s) are modified.
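
If it helps, a hedged sketch of such a dump using the kernel's print_hex_dump(); saved_buf here is an assumed copy of the buffer taken before submission, not an existing variable:

/* dump the saved and current buffer contents so the modified byte(s) stand out */
print_hex_dump(KERN_WARNING, "mmp before: ", DUMP_PREFIX_OFFSET,
	       16, 1, saved_buf, bh->b_size, true);
print_hex_dump(KERN_WARNING, "mmp after:  ", DUMP_PREFIX_OFFSET,
	       16, 1, bh->b_data, bh->b_size, true);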

Comment by Dongyang Li [ 28/Aug/18 ]

I've updated the debug patch to print the hexdump; Mahmoud, could you give that a try?

It looks like you are using mq as the scheduler? It's just a stab in the dark, but can you try it with mq disabled?

Comment by Mahmoud Hanafi [ 29/Aug/18 ]

dm20.hexdump

 

Attaching hexdump for one of the devices.

I was running deadline; I changed it to noop, with no difference in the MMP failures.

Comment by Dongyang Li [ 30/Aug/18 ]

OK, I've made a stupid mistake in the debug patch. Mahmoud, can you rerun it using the latest patch set 3? My apologies.

Comment by Mahmoud Hanafi [ 31/Aug/18 ]

I ran with the latest patch set 3. I reproduced the PI error, but there was no output from the debug patch.

[  988.391944] mlx5_3:mlx5_poll_one:671:(pid 0): CQN: 0xc05 Got SIGERR on key: 0x80009d6b err_type 0 err_offset 207 expected 24f1 actual 2af1
[  988.430812] PI error found type 0 at sector 1337928 expected 24f1 vs actual 2af1
[  988.453087] blk_update_request: I/O error, dev sdam, sector 20150568
[  988.472382] blk_update_request: I/O error, dev dm-18, sector 20150568
[  988.491746] Buffer I/O error on dev dm-18, logical block 2518821, lost async page write
[  996.218102] mlx5_2:mlx5_poll_one:671:(pid 0): CQN: 0x405 Got SIGERR on key: 0x80003110 err_type 0 err_offset 207 expected 9377 actual 9977
[  996.260232] PI error found type 0 at sector 1337928 expected 9377 vs actual 9977
[  996.282509] blk_update_request: I/O error, dev sdu, sector 20150568
[  996.301422] blk_update_request: I/O error, dev dm-9, sector 20150568
[  996.320524] Buffer I/O error on dev dm-9, logical block 2518821, lost async page write
Comment by Andreas Dilger [ 31/Aug/18 ]

Mahmoud, that indicates that the T10-PI layer detected some kind of corruption, but the block was not modified in memory. This would indicate corruption at some lower layer, though it is confusing why only the MMP block is affected.

One option would be to add a similar hex dump at the point where the error is being reported after the checksum failure, to see if the data is different somehow? It also makes sense to print out the buffer address, to see if there is a copy of the page being used or something, possibly being caused by the device mapper layer.

Are you using software RAID or similar on this system, or is DM only in use for multipath? Have you tried to disable multipath to see if the problems go away? The only other thing I can think of is to remove the REQ_SYNC | REQ_META | REQ_PRIO flags from submit_bh() one at a time to see if this makes a difference, as it might indicate where the problem is located. Removing the REQ_SYNC flag may cause legitimate failures if the system is very busy, since the MMP thread would only be waiting on the 5s MMP update interval and not the actual write completion.

At this point the problem looks to be outside of the scope of Lustre/ext4 so I'm not sure what else we can do.

Comment by Andreas Dilger [ 31/Aug/18 ]

One thing that is puzzling is the error message "lost async page write", since the REQ_SYNC flag should be forcing the write to be synchronous? I wonder if this is an artifact of the DM Multipath code submitting sync writes asynchronously, so that it isn't blocked waiting for completion if one of the paths fails? That would lend more weight to trying to reproduce this problem without the DM Multipath driver involved. If the problem goes away, you can contact Red Hat about this issue, since MMP and ext4 exist in the upstream kernel and we do not modify MMP in recent releases so it should be reproducible without Lustre (given a sufficiently similar IO workload).

Comment by Mahmoud Hanafi [ 31/Aug/18 ]

We are only using DM. I will test without DM.

 

If the write fails as logged, why doesn't the MMP update log an error?

Comment by Mahmoud Hanafi [ 01/Sep/18 ]

I ran with the OSTs mounted directly on the slave devices, disabled multipath, and flushed all the paths. It still got an error.

 [ 7087.139668] mlx5_2:mlx5_poll_one:671:(pid 0): CQN: 0x405 Got SIGERR on key: 0x8000c273 err_type 0 err_offset 207 expected 948f actual 9a8f
[ 7087.200807] PI error found type 0 at sector 1337928 expected 948f vs actual 9a8f
[ 7087.223066] blk_update_request: I/O error, dev sdac, sector 20150568
[ 7087.242167] Buffer I/O error on dev sdac, logical block 2518821, lost async page write

 
Comment by Andreas Dilger [ 01/Sep/18 ]

It is also confusing to me why there is no "Error writing to MMP block" message being printed in this case, since the write error should be propagated up to the caller with REQ_SYNC. It makes me start to wonder if this block write is being generated somewhere else in the code, and only the MMP code is overwriting the same block in place?

As mentioned previously, it might help to hexdump the MMP block contents in the low-level code, and print out the address of the buffer being written, so that we can see if it is the same page as was submitted by write_mmp_block() or some other copy.

Comment by Dongyang Li [ 04/Sep/18 ]

Maybe we can figure out who submitted the write to the block, by using trace-cmd with something like this:

trace-cmd record -e block_bio_queue -f sector==20150568 -T

and then try to reproduce the I/O error, without multipath, to avoid generating too many messages.

Then the result can be viewed with trace-cmd report.

 

Comment by Mahmoud Hanafi [ 04/Sep/18 ]

I was able to reproduce this issue on the Lustre kernel with ext4, and on vanilla CentOS 7 (3.10.0-862.11.6.el7.x86_64) with ext4.
I was also able to capture the error using trace-cmd.

This was on the 3.10.0-862.11.6.el7.x86_64 kernel.

Error reported at the console

[ 5246.256578] mlx5_2:mlx5_poll_one:671:(pid 0): CQN: 0x405 Got SIGERR on key: 0x800015f5 err_type 0 err_offset 207 expected 12a8 actual 18a8
[ 5246.294172] PI error found type 0 at sector 12528 expected 12a8 vs actual 18a8
[ 5246.325235] blk_update_request: I/O error, dev sdq, sector 75048
[ 5246.359541] blk_update_request: I/O error, dev dm-14, sector 75048
[ 5246.378121] Buffer I/O error on dev dm-14, logical block 9381, lost async page write

DM-14 is 253,14 and from the trace-cmd report we have

=> ret_from_fork_nospec_begin (ffffffff8b7255dd)
  kworker/u72:10-11060 [001]  5246.065529: block_bio_queue:      253,14 W 75048 + 8 [kworker/u72:10]
  kworker/u72:10-11060 [001]  5246.065533: kernel_stack:         <stack trace>
=> _submit_bh (ffffffff8b2573d7)
=> __block_write_full_page (ffffffff8b257652)
=> block_write_full_page (ffffffff8b257a1e)
=> blkdev_writepage (ffffffff8b25d828)
=> __writepage (ffffffff8b1a1c49)
=> write_cache_pages (ffffffff8b1a2744)
=> generic_writepages (ffffffff8b1a2a1d)
=> blkdev_writepages (ffffffff8b25d7e5)
=> do_writepages (ffffffff8b1a3ac1)
=> __writeback_single_inode (ffffffff8b24cf00)
=> writeback_sb_inodes (ffffffff8b24d994)
=> __writeback_inodes_wb (ffffffff8b24dcff)
=> wb_writeback (ffffffff8b24e533)
=> bdi_writeback_workfn (ffffffff8b24eebb)
=> process_one_work (ffffffff8b0b613f)
=> worker_thread (ffffffff8b0b71d6)
=> kthread (ffffffff8b0bdf21)

=> ret_from_fork_nospec_begin (ffffffff8b7255dd)
     kmmpd-dm-24-6248  [007]  5246.324427: block_bio_queue:      253,24 WSM 75048 + 8 [kmmpd-dm-24]
     kmmpd-dm-24-6248  [007]  5246.324443: kernel_stack:         <stack trace>
=> _submit_bh (ffffffff8b2573d7)
=> submit_bh (ffffffff8b257420)
=> write_mmp_block (ffffffffc058fdb1)
=> kmmpd (ffffffffc0590028)
=> kthread (ffffffff8b0bdf21)

=> ret_from_fork_nospec_begin (ffffffff8b7255dd)
     kmmpd-dm-14-6186  [007]  5246.401415: block_bio_queue:      253,14 WSM 75048 + 8 [kmmpd-dm-14]
     kmmpd-dm-14-6186  [007]  5246.401419: kernel_stack:         <stack trace>
=> _submit_bh (ffffffff8b2573d7)
=> submit_bh (ffffffff8b257420)
=> write_mmp_block (ffffffffc058fdb1)
=> kmmpd (ffffffffc0590028)
=> kthread (ffffffff8b0bdf21)

I am also attaching the trace.dat file.

trace.dat

Comment by Dongyang Li [ 06/Sep/18 ]

This actually shows that blkdev writeback kicked in and wrote the mmp block, which explains where the "lost async page write" messages come from.

I think the mmp thread should have total control of the mmp block; we don't want writeback to kick in and mess with us. I also guess the checksum error happens because, while the block was submitted by writeback and under I/O, the mmp thread started to modify the contents of the mmp block, stepping on the toes of the writeback.

I've updated https://review.whamcloud.com/#/c/33038/ with patch set 6; can you give it a try?

The patch is simple, so you can also apply it to ext4 on vanilla CentOS 7 and test it from there.
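
For reference, judging from the patch subject that was later merged ("don't mark mmp buffer head dirty"), the core of the change amounts to dropping the mark_buffer_dirty() call from write_mmp_block(), so ordinary writeback never picks up the MMP buffer; a simplified sketch of the hunk, not the verbatim patch:

--- a/fs/ext4/mmp.c
+++ b/fs/ext4/mmp.c
@@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
 	ext4_mmp_csum_set(sb, mmp);
-	mark_buffer_dirty(bh);	/* writeback could submit a competing async write */
 	lock_buffer(bh);
 	bh->b_end_io = end_buffer_write_sync;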

Comment by Mahmoud Hanafi [ 06/Sep/18 ]

The patch works on vanilla CentOS 7 with ext4. I will test ldiskfs next.

Comment by Jay Lan (Inactive) [ 07/Sep/18 ]

Mahmoud said that the patch worked in our environment.
Before I cherry-pick the patch, are you sure you still want to name it "ldiskfs: add mmp debug patch"? Patch set 6 is no longer a debug patch.

Comment by Dongyang Li [ 07/Sep/18 ]

So it worked for ldiskfs as well?

Then I need to refresh the patch; we need to apply it to every supported distro, and I will push it upstream.

Comment by Mahmoud Hanafi [ 07/Sep/18 ]

This does fix the issue in ldiskfs. We can move forward with the patch.

Comment by Gerrit Updater [ 05/Oct/18 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33038/
Subject: LU-11187 ldiskfs: don't mark mmp buffer head dirty
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: dd02d32c978ad95c9e2a3703ad6be7511c257a4d

Comment by Peter Jones [ 06/Oct/18 ]

Landed for 2.12

Comment by Gerrit Updater [ 10/Oct/18 ]

Minh Diep (mdiep@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/33336
Subject: LU-11187 ldiskfs: don't mark mmp buffer head dirty
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: d11dd446facea523803d4767b69c799286ef01f4

Comment by Gerrit Updater [ 16/Jan/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/33336/
Subject: LU-11187 ldiskfs: don't mark mmp buffer head dirty
Project: fs/lustre-release
Branch: b2_10
Current Patch Set:
Commit: d63cd9f9795848c03c5882b76e971dfcd00433e6

Comment by Gerrit Updater [ 18/Jan/19 ]

Minh Diep (mdiep@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/34063
Subject: LU-11187 ldiskfs: update rhel7.6 series
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: 740c8b5b3b0c7419a53d84fd4d19ecffbbfd28f3

Comment by Gerrit Updater [ 22/Jan/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/34063/
Subject: LU-11187 ldiskfs: update rhel7.6 series
Project: fs/lustre-release
Branch: b2_10
Current Patch Set:
Commit: b5ad8a06a6b092e38800987debdba5b3e1ee8b29
