[LU-12503] LustreError: 19435:0:(vvp_io.c:1056:vvp_io_write_start()) LBUG Created: 02/Jul/19  Updated: 01/Jun/20  Resolved: 14/Dec/19

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.10.6, Lustre 2.12.2
Fix Version/s: Lustre 2.14.0, Lustre 2.12.4

Type: Bug Priority: Critical
Reporter: Saerda Halifu Assignee: Zhenyu Xu
Resolution: Fixed Votes: 0
Labels: None
Environment:

Server: PowerEdge R640 with 64 GB memory and Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz
OS: CentOS 7.5.1804
Lustre client: 2.12.2


Attachments: Text File vmcore-dmesg.txt    
Issue Links:
Related
is related to LU-11825 Remove LU-8964/pio feature & supporti... Resolved
Epic/Theme: NFS
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

We are running our Lustre file system on 1 MDS and 8 OSS nodes, with Lustre 2.10.6 on both the servers and the clients.

On one of the clients we export Lustre via NFSv3 and SMB. This had been working fine for more than a year, but recently that client started to crash due to a Lustre bug, as follows:

 

[ 2014.148312] LustreError: 19435:0:(vvp_io.c:1056:vvp_io_write_start()) ASSERTION( vio->vui_iocb->ki_pos == pos ) failed: ki_pos 1209601876 [1209597952, 1210056704)
[ 2014.148338] LustreError: 19435:0:(vvp_io.c:1056:vvp_io_write_start()) LBUG
[ 2014.148352] Pid: 19435, comm: nfsd 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019
[ 2014.148353] Call Trace:
[ 2014.148376] [<ffffffffc0a0d7cc>] libcfs_call_trace+0x8c/0xc0 [libcfs]
[ 2014.148389] [<ffffffffc0a0d87c>] lbug_with_loc+0x4c/0xa0 [libcfs]
[ 2014.148394] [<ffffffffc1061270>] vvp_io_write_start+0x790/0x820 [lustre]
[ 2014.148419] [<ffffffffc0cb5328>] cl_io_start+0x68/0x130 [obdclass]
[ 2014.148449] [<ffffffffc0cb74fc>] cl_io_loop+0xcc/0x1c0 [obdclass]
[ 2014.148462] [<ffffffffc101765b>] ll_file_io_generic+0x63b/0xcb0 [lustre]
[ 2014.148470] [<ffffffffc10182f2>] ll_file_aio_write+0x442/0x590 [lustre]
[ 2014.148476] [<ffffffff8d040e6b>] do_sync_readv_writev+0x7b/0xd0
[ 2014.148480] [<ffffffff8d042aae>] do_readv_writev+0xce/0x260
[ 2014.148482] [<ffffffff8d042cd5>] vfs_writev+0x35/0x60
[ 2014.148484] [<ffffffffc0699f90>] nfsd_vfs_write+0xc0/0x3a0 [nfsd]
[ 2014.148492] [<ffffffffc069c962>] nfsd_write+0x112/0x2a0 [nfsd]
[ 2014.148498] [<ffffffffc06a3070>] nfsd3_proc_write+0xc0/0x160 [nfsd]
[ 2014.148504] [<ffffffffc0694810>] nfsd_dispatch+0xe0/0x290 [nfsd]
[ 2014.148509] [<ffffffffc0610cf3>] svc_process_common+0x493/0x760 [sunrpc]
[ 2014.148523] [<ffffffffc06110c3>] svc_process+0x103/0x190 [sunrpc]
[ 2014.148531] [<ffffffffc069416f>] nfsd+0xdf/0x150 [nfsd]
[ 2014.148535] [<ffffffff8cec1da1>] kthread+0xd1/0xe0
[ 2014.148539] [<ffffffff8d575c1d>] ret_from_fork_nospec_begin+0x7/0x21
[ 2014.148543] [<ffffffffffffffff>] 0xffffffffffffffff
[ 2014.148551] Kernel panic - not syncing: LBUG
[ 2014.148561] CPU: 2 PID: 19435 Comm: nfsd Kdump: loaded Tainted: G OE ------------ 3.10.0-957.21.3.el7.x86_64 #1
[ 2014.148579] Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 1.4.8 05/21/2018
[ 2014.148592] Call Trace:
[ 2014.148603] [<ffffffff8d563107>] dump_stack+0x19/0x1b
[ 2014.148615] [<ffffffff8d55c810>] panic+0xe8/0x21f
[ 2014.148629] [<ffffffffc0a0d8cb>] lbug_with_loc+0x9b/0xa0 [libcfs]
[ 2014.148650] [<ffffffffc1061270>] vvp_io_write_start+0x790/0x820 [lustre]
[ 2014.148675] [<ffffffffc0cb3357>] ? cl_lock_request+0x67/0x1f0 [obdclass]
[ 2014.148699] [<ffffffffc0cb5328>] cl_io_start+0x68/0x130 [obdclass]
[ 2014.148722] [<ffffffffc0cb74fc>] cl_io_loop+0xcc/0x1c0 [obdclass]
[ 2014.148739] [<ffffffffc101765b>] ll_file_io_generic+0x63b/0xcb0 [lustre]
[ 2014.148753] [<ffffffff8ced3250>] ? check_preempt_curr+0x80/0xa0
[ 2014.148771] [<ffffffffc10182f2>] ll_file_aio_write+0x442/0x590 [lustre]
[ 2014.148784] [<ffffffff8d040e6b>] do_sync_readv_writev+0x7b/0xd0
[ 2014.148914] [<ffffffff8d042aae>] do_readv_writev+0xce/0x260
[ 2014.149049] [<ffffffffc1017eb0>] ? ll_file_splice_read+0x1e0/0x1e0 [lustre]
[ 2014.149185] [<ffffffffc1018440>] ? ll_file_aio_write+0x590/0x590 [lustre]
[ 2014.149318] [<ffffffff8d11e003>] ? ima_get_action+0x23/0x30
[ 2014.149447] [<ffffffff8d11d51e>] ? process_measurement+0x8e/0x250
[ 2014.149578] [<ffffffff8d03f087>] ? do_dentry_open+0x1e7/0x2e0
[ 2014.149708] [<ffffffff8d042cd5>] vfs_writev+0x35/0x60
[ 2014.149841] [<ffffffffc0699f90>] nfsd_vfs_write+0xc0/0x3a0 [nfsd]
[ 2014.149975] [<ffffffffc069c962>] nfsd_write+0x112/0x2a0 [nfsd]
[ 2014.150109] [<ffffffffc06a3070>] nfsd3_proc_write+0xc0/0x160 [nfsd]
[ 2014.150243] [<ffffffffc0694810>] nfsd_dispatch+0xe0/0x290 [nfsd]
[ 2014.150381] [<ffffffffc0610cf3>] svc_process_common+0x493/0x760 [sunrpc]
[ 2014.150489] LustreError: 19462:0:(vvp_io.c:1056:vvp_io_write_start()) ASSERTION( vio->vui_iocb->ki_pos == pos ) failed: ki_pos 1211699028 [1211695104, 1212153856)
[ 2014.150491] LustreError: 19462:0:(vvp_io.c:1056:vvp_io_write_start()) LBUG
[ 2014.150492] Pid: 19462, comm: nfsd 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019
[ 2014.150492] Call Trace:
[ 2014.150514] [<ffffffffc0a0d7cc>] libcfs_call_trace+0x8c/0xc0 [libcfs]
[ 2014.150519] [<ffffffffc0a0d87c>] lbug_with_loc+0x4c/0xa0 [libcfs]
[ 2014.150533] [<ffffffffc1061270>] vvp_io_write_start+0x790/0x820 [lustre]
[ 2014.150551] [<ffffffffc0cb5328>] cl_io_start+0x68/0x130 [obdclass]
[ 2014.150564] [<ffffffffc0cb74fc>] cl_io_loop+0xcc/0x1c0 [obdclass]
[ 2014.150571] [<ffffffffc101765b>] ll_file_io_generic+0x63b/0xcb0 [lustre]
[ 2014.150577] [<ffffffffc10182f2>] ll_file_aio_write+0x442/0x590 [lustre]
[ 2014.150580] [<ffffffff8d040e6b>] do_sync_readv_writev+0x7b/0xd0
[ 2014.150581] [<ffffffff8d042aae>] do_readv_writev+0xce/0x260
[ 2014.150583] [<ffffffff8d042cd5>] vfs_writev+0x35/0x60
[ 2014.150589] [<ffffffffc0699f90>] nfsd_vfs_write+0xc0/0x3a0 [nfsd]
[ 2014.150594] [<ffffffffc069c962>] nfsd_write+0x112/0x2a0 [nfsd]
[ 2014.150599] [<ffffffffc06a3070>] nfsd3_proc_write+0xc0/0x160 [nfsd]
[ 2014.150603] [<ffffffffc0694810>] nfsd_dispatch+0xe0/0x290 [nfsd]
[ 2014.150613] [<ffffffffc0610cf3>] svc_process_common+0x493/0x760 [sunrpc]
[ 2014.150621] [<ffffffffc06110c3>] svc_process+0x103/0x190 [sunrpc]
[ 2014.150625] [<ffffffffc069416f>] nfsd+0xdf/0x150 [nfsd]
[ 2014.150627] [<ffffffff8cec1da1>] kthread+0xd1/0xe0
[ 2014.150630] [<ffffffff8d575c1d>] ret_from_fork_nospec_begin+0x7/0x21
[ 2014.150634] [<ffffffffffffffff>] 0xffffffffffffffff
[ 2014.152515] LustreError: 19480:0:(vvp_io.c:1056:vvp_io_write_start()) ASSERTION( vio->vui_iocb->ki_pos == pos ) failed: ki_pos 1213796180 [1213792256, 1214251008)
[ 2014.152517] LustreError: 19480:0:(vvp_io.c:1056:vvp_io_write_start()) LBUG
[ 2014.152518] Pid: 19480, comm: nfsd 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019
[ 2014.152519] Call Trace:
[ 2014.152542] [<ffffffffc0a0d7cc>] libcfs_call_trace+0x8c/0xc0 [libcfs]
[ 2014.152548] [<ffffffffc0a0d87c>] lbug_with_loc+0x4c/0xa0 [libcfs]
[ 2014.152569] [<ffffffffc1061270>] vvp_io_write_start+0x790/0x820 [lustre]
[ 2014.152593] [<ffffffffc0cb5328>] cl_io_start+0x68/0x130 [obdclass]
[ 2014.152610] [<ffffffffc0cb74fc>] cl_io_loop+0xcc/0x1c0 [obdclass]
[ 2014.152620] [<ffffffffc101765b>] ll_file_io_generic+0x63b/0xcb0 [lustre]
[ 2014.152630] [<ffffffffc10182f2>] ll_file_aio_write+0x442/0x590 [lustre]
[ 2014.152632] [<ffffffff8d040e6b>] do_sync_readv_writev+0x7b/0xd0
[ 2014.152634] [<ffffffff8d042aae>] do_readv_writev+0xce/0x260
[ 2014.152635] [<ffffffff8d042cd5>] vfs_writev+0x35/0x60
[ 2014.152643] [<ffffffffc0699f90>] nfsd_vfs_write+0xc0/0x3a0 [nfsd]
[ 2014.152649] [<ffffffffc069c962>] nfsd_write+0x112/0x2a0 [nfsd]
[ 2014.152655] [<ffffffffc06a3070>] nfsd3_proc_write+0xc0/0x160 [nfsd]
[ 2014.152661] [<ffffffffc0694810>] nfsd_dispatch+0xe0/0x290 [nfsd]
[ 2014.152671] [<ffffffffc0610cf3>] svc_process_common+0x493/0x760 [sunrpc]
[ 2014.152679] [<ffffffffc06110c3>] svc_process+0x103/0x190 [sunrpc]
[ 2014.152685] [<ffffffffc069416f>] nfsd+0xdf/0x150 [nfsd]
[ 2014.152687] [<ffffffff8cec1da1>] kthread+0xd1/0xe0
[ 2014.152689] [<ffffffff8d575c1d>] ret_from_fork_nospec_begin+0x7/0x21
[ 2014.152693] [<ffffffffffffffff>] 0xffffffffffffffff
[ 2014.157437] [<ffffffffc06110c3>] svc_process+0x103/0x190 [sunrpc]
[ 2014.157572] [<ffffffffc069416f>] nfsd+0xdf/0x150 [nfsd]
[ 2014.157704] [<ffffffffc0694090>] ? nfsd_destroy+0x80/0x80 [nfsd]
[ 2014.157835] [<ffffffff8cec1da1>] kthread+0xd1/0xe0
[ 2014.157963] [<ffffffff8cec1cd0>] ? insert_kthread_work+0x40/0x40
[ 2014.158094] [<ffffffff8d575c1d>] ret_from_fork_nospec_begin+0x7/0x21
[ 2014.158224] [<ffffffff8cec1cd0>] ? insert_kthread_work+0x40/0x40

 

We have updated that client to Lustre 2.12.2, but it did not help.



 Comments   
Comment by Saerda Halifu [ 05/Jul/19 ]

Dear Zhenyu Xu, 

 

Thanks for looking into this issue. Would it be possible to get a patch for this bug before the 2.13.0 release?

Is there any way to avoid it in the meantime? I would also very much like to know what is actually causing this bug.

 

Best Regards

 

Saerda  

Comment by Zhenyu Xu [ 12/Jul/19 ]

Observations so far:

thread   iocb->ki_pos    io->u.ci_rw.crw_pos   write count
19435    0x4819,0F54     0x4819,0000           64K (0x1,000)
19462    0x4839,0F54     0x4839,0000           64K
19480    0x4859,0F54     0x4859,0000           64K

I don't know why ki_pos is not page-aligned after being updated (it should be, IMO).

The iocb's ki_pos can get updated during __generic_file_write_iter() in vvp_io_write_start(), and later gets updated again from crw_pos. This code was changed in commit 2b0a34fe43bf4fce5560af61a45e5393c96070a9; before that commit, ll_file_io_generic() used its own iocb and pos, and only updated the outer kiocb's ki_pos by the number of bytes written once the IO loop had finished. I am not sure whether that change is responsible (i.e. the root cause has not been found), but my hunch is that using variables local to the function should avoid the complexity.
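
For reference, thread 19435's hex values above match the decimal values in the assertion message: 0x4819,0000 = 1209597952 and 0x4819,0F54 = 1209601876, so ki_pos sits 0xF54 (3924) bytes past crw_pos. The sketch below is a minimal, stand-alone illustration of the "local variables" idea described above; the struct and function names are invented stand-ins, not the actual Lustre code:

#include <stdio.h>

/*
 * Toy stand-ins for the kernel/Lustre structures discussed above;
 * illustrative only, they do not match the real definitions.
 */
struct toy_kiocb { long long ki_pos; };
struct toy_rw    { long long crw_pos; long long crw_count; };

/*
 * Sketch of the "local pos" approach: the IO loop advances a function-local
 * position, keeps crw_pos derived from that same local value, and only folds
 * the total written back into the caller's kiocb after the loop, so ki_pos
 * cannot drift away from crw_pos mid-flight.
 */
static long long io_loop_with_local_pos(struct toy_kiocb *iocb,
                                        struct toy_rw *rw,
                                        long long chunk, int passes)
{
        long long pos = iocb->ki_pos;   /* function-local copy of ki_pos */
        long long written = 0;
        int i;

        for (i = 0; i < passes; i++) {
                rw->crw_pos = pos;      /* the cl_io side sees the same value */
                rw->crw_count = chunk;
                /* ... one chunk of the write would be submitted here ... */
                pos += chunk;
                written += chunk;
        }
        iocb->ki_pos += written;        /* single update, only after the loop */
        return written;
}

int main(void)
{
        struct toy_kiocb iocb = { 0x48190000LL };       /* crw_pos from the log */
        struct toy_rw rw = { 0, 0 };
        long long written = io_loop_with_local_pos(&iocb, &rw, 0x10000, 7);

        printf("wrote 0x%llx bytes, ki_pos now 0x%llx\n", written, iocb.ki_pos);
        return 0;
}

With this shape the caller's ki_pos is untouched while the loop runs; whether that is the actual root cause here is exactly what the debug patch is meant to confirm.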

Comment by Peter Jones [ 14/Jul/19 ]

bobijam

This issue was initially seen on 2.10.6, which pre-dates the LU-11825 change. If halifu has an easy reproducer, could a diagnostic patch be provided to help spotlight the root cause?

Peter

Comment by Gerrit Updater [ 15/Jul/19 ]

Bobi Jam (bobijam@hotmail.com) uploaded a new patch: https://review.whamcloud.com/35516
Subject: LU-12503 llite: debug file pos mimatch
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 53db2889ac057a493601e1fdec9677786e19fdf0

Comment by Zhenyu Xu [ 15/Jul/19 ]

Hi Saerda Halifu,

The debug patch (https://review.whamcloud.com/35516) is for the client only. Apply it to one client node and see whether this issue hits again (hoping it is easy to reproduce); if it does, collect the logs and upload them here. Thank you.

Comment by Saerda Halifu [ 15/Jul/19 ]

Hi Zhenyu Xu,

 

Thanks, I will apply it, and will let you know.

Best Regards

 

Saerda

Comment by Peter Jones [ 22/Jul/19 ]

halifu, has the frequency of the crash changed with the debug patch applied?

Comment by Peter Jones [ 31/Jul/19 ]

halifu, any news?

Comment by Saerda Halifu [ 01/Aug/19 ]

Hi Peter, 

 

Sorry for the late answer; I was away on vacation.

I downloaded lustre-2.12.1-1.src.rpm, unpacked it, and replaced the vvp_io.c file with the new one from the patch.

I was able to create a new source RPM, but I am not able to rebuild the Lustre client RPMs from this new source.

When I run:

rpmbuild --rebuild --without servers lustre-2.12.1-1.el7.src.rpm

I get the following error message:

Making all in .
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c: In function 'vvp_io_read_start':
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c:783:8: error: 'struct cl_io' has no member named 'ci_async_readahead'
if (io->ci_async_readahead) {
^
In file included from /root/rpmbuild/BUILD/lustre-2.12.1/libcfs/include/libcfs/libcfs.h:43:0,
from /root/rpmbuild/BUILD/lustre-2.12.1/lustre/include/lustre_lib.h:50,
from /root/rpmbuild/BUILD/lustre-2.12.1/lustre/include/obd.h:41,
from /root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c:41:
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c: In function 'vvp_io_write_start':
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c:1074:25: error: 'struct ll_sb_info' has no member named 'll_fsname'
ll_i2sbi(inode)->ll_fsname,
^
/root/rpmbuild/BUILD/lustre-2.12.1/libcfs/include/libcfs/libcfs_debug.h:158:55: note: in definition of macro '__CDEBUG'
libcfs_debug_msg(&msgdata, format, ## __VA_ARGS__); \
^
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c:1072:3: note: in expansion of macro 'CDEBUG'
CDEBUG(D_INODE,
^
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c: In function 'vvp_io_init':
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c:1516:26: error: 'struct ll_sb_info' has no member named 'll_fsname'
ll_i2sbi(inode)->ll_fsname,
^
/root/rpmbuild/BUILD/lustre-2.12.1/libcfs/include/libcfs/libcfs_debug.h:158:55: note: in definition of macro '__CDEBUG'
libcfs_debug_msg(&msgdata, format, ## __VA_ARGS__); \
^
/root/rpmbuild/BUILD/lustre-2.12.1/libcfs/include/libcfs/libcfs_debug.h:189:37: note: in expansion of macro 'CDEBUG_LIMIT'
#define CERROR(format, ...) CDEBUG_LIMIT(D_ERROR, format, ## _VA_ARGS_)
^
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c:1515:4: note: in expansion of macro 'CERROR'
CERROR("%s: refresh file layout " DFID " error %d.\n",
^
/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.c: At top level:
cc1: error: unrecognized command line option "-Wno-format-truncation" [-Werror]
cc1: all warnings being treated as errors
make[6]: *** [/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite/vvp_io.o] Error 1
make[6]: *** Waiting for unfinished jobs....
make[5]: *** [/root/rpmbuild/BUILD/lustre-2.12.1/lustre/llite] Error 2
make[5]: *** Waiting for unfinished jobs....
make[4]: *** [/root/rpmbuild/BUILD/lustre-2.12.1/lustre] Error 2
make[3]: *** [_module_/root/rpmbuild/BUILD/lustre-2.12.1] Error 2
make[2]: *** [modules] Error 2
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.MLq9qb (%build)

 

To me it looks like the build fails before it even gets to the changed code.

Am I doing something wrong?

 

Best Regards

Saerda

Comment by Peter Jones [ 01/Aug/19 ]

I would have suggested that it would be simpler to just use the build products from the Jenkins build - https://build.whamcloud.com/job/lustre-reviews/67091/arch=x86_64,build_type=client,distro=el7.6,ib_stack=inkernel/artifact/artifacts/ - but I see that you are using RHEL 7.5, so perhaps building from the SRPMs in the same location would be easier. Alternatively, you could temporarily move just one client to a kernel version that allows you to use the build products (then reset it back to what you want after running the test).

Comment by Saerda Halifu [ 05/Aug/19 ]

Hi Peter, 

Thanks for the tips. I managed to get the right build and installed it. The following RPMs are now installed on this Lustre client:

rpm -qa |grep lustre
kmod-lustre-client-2.12.56_85_g60392e6-1.el7.x86_64
lustre-client-2.12.56_85_g60392e6-1.el7.x86_64
kmod-lustre-client-tests-2.12.56_85_g60392e6-1.el7.x86_64

 

I have now started the NFS service on this Lustre client. I will see when/if it crashes; if it does, I will upload the dump.

 

Best Regards

Saerda

Comment by Peter Jones [ 11/Aug/19 ]

How frequently was the crash occurring before the debug patch was applied?

Comment by Gerrit Updater [ 11/Aug/19 ]

James Simmons (jsimmons@infradead.org) uploaded a new patch: https://review.whamcloud.com/35765
Subject: LU-12503 vvp_dev: increment *pos in .next
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: d51de92e086e88c97d1e5d671154188bd9b36ffb

Comment by James A Simmons [ 11/Aug/19 ]

Just wondering if this is a side effect of a bug that was found and fixed upstream. Can you give it a try?

Comment by Saerda Halifu [ 12/Aug/19 ]

Hi,

My server has managed to run without problems for the last 6 days since I applied the debug patch.

Before that, the server used to crash quite often, sometimes a couple of hours after I started the NFS export, sometimes after a day or so.

I also went through the logs and didn't see anything suspicious.

I think this might have something to do with user activity, for example how users are reading and writing data to the NFS exports.

 

Best Regards

 

Saerda

Comment by Peter Jones [ 12/Aug/19 ]

OK, well, considering that this issue has been seen on older versions too, I think that we should drop the priority from Blocker to Critical.

Comment by Saerda Halifu [ 15/Aug/19 ]

My server crashed today. I managed to upload the vmcore-dmesg.txt file.

Let me know if you need more information.

 

Comment by Gerrit Updater [ 21/Aug/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/35765/
Subject: LU-12503 vvp_dev: increment *pos in .next
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 02336a9a5d096dc9a603ed0e77e0c7cf7b41ffb3

Comment by Peter Jones [ 21/Aug/19 ]

OK, so James's patch has landed on master. Can we port it to b2_12 so that halifu can verify whether it is a fix? Or, simmonsja, does the latest crash info confirm that this is indeed the issue?

Comment by Patrick Farrell (Inactive) [ 21/Aug/19 ]

Ah, right.

James, I'm almost certain the code your patch touches is only called in the dump page cache path, which is strictly a special, extreme debug path, and there's basically no way it would be invoked here.  Have I missed something?  Otherwise it can't be the fix for this.  (It's still correct and useful, it's just not a fix for this.)
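
For background, the class of bug that an "increment *pos in .next" patch addresses is an iterator whose .next callback returns the next entry without advancing *pos, so the caller never makes progress. Below is a minimal, stand-alone toy analogue (illustrative only; the names are invented, and this is not the Lustre or kernel seq_file code):

#include <stdio.h>

/* Toy table standing in for whatever entries an iterator walks. */
static const char *entries[] = { "page-0", "page-1", "page-2", NULL };

/*
 * A .next-style callback: return the next entry and, crucially, advance
 * *pos. If the increment were missing, the loop in main() would print the
 * first entry forever instead of terminating.
 */
static const char *toy_next(long long *pos)
{
        ++*pos;                 /* the step such a patch ensures happens */
        return entries[*pos];
}

int main(void)
{
        long long pos = 0;
        const char *e = entries[pos];

        while (e != NULL) {
                printf("%lld: %s\n", pos, e);
                e = toy_next(&pos);
        }
        return 0;
}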

Comment by James A Simmons [ 21/Aug/19 ]

I agree with Patrick. It's a fix, but not one that handles this problem. I was hoping it might address this issue.

Comment by Gerrit Updater [ 02/Sep/19 ]

Bobi Jam (bobijam@hotmail.com) uploaded a new patch: https://review.whamcloud.com/36021
Subject: LU-12503 llite: debug file pos mimatch
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: bf11eb0f6c4ead18897b14d3ff2b8ef09a72f97a

Comment by Peter Jones [ 02/Sep/19 ]

bobijam

It looks like you have turned your original debug patch into a fix and now have added a new debug patch. Are you hoping for halifu to use both of these?

Peter

Comment by Zhenyu Xu [ 03/Sep/19 ]

Yes, I hope the fix patch can handle the issue, and I added the debug patch to capture info in case that's not the right fix for this issue.

Comment by Gerrit Updater [ 14/Dec/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/36021/
Subject: LU-12503 llite: file write pos mimatch
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 1d2aa1513dc4e65813ad0bea138966a55244dbde

Comment by Peter Jones [ 14/Dec/19 ]

Landed for 2.14

Comment by Gerrit Updater [ 16/Dec/19 ]

Minh Diep (mdiep@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/37034
Subject: LU-12503 llite: file write pos mimatch
Project: fs/lustre-release
Branch: b2_12
Current Patch Set: 1
Commit: 7e79fc11ed73b291b8e7a4805b3f1144d71ff83f

Comment by Gerrit Updater [ 16/Dec/19 ]

Minh Diep (mdiep@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/37035
Subject: LU-12503 vvp_dev: increment *pos in .next
Project: fs/lustre-release
Branch: b2_12
Current Patch Set: 1
Commit: cb2f58387f30271074dac0fcf021b5db157022e2

Comment by Gerrit Updater [ 20/Dec/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/37034/
Subject: LU-12503 llite: file write pos mimatch
Project: fs/lustre-release
Branch: b2_12
Current Patch Set:
Commit: 322cd140132e821c63b41b7da9ddb9f519b52194

Comment by Gerrit Updater [ 20/Dec/19 ]

Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/37035/
Subject: LU-12503 vvp_dev: increment *pos in .next
Project: fs/lustre-release
Branch: b2_12
Current Patch Set:
Commit: 589ba9b62c0e8b3a93145dd44bbbd92a26d6da8b

Comment by Gerrit Updater [ 14/Mar/20 ]

Bobi Jam (bobijam@hotmail.com) uploaded a new patch: https://review.whamcloud.com/37921
Subject: LU-12503 llite: file write pos mimatch
Project: fs/lustre-release
Branch: b2_10
Current Patch Set: 1
Commit: d5a087e31b1e1cf6812640d476faae8774ba0d66
