[LU-1320] EIO on read shortly after file written Created: 12/Apr/12  Updated: 02/May/12  Resolved: 30/Apr/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.1.0
Fix Version/s: Lustre 2.3.0, Lustre 2.1.2

Type: Bug Priority: Major
Reporter: Christopher Morrone Assignee: Jinshan Xiong (Inactive)
Resolution: Fixed Votes: 0
Labels: paj
Environment:

Lustre 2.1.0-24chaos on both clients and servers. http://github.com/chaos/lustre


Severity: 3
Rank (Obsolete): 4643

 Description   

Some of our most important users are seeing read() return EIO quite frequently, which completely ruins their job run.

The application uses an IO library to write a file to lustre. After writing, it closes the file. It then immediately reopens the file, reads the contents again and calculates a checksum to verify that the data is correct.

During the read phase, it will more-or-less randomly get an EIO on read and abort the entire job.

Both the write and read are performed on the same client, by the same thread. There are usually 16 threads, all writing and reading their own files.

There are no console messages on the client that give any clues to where the problem might be in lustre. There do not appear to be any evictions that correlate with the read error. A second read of the file will succeed and the checksum is correct, so this is a transient problem.

I am diving into the CLIO code, but it is all new to me so I could use some tips for where to start my debugging. Perhaps I should start with enabling vfstrace and rpctrace, and adding code to dump the lustre log when vvp_io_read_page() returns EIO...

This is only reproducible on the secure network, though, so code changes are going to be difficult to implement.



 Comments   
Comment by Jinshan Xiong (Inactive) [ 13/Apr/12 ]

Are they using direct IO? From what you described, it sounds like a previous async write ran into a problem, so AS_EIO was set and then seen by the following read.

I think you can get more information by enabling VFSTRACE, but if possible, please check whether write() returned an error. If you can update the IO library, an fsync() before closing the file will help you discover this error more easily.

For the code path of read, here is a summary:

ll_file_read -> cl_io_loop(CIT_READ) -> vvp_io_read_start -> generic_file_aio_read(in kernel) -> ll_readpage -> vvp_io_read_page -> ll_readahead -> ... -> cl_io_submit_rw -> osc_io_submit -> .. network transfer .. -> wait for the page to become UPTODATE -> .. return ..

brw_interpret -> osc_completion -> osc_page_completion_read -> vvp_page_completion_read -> unlock_page and wake up the reading process

Comment by Christopher Morrone [ 13/Apr/12 ]

No, they are not using direct IO.

I do not believe that there were any real errors in the write stage, because after hitting an error on read, if the file is closed, and then opened and read for checksumming again the checksums pass just fine. So the data does seem to be safe in lustre.

At the moment it appears that there is just a transient read failure.

Comment by Christopher Morrone [ 13/Apr/12 ]

And thanks for the code path summary!

Comment by Jinshan Xiong (Inactive) [ 13/Apr/12 ]

I see. Can you please also enable D_PAGE so that we can see read page state?

Comment by Christopher Morrone [ 13/Apr/12 ]

Sure. I need to figure out a logging scheme to capture the times we are interested in...maybe I'll use the logging server with a large file size, and hope that I'm alerted to a failure before the logs rotate...

Comment by Christopher Morrone [ 13/Apr/12 ]

My description might have been off for the app's behavior. It is pretty complex and multi-layer so I can't say for certain, but the best guess of the person looking into the app and library is that it does something like:

1) open
2) perform many large writes (the library does caching, so hopefully 1MB at a time)
3) seek to an "index" location in the file, we think at the end, to record file bookkeeping information
4) seek to beginning of the file
5) read repeatedly in 1k sizes to retrieve and checksum the data
6) close file

So there is probably NOT a close() call between the write and read phases, and we are not yet aware of an fsync.

This could all be a little inaccurate, but it's the best description that I have so far.

Comment by Christopher Morrone [ 16/Apr/12 ]

I think I have a better idea of the user's IO pattern thanks to the vfstrace debug level. It goes a little like this.

The file is written initially like so:

create file
write 0 - 1000000
write 1000000 - 2000000
(repeat writing sequentially by 1000000 bytes at a time)
write 20000000 - 21975153
fsync
seek to 21975153
write 21975153 - 22171493
fsync

Now I think the file is read back and checksummed, and then the end of the file rewritten like the following (note that the file is never closed):

seek to 22171238 (this is 255 bytes before the EOF)
read 22171238 - 23171238 (yes, they ask for 1000000 bytes, even though only 255 remain...but that's not necessarily a problem)
read 22171493 - 23171238 (they got a partial read, so read again and presumably got EOF)
seek to 0
read 0 - 1000000
seek 1000000
read 1000000 - 2000000
(repeat this seek/read 1000000 bytes sequentially)
seek 22000000
read 22000000 - 23000000
read 22171493 - 23000000 (get EOF)
seek to 22171493
seek to 22000000
write 22000000 - 22171564 (note that file grows by 71 bytes here)
fsync

Now they read back the file AGAIN to verify the checksum (no, this isn't the best way to do it, but I can't change the app). They start at the end of the file, where some bookkeeping is stored (and presumably the checksum):

seek to 22171309 (-255 bytes from EOF)
read 22171309 - 23171309
read 22171564 - 23171309 (get EOF)
seek 22171564 (go to EOF)
seek 0 (go to beginning of file)
read 0 - 1000000

That final read at the beginning of the file is the one that we believe returns -1, and EIO.

That read at the beginning of the file does seem to kick off a storm of cl_page_find0() calls. It goes a little like this:

cl_io_rw_init()) io range: 0 [0, 1000000) 0 0
lov_io_rw_iter_init()) stripe: 0 chunk: [0, 1000000) 1000000
lov_io_iter_init()) shrink: 0 [0, 1000000)
ccc_io_one_lock_index()) lock: 1 [0, 244]
vvp_io_read_start()) read: -> [0, 1000000)
cl_page_find0()) 0@[FID X] address 0 1
cl_page_find0()) 0@[FID Y] address address 1
(next two lines of 1@, then 2@, and so on for quite some time, presumably one for each OST, which is why FID X and FID Y repeat)

NOTE that the following is from a different log file, so the page numbers go larger than the above file sizes would imply. This second run created a larger file.

Is it significant that vmpage->private is NULL for the one FID? At page 256, the pages start having a real address in the vmpage->private pointer for FID X.

The cl_page_find0 calls fall out of lockstep later on. Sometimes there is a long sequence of page numbers for one FID, sometimes the other starts reappearing for a bit.

The cl_page_find0 continues up until page 7186 for FID X, but for FID Y it only appears to go up to 3583.

Actually, there appears to be a third "FID" mixed in there as well. I am not sure what to make of that.

But with the very large number of pages that are being requested after the user asks for the first 1000000 bytes, I am going to suspect that readahead is the problem here.

Note that the application completely ignores the return code and continues to read like so:

read 1000000 - 2000000
read 2000000 - 3000000
etc.

As far as I know, they don't see read errors on the following 1000000-byte reads.

When they finish that read, the code now seeks back to 0 and attempts the whole-file read a second time. This time it works and the checksum validates.

Comment by Jinshan Xiong (Inactive) [ 17/Apr/12 ]

If your file is striped, this is why you see so many different FIDs. Generally speaking, you can only see a NULL vmpage->private at ll_prepare_write() and ll_readpage(); otherwise private must point to the cl_page data structure.

I can't get any clue from the file access pattern because it looks okay to me. I suppose you didn't see any occurrence of -5 in the log, so the first thing is to make sure where the -EIO came from.

I'm going to work out a new patch to add more debug info - this won't slow down your performance because the messages should only be printed on the error path.

Comment by Jinshan Xiong (Inactive) [ 17/Apr/12 ]

Please also tell me which kernel it's running.

Before I submit the patch, can you please take a look at the log again to check if it has errors at the following locations:

1. vvp_page_completion_read(): CL_PAGE_HEADER(D_PAGE, env, page, "completing READ with %d\n", ioret);
2. ll_cl_init():         CDEBUG(D_VFSTRACE, "%lu@"DFID" -> %d %p %p\n",
                                vmpage->index, PFID(lu_object_fid(&clob->co_lu)), result,
                                env, io);

Also, I'm curious how you know -EIO is returned, since the application doesn't check return codes.

Comment by Christopher Morrone [ 17/Apr/12 ]

So far I haven't been able to find the -5 in the logs. I'll check the two places that you asked about.

We did get the library developer to add a check of read() return codes, so now it prints out a message. It also opens a non-existent file in lustre to give me a marker in the lustre logs for where they think they got an error code.

But the application does not apparently abort operation at that point, it just adds those diagnostics.

Comment by Christopher Morrone [ 17/Apr/12 ]

Also note that at LLNL our default stripe count is 2. I don't think this app/library changes that but I will check. So that is why I was confused about seeing what looked like more than 2 FIDs.

Comment by Christopher Morrone [ 17/Apr/12 ]

Kernel is from RHEL6.2 and patched by LLNL. 2.6.32-220.7.1.7chaos.

Comment by Christopher Morrone [ 17/Apr/12 ]

Jinshan, I do not see any errors reported by vvp_page_completion_read or ll_cl_init.

Comment by Christopher Morrone [ 17/Apr/12 ]

I disabled read-ahead by setting max_read_ahead_mb to 0. Our very reliable reproducer no longer appears to be hitting any problems. So I think we can safely say that read-ahead is involved.

We'll need to figure out where the error originates, like you had said.

Comment by Jinshan Xiong (Inactive) [ 17/Apr/12 ]

Did you see debug message printed for the 1st page at:

vvp_page_completion_read(): CL_PAGE_HEADER(D_PAGE, env, page, "completing READ with %d\n", ioret);

when the suspicious read happened?

It sounds like the kernel doesn't get an UPTODATE page after ll_readpage() is issued. I'm not sure how this is related to readahead, because the readahead window must be reset after it seeks to the beginning of the file. Anyway, please apply the following patch and check if we can discover something.

http://review.whamcloud.com/2564

Comment by Christopher Morrone [ 17/Apr/12 ]

It does not look to me like read-ahead is reset after a seek to the beginning of the file. The thread that seeks to the beginning of the file and performs a single read operation on the range 0-1000000 bytes kicks off a request for far more pages than requested (roughly 10,000 pages). Is there some mechanism other than read-ahead that would do that?

I will try your patch, but testing code on a production system isn't easy. That will take time. I can check for negative return codes from cl_io_read_page using systemtap relatively quickly.

Comment by Christopher Morrone [ 17/Apr/12 ]

Did you see debug message printed for the 1st page at:

No, I can't find that. I see the task in question print a long series of cl_page_find0 lines, followed by a long series of cl_req_page_add() lines.

There are certainly other threads printing the "completing READ" messages, but not ones that seem to be associated with the pages from this task.

Comment by Jinshan Xiong (Inactive) [ 17/Apr/12 ]

After the readahead window is reset, it will read the file range that the current read syscall covers - that's 245 pages in your case. I smell something wrong here. Let me check the code.

Comment by Christopher Morrone [ 17/Apr/12 ]

If my systemtap script is accurate, cl_io_read_page() does not return less than 0 at any time during the job run.

Comment by Christopher Morrone [ 17/Apr/12 ]

cl_io_submit_rw() does not return less than 0 either.

Comment by Jinshan Xiong (Inactive) [ 17/Apr/12 ]

I think I've found the problem, so you don't need to apply that patch. I'll post my fix soon.

Comment by Christopher Morrone [ 17/Apr/12 ]

Great!

Comment by Jinshan Xiong (Inactive) [ 18/Apr/12 ]

Please check patch set 2 at http://review.whamcloud.com/2564 for a fix.

Comment by Christopher Morrone [ 18/Apr/12 ]

Excellent job on this fix!

I installed the patch and ran the reproducer, and it does appear to have fixed the problem. The reproducer has made it through hundreds of IO cycles, when it usually hit the problem in the first few.

So please do finalize/review/land this patch.

Comment by Build Master (Inactive) [ 20/Apr/12 ]

Integrated in lustre-b2_1 #46 (all client and server build targets: x86_64/i686, el5/el6/sles11, inkernel/ofa)
LU-1320 llite: fix a race between readpage and releasepage (Revision f1c5f82f703530dd5ec5806c3c350ffee56ffbf6)

Result = SUCCESS
Oleg Drokin : f1c5f82f703530dd5ec5806c3c350ffee56ffbf6
Files :

  • lustre/llite/rw26.c
Comment by Build Master (Inactive) [ 23/Apr/12 ]

Integrated in lustre-master #493 (all client and server build targets: x86_64/i686, el5/el6/sles11, inkernel/ofa)
LU-1320 llite: fix a race between readpage and releasepage (Revision f88a39f7e2e5b2b0d15119e6390da7ef9b7fe6e1)

Result = SUCCESS
Oleg Drokin : f88a39f7e2e5b2b0d15119e6390da7ef9b7fe6e1
Files :

  • lustre/llite/rw26.c
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev #340 (all client and server build targets: x86_64/i686, el5/el6, inkernel)
LU-1320 llite: fix a race between readpage and releasepage (Revision f88a39f7e2e5b2b0d15119e6390da7ef9b7fe6e1)

Result = SUCCESS
Oleg Drokin : f88a39f7e2e5b2b0d15119e6390da7ef9b7fe6e1
Files :

  • lustre/llite/rw26.c
Generated at Sat Feb 10 01:15:36 UTC 2024 using Jira 9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c.