[LU-983] tar is slow for small files using 1.8.6 vs 1.8.5 Created: 11/Jan/12  Updated: 01/Jun/12  Resolved: 01/Jun/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 1.8.6
Fix Version/s: Lustre 2.2.0, Lustre 2.1.2, Lustre 1.8.8

Type: Bug Priority: Major
Reporter: James Karellas Assignee: Di Wang
Resolution: Fixed Votes: 0
Labels: performance
Environment:

cent 5.5/5.6 on lustre servers, sles11sp1 client (both ofed 1.5.2 and 1.5.3.1 have been tested on sles11sp1 clients)


Attachments: File git.lg.nas-1.8.5-5     File git.lg.oracle-1.8.5     File wcdebug.out.gz     File wcdebug.out.gz    
Severity: 3
Epic: client, performance
Rank (Obsolete): 4698

 Description   

This is all with respect to small files. Large files seem to be ok.

We have been focusing on tar, but suspect the problem is with small
file read performance in general on 1.8.6 clients. We are aware that
WC has a version of tar (LU-682), which we tried but it didn't help at all.

On a 1.8.5 host, tar is performing 10K reads, and the RPCs
are typically falling into the 128 page bin of the histogram. Tar performs
as expected.

On a 1.8.6 host, tar is performing 10K reads, and the RPCs are typically
one page in size. That's bad.

We are having some problems with "collectl" on 1.8.6 filesystems or we'd
have better data for you.
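
For reference, the pages-per-RPC histograms quoted in this ticket come from the per-OSC rpc_stats interface; clearing the counters before a run keeps the histogram specific to that run. A minimal example on the client (the tar paths are only illustrative):

lctl set_param osc.*.rpc_stats=0
time tar -cf /tmp/test.tar small_files_directory
lctl get_param osc.*.rpc_stats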

As far as real-world impact, we have several complaints from users who are
doing small IO. Jay Lan reported this to wc-discuss but we haven't seen
anything come across yet. Here is what he reported:

Our users reported a serious performance issue in 1.8.6. The time
needed to tar a directory of 14k files (total size 6.3G, tar file
with a stripe count of 30) is ~6 minutes in Lustre 1.8.5, but ~120 minutes
using 1.8.6. Our nas-1.8.6 is very close to the 1.8.6-wc release, but
our nas-1.8.5 was based on LLNL's version.

Is there a known issue with tar'ing a large number of small files in
1.8.6? I am aware of the lustre-tar and downloaded the rpm
from the Whamcloud site for our admin. That version does
not seem to help.



 Comments   
Comment by Peter Jones [ 12/Jan/12 ]

Lai

Could you please look into this one?

Thanks

Peter

Comment by Andreas Dilger [ 12/Jan/12 ]

At a first guess, it may be that max_read_ahead_whole_mb is being ignored for some reason. This should cause the whole file (for file size < max_read_ahead_whole_mb, default 2MB) to be read at the client.

The other issue is why tar is only doing 10kB reads. Lustre is reporting a 2MB blocksize to stat(2), so tar should be using this as the preferred IO size. I agree that this is not the core issue, since even a 10kB read should be triggering at least a 12kB RPC initially, and then readahead due to sequential access.
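
A quick way to check both points on a client would be something like the following (paths are illustrative, and the exact stat/strace options depend on the installed versions):

stat -c 'blksize=%o size=%s' /mnt/lustre/testdir/smallfile     # %o is the optimal I/O size hint; ~2MB expected here
strace -f -e trace=read -o /tmp/tar.strace tar -cf /tmp/test.tar /mnt/lustre/testdir
grep 'read(' /tmp/tar.strace | head                            # shows the read sizes tar actually issues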

We need to also investigate what patches exist in the LLNL 1.8.5 above stock Oracle 1.8.5, since I recall Brian Behlendorf working on the readahead code, and possibly this was not landed to 1.8.6? I think there is a bugzilla bug with their patch list.

Comment by James Karellas [ 12/Jan/12 ]

As a data point, we tried unpatched client versions of 1.8.6 and 1.8.7 with no change in behavior. We are working on getting you the set of our own patches that we've added to both 1.8.6 and 1.8.5. That will be uploaded shortly.

Comment by Jay Lan (Inactive) [ 12/Jan/12 ]

Attached are short form of git logs of two branches: oracle-1.8.5 and nas-1.8.5-5.

The commits can be seen at
https://github.com/jlan/lustre-nas/commits/nas-1.8.5

The nas-1.8.5-5 branch was created on tag "1.8.5.0-5nas". The oracle-1.8.5 branch was created using "1.8.5" tag of b1_8 branch of oracle.

Comment by Jay Lan (Inactive) [ 12/Jan/12 ]

Since the vanilla 1.8.6-wc1 build displayed the same tar RPC problem as the nas version of 1.8.6, I assume you do not need the diffs for 1.8.6. Let me know if you still want them.

Comment by Jay Lan (Inactive) [ 12/Jan/12 ]

I just built a vanilla 1.8.5 Lustre client (using tag "1.8.5" off the b1_8 branch of the Oracle tree). It took 8min 27sec. Not as good as the ~6min of our 1.8.5-5nas, but great compared with the ~120min in the original report.

The rpc_stats showed 77% of read RPCs in the 2-pages-per-rpc bin.

                        read                 |          write
pages per rpc    rpcs    %  cum %            |  rpcs    %  cum %
1:                 31    5      5            |     0    0      0
2:                415   77     83            |     0    0      0
4:                  5    0     84            |     2    1      1
8:                  3    0     84            |     0    0      1
16:                 2    0     85            |     0    0      1
32:                14    2     87            |     1    0      2
64:                56   10     98            |     4    3      5
128:                8    1     99            |   106   84     90
256:                1    0    100            |    12    9    100

[NOTE] I reran the test on Jan 13 with the same binary. The time was back to < 6min
and most of the read RPCs were in the 128-page bin. I do not know why, but we can ignore
this comment.

Comment by Jay Lan (Inactive) [ 14/Jan/12 ]

The offending commit is b7eb1d7: LU-15 slow IO with read-intense application.

After reverting that commit, small-file tar performance is back: most RPCs were using 128 pages per RPC with our 1.8.6.81 client, as the rpc_stats below show.

snapshot_time: 1326582457.990047 (secs.usecs)
read RPCs in flight: 0
write RPCs in flight: 0
dio read RPCs in flight: 0
dio write RPCs in flight: 0
pending write pages: 0
pending read pages: 0

                        read                 |          write
pages per rpc    rpcs    %  cum %            |  rpcs    %  cum %
1:                  0    0      0            |     0    0      0
2:                  0    0      0            |     0    0      0
4:                  0    0      0            |     0    0      0
8:                  0    0      0            |     0    0      0
16:                 0    0      0            |     0    0      0
32:                 0    0      0            |     0    0      0
64:                 0    0      0            |     0    0      0
128:              106   89     89            |     0    0      0
256:               13   10    100            |     0    0      0

                        read                 |          write
rpcs in flight   rpcs    %  cum %            |  rpcs    %  cum %
0:                119  100    100            |     0    0      0

                        read                 |          write
offset           rpcs    %  cum %            |  rpcs    %  cum %
0:                119  100    100            |     0    0      0
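
For completeness, reproducing the revert test above would look roughly like this against a 1.8.6 client tree (build and install steps depend on local packaging and are omitted):

git revert b7eb1d7    # "LU-15 slow IO with read-intense application"
# rebuild and reinstall the client, remount, then rerun the tar test and re-check rpc_stats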

Comment by Peter Jones [ 17/Jan/12 ]

Wangdi

Could you please look at this report and see if you can understand why your patch from LU-15 could be responsible for this behaviour?

Thanks

Peter

Comment by Di Wang [ 17/Jan/12 ]

Are you running tar with multiple threads? Could you please post the tar command line here?
And could you please collect a client-side debug log for me when you run tar?

lctl set_param debug="+reada +vfstrace"
lctl set_param debug_mb="50"
run tar
lctl dk > /tmp/debug.out

And post the debug log here, or somewhere I can get it.

Comment by Di Wang [ 17/Jan/12 ]

Could you please report these three readahead settings of your filesystem:

lctl get_param llite.*.max_read_ahead_whole_mb
lctl get_param llite.*.max_read_ahead_per_file_mb
lctl get_param llite.*.max_read_ahead_mb
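
For reference, these are per-mount llite tunables; they can be read in one call and, if needed, adjusted at run time with set_param (the value shown is just the 2MB default mentioned above, not a recommendation):

lctl get_param llite.*.max_read_ahead_whole_mb llite.*.max_read_ahead_per_file_mb llite.*.max_read_ahead_mb
lctl set_param llite.*.max_read_ahead_whole_mb=2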

Comment by Jay Lan (Inactive) [ 18/Jan/12 ]

service33 /nobackupp6/jlan # lctl set_param debug="+reada +vfstrace"
lnet.debug=+reada +vfstrace
service33 /nobackupp6/jlan # lctl set_param debug_mb="50"
lnet.debug_mb=50
service33 /nobackupp6/jlan # lctl get_param llite.*.max_read_ahead_whole_mb
llite.nbp6-ffff88041aa24000.max_read_ahead_whole_mb=2
service33 /nobackupp6/jlan # lctl get_param llite.*.max_read_ahead_per_file_mb
llite.nbp6-ffff88041aa24000.max_read_ahead_per_file_mb=40
service33 /nobackupp6/jlan # lctl get_param llite.*.max_read_ahead_mb
llite.nbp6-ffff88041aa24000.max_read_ahead_mb=40
service33 /nobackupp6/jlan #

The test command I ran was:

  time tar -cf lustre-1.8.6-wc1.tar small_files_directory

I let it run for 1min 20min before terminating it. The debug output file is uploaded as "wcdebug.out.gz".

Comment by Jay Lan (Inactive) [ 18/Jan/12 ]

The debug output file (gzipped) from running the "tar -cf" command.

Comment by Di Wang [ 18/Jan/12 ]

Thanks! I posted a new patch here: http://review.whamcloud.com/#change,1983
Could you please check whether it fixes your problem?
And please provide the debug log as well.

Comment by Jay Lan (Inactive) [ 18/Jan/12 ]

Di, your patch worked for me.

  time tar -cf wc-LU15-fix.tar vertconvfoot-ASCENDS-20120102.13.39.27.UTC-20070723

real 5m18.394s
user 0m0.400s
sys 0m26.890s

The read RPCs fell mostly into the 128-page bin, as expected.

Thanks!

Comment by Kent Engström (Inactive) [ 07/Feb/12 ]

An added data point: I just tried this patch while building Lustre to test a patch for the unrelated LU-974 issue.

My "unpatched client" is:

116f41f LU-987 build: Fail to create ldisk rpms
4ef1c46 (review-1976-3) LU-974 security: ignore umask if acl enabled
cc5c6bd LU-358 Add the branch and commiti-id to the yml.
66cd9a7 LU-534 test: nfsread_orphan_file test
069d0b6 LU-534 mds: correct assertion
e7c7f04 LU-955 build: fix bad lustre-backend-fs dependency
dca979f LU-805 quota: lfs quota doesn't print grace time correctly
4ebec99 LU-649 io: DIO doesn't need lock i_mutex
...

(the LU-974 patch from review.whamcloud.com via git, with the LU-987 patch cherry-picked on top).

My "patched client" is the same, but with

91776c2 LU-983 llite: align readahead to 1M after ra_max adjustment

cherry-picked on top of the other commits.

I tested against a directory containing 10001 files, each 128 kilobytes in size.
The directory was on a filesystem (in use at the time) spread over 54 OSTs, 3 per OSS, with 1G ethernet
to each OSS and to the test client machine.

On the test client I ran tar, writing the tar file to local disk:
echo 3 > /proc/sys/vm/drop_caches
time tar cf /root/tartestdir.tar /mnt/lustrefilesystem/tartestdir

With the unpatched client, this took 3m 35s (best of three attempts, worst was 3m 38s).
With the patched client, this took 50s (best of four attempts, worst was 1m 22s).
Speedup ~ 4.4x

Comment by Kent Engström (Inactive) [ 16/Feb/12 ]

Will this patch (http://review.whamcloud.com/#change,1983) be landed on the b1_8 branch?

Comment by Peter Jones [ 16/Feb/12 ]

Yes it should land on b1_8 and master

Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » x86_64,server,el5,ofa #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » x86_64,client,el5,inkernel #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » x86_64,client,ubuntu1004,inkernel #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » x86_64,client,el5,ofa #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » x86_64,server,el5,inkernel #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » i686,client,el5,inkernel #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » i686,client,el5,ofa #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » i686,server,el5,ofa #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » i686,server,el5,inkernel #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » x86_64,client,el6,inkernel #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 23/Feb/12 ]

Integrated in lustre-b1_8 » i686,client,el6,inkernel #176
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision f1a4b79e378407e4161c2a922478d625a38452b5)

Result = SUCCESS
Johann Lombardi : f1a4b79e378407e4161c2a922478d625a38452b5
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,server,el5,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,client,el5,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,server,el5,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,client,el5,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,client,ubuntu1004,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,client,el5,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,client,sles11,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,client,el5,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,client,el6,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,server,el5,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,server,el6,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,server,el5,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,server,el6,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,server,el6,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » i686,client,el6,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,client,el6,ofa #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,client,el6,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 05/Mar/12 ]

Integrated in lustre-master » x86_64,server,el6,inkernel #503
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev » x86_64,client,el5,inkernel #340
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev » i686,client,el6,inkernel #340
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev » i686,server,el5,inkernel #340
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev » x86_64,server,el6,inkernel #340
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev » i686,client,el5,inkernel #340
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev » x86_64,server,el5,inkernel #340
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/tests/sanity.sh
  • lustre/llite/rw.c
Comment by Build Master (Inactive) [ 02/May/12 ]

Integrated in lustre-dev » x86_64,client,el6,inkernel #340
LU-983 llite: align readahead to 1M after ra_max adjustment (Revision 5093fef8392c866f1998d6e68b6536d10daabb4d)

Result = SUCCESS
Oleg Drokin : 5093fef8392c866f1998d6e68b6536d10daabb4d
Files :

  • lustre/llite/rw.c
  • lustre/tests/sanity.sh
Comment by Peter Jones [ 01/Jun/12 ]

Landed for 1.8.8, 2.1.2 and 2.2
