[LU-1109] NFS server not responding when running parallel-scale test_iorsff Created: 15/Feb/12  Updated: 07/Jun/12  Resolved: 29/Feb/12

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.2.0
Fix Version/s: Lustre 2.2.0, Lustre 2.1.2

Type: Bug Priority: Blocker
Reporter: Sarah Liu Assignee: Jinshan Xiong (Inactive)
Resolution: Fixed Votes: 0
Labels: None
Environment:

server/client: 2.1.55 RHEL6-x86_64


Attachments: File 1109.tar.gz     File sarah-nfs-nfs-client.sh    
Severity: 3
Rank (Obsolete): 4710

 Description   

Hit the following error when running iorssf over NFS v3

Lustre: DEBUG MARKER: == parallel-scale test iorssf: iorssf == 13:45:23 (1329342323)
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying



 Comments   
Comment by Sarah Liu [ 15/Feb/12 ]

dmesg and trace from the NFS server (Lustre client).

Comment by Sarah Liu [ 15/Feb/12 ]

The same error is seen when running iorfpp. There is a similar bug for this issue: LU-849.

Lustre: DEBUG MARKER: == parallel-scale test iorfpp: iorfpp == 16:01:10 (1329350470)
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying
nfs: server 10.10.4.15 not responding, still trying

Comment by Sarah Liu [ 16/Feb/12 ]

NFS v4 hits the same issue; both iorssf and iorfpp failed.

Comment by Peter Jones [ 16/Feb/12 ]

Fanyong

Could you please look into this one?

Thanks

Peter

Comment by nasf (Inactive) [ 17/Feb/12 ]

Hi Sarah, can you show me your test parameters? I cannot reproduce the failure. My test parameters are:

build#: lustre-master 479
Arch: x86_64
client-16vm1: nfs-client
client-16vm2: nfs-client
client-16vm3: Lustre-MDS, Lustre-client, nfs-server
client-16vm4: Lustre-OSS

commands on client-16vm1:
PDSH="pdsh -t 300 -S -w" DSH="rsh" NAME=ncli RCLIENTS="client-16vm2" mds_HOST=client-16vm3 MDSDEV1=/dev/vda5 MDSSIZE=2000000 ost_HOST=client-16vm4 OSTCOUNT=3 OSTSIZE=4000000 OSTDEV1=/dev/vda5 OSTDEV2=/dev/vda6 OSTDEV3=/dev/vda7 MDS_MOUNT_OPTS="-o user_xattr,acl" OST_MOUNT_OPTS=" " DEBUG_SIZE=48 SHARED_DIRECTORY="/home/nasf/test_logs" cbench_DIR=/usr/bin cnt_DIR=/opt/connectathon DBENCH_LIB=/usr/share/dbench MIRUN_OPTIONS="-mca orte_rsh_agent rsh:ssh" ENABLE_QUOTA=yes bash auster -d /home/nasf/test_logs -r -s -v -k parallel-scale-nfsv4 --only iorssf iorfpp

PDSH="pdsh -t 300 -S -w" DSH="rsh" NAME=ncli RCLIENTS="client-16vm2" mds_HOST=client-16vm3 MDSDEV1=/dev/vda5 MDSSIZE=2000000 ost_HOST=client-16vm4 OSTCOUNT=3 OSTSIZE=4000000 OSTDEV1=/dev/vda5 OSTDEV2=/dev/vda6 OSTDEV3=/dev/vda7 MDS_MOUNT_OPTS="-o user_xattr,acl" OST_MOUNT_OPTS=" " DEBUG_SIZE=48 SHARED_DIRECTORY="/home/nasf/test_logs" cbench_DIR=/usr/bin cnt_DIR=/opt/connectathon DBENCH_LIB=/usr/share/dbench MIRUN_OPTIONS="-mca orte_rsh_agent rsh:ssh" ENABLE_QUOTA=yes bash auster -d /home/nasf/test_logs -r -s -v -k parallel-scale-nfsv3 --only iorssf iorfpp

Both of the above test runs succeed.

Comment by Sarah Liu [ 17/Feb/12 ]

Hmm, actually I set up the environment myself and ran parallel-scale.sh instead of parallel-scale-nfsv3/4.sh. Today I tried parallel-scale-nfsv3.sh and hit the same error.

The configuration I used today is below; please find the conf file in the attachment.
client-5: nfs client
fat-amd-3: mds/nfs server/lustre client
fat-intel-4: oss

[root@fat-amd-3 ~]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
/dev/sdc1 on /mnt/mds1 type lustre (rw,user_xattr,acl)
192.168.4.134@o2ib:/lustre on /mnt/lustre type lustre (rw,user_xattr,acl)
[root@client-5 ~]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.4.134:/mnt/lustre on /mnt/lustre type nfs (rw,nfsvers=3,addr=192.168.4.134)

sh auster -r -f sarah-nfs-nfs-client parallel-scale-nfsv3 --only "iorssf iorfpp"

Lustre: DEBUG MARKER: == parallel-scale-nfsv3 test iorssf: iorssf == 12:55:01 (1329512101)
nfs: server 192.168.4.134 not responding, still trying
nfs: server 192.168.4.134 not responding, still trying

Comment by Sarah Liu [ 17/Feb/12 ]

Conf file I used for NFS testing.

Comment by nasf (Inactive) [ 22/Feb/12 ]

It may be related to flock processing. Sarah, would you please mount the Lustre client with the option "-o flock" and re-test?
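
For reference, a minimal sketch of such a remount, assuming the device and mount point from the fat-amd-3 mount output quoted above (the umount/mount sequence itself is an assumption, not part of the original comment):

# remount the Lustre client with flock enabled
umount /mnt/lustre
mount -t lustre -o user_xattr,acl,flock 192.168.4.134@o2ib:/lustre /mnt/lustre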

Comment by Sarah Liu [ 22/Feb/12 ]

Fan Yong,

I reran the tests with -o flock on NFSv3; iorssf passes while iorfpp still fails:

Lustre: DEBUG MARKER: == parallel-scale-nfsv3 test iorssf: iorssf == 20:38:08 (1329971888)
Lustre: DEBUG MARKER: == parallel-scale-nfsv3 test iorfpp: iorfpp == 20:40:22 (1329972022)
nfs: server 192.168.4.134 not responding, still trying

[root@fat-amd-3 ~]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
/dev/sdc1 on /mnt/mds1 type lustre (rw,user_xattr,acl)
192.168.4.134@o2ib:/lustre on /mnt/lustre type lustre (rw,user_xattr,acl,flock)

Comment by nasf (Inactive) [ 23/Feb/12 ]

=====================
Feb 22 22:32:59 fat-amd-3 kernel: nfsd S 0000000000000001 0 4776 2 0x00000080
Feb 22 22:32:59 fat-amd-3 kernel: ffff8801051d55c0 0000000000000046 ffff8801051d55e0 ffffffff812730be
Feb 22 22:32:59 fat-amd-3 kernel: ffff8801051d5560 000000008d7427f5 0000000200000010 ffff8801051d5560
Feb 22 22:32:59 fat-amd-3 kernel: ffff88011989d0f8 ffff8801051d5fd8 000000000000f4e8 ffff88011989d0f8
Feb 22 22:32:59 fat-amd-3 kernel: Call Trace:
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff812730be>] ? number+0x2ee/0x320
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff81090d7e>] ? prepare_to_wait+0x4e/0x80
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff8117fcab>] pipe_wait+0x5b/0x80
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff81090a90>] ? autoremove_wake_function+0x0/0x40
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff811a324f>] splice_to_pipe+0x1ef/0x280
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff811a4522>] __generic_file_splice_read+0x442/0x560
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0874382>] ? lov_stripe_unlock+0x22/0x60 [lov]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa08a780f>] ? lov_attr_get_raid0+0x1ff/0x2f0 [lov]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff810967ef>] ? up+0x2f/0x50
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff811a2a20>] ? spd_release_page+0x0/0x20
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff811a468a>] generic_file_splice_read+0x4a/0x90
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0985429>] vvp_io_read_start+0x369/0x3d0 [lustre]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa056d1e8>] cl_io_start+0x68/0x170 [obdclass]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0571da0>] cl_io_loop+0x110/0x1c0 [obdclass]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0929bd7>] ll_file_io_generic+0x3c7/0x580 [lustre]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0458e3b>] ? cfs_hash_add_unique+0x1b/0x40 [libcfs]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0560d88>] ? cl_env_get+0x1a8/0x360 [obdclass]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa092e82c>] ll_file_splice_read+0xac/0x230 [lustre]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff811a28cb>] do_splice_to+0x6b/0xa0
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff811a2c1f>] splice_direct_to_actor+0xaf/0x1c0
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa034e700>] ? nfsd_direct_splice_actor+0x0/0x20 [nfsd]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa034f6f0>] nfsd_vfs_read+0x1a0/0x1c0 [nfsd]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa03510b7>] nfsd_read+0x1c7/0x2e0 [nfsd]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff8109b090>] ? getboottime+0x30/0x40
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0358955>] nfsd3_proc_read+0xd5/0x180 [nfsd]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa034943e>] nfsd_dispatch+0xfe/0x240 [nfsd]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa02b75a4>] svc_process_common+0x344/0x640 [sunrpc]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff8105e7f0>] ? default_wake_function+0x0/0x20
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa02b7be0>] svc_process+0x110/0x160 [sunrpc]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0349b62>] nfsd+0xc2/0x160 [nfsd]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffffa0349aa0>] ? nfsd+0x0/0x160 [nfsd]
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff81090726>] kthread+0x96/0xa0
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff8100c14a>] child_rip+0xa/0x20
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff81090690>] ? kthread+0x0/0xa0
Feb 22 22:32:59 fat-amd-3 kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20
=====================

This is the nfsd stack trace when the server hung. All of the nfsd threads were unexpectedly blocked on themselves. When nfsd calls nfsd_vfs_read() to read something from the Lustre client (the backend of the NFS server), it reads pages from the given file, fills them into a pipe, and then reads the contents back out of that pipe itself. The pipe is maintained by the thread itself, meaning the nfsd thread is both the pipe writer and the pipe reader. Normally the nfsd thread should not block when it fills/writes to the pipe, but according to the kernel implementation it is possible:

ssize_t splice_to_pipe(struct pipe_inode_info *pipe,
                       struct splice_pipe_desc *spd)
{
...
        pipe_lock(pipe);

        for (;;) {
                if (!pipe->readers) {
                        send_sig(SIGPIPE, current, 0);
                        if (!ret)
                                ret = -EPIPE;
                        break;
                }

                if (pipe->nrbufs < PIPE_BUFFERS) {
                        int newbuf = (pipe->curbuf + pipe->nrbufs) & (PIPE_BUFFERS - 1);
                        struct pipe_buffer *buf = pipe->bufs + newbuf;

                        buf->page = spd->pages[page_nr];
                        buf->offset = spd->partial[page_nr].offset;
                        buf->len = spd->partial[page_nr].len;
                        buf->private = spd->partial[page_nr].private;
                        buf->ops = spd->ops;
                        if (spd->flags & SPLICE_F_GIFT)
                                buf->flags |= PIPE_BUF_FLAG_GIFT;

                        pipe->nrbufs++;
                        page_nr++;
                        ret += buf->len;

                        if (pipe->inode)
                                do_wakeup = 1;

                        if (!--spd->nr_pages)
                                break;
                        if (pipe->nrbufs < PIPE_BUFFERS)
                                continue;

                        break;
                }
...

                pipe->waiting_writers++;
====>                pipe_wait(pipe);
                pipe->waiting_writers--;
        }

        pipe_unlock(pipe);
...
}

So if pipe::nrbufs or spd::nr_pages is handled improperly, the nfsd thread will block in "pipe_wait(pipe)" and nobody will ever wake it up.

Jay, do you have any idea where the Lustre I/O stack might affect the above two variables?

Comment by Bruno Faccini (Inactive) [ 23/Feb/12 ]

Maybe this can help: we hit this same problem with some specific clients running particular OS/distribution versions, and we were able to work around it by reducing the rsize/wsize used by those clients to below the server's pipe size.

Comment by Bruno Faccini (Inactive) [ 23/Feb/12 ]

Just some more details about which platform/reproducer triggered the problem on our site and later helped us to qualify and work around it.

We experienced the NFS-server thread hang with two different clients/platforms running SLES-10/SP3 and CentOS 5.2 with their original kernels, and always with the Connectathon-specific "write_read_mmap" test.

Comment by Bruno Faccini (Inactive) [ 23/Feb/12 ]

Oops, I forgot to mention that, to match the Lustre default stripe size, they were originally using rsize/wsize=1MB, and that to finally work around the issue triggered by our 2.6.32 kernel's NFSd server layer (which uses the splice() feature as a pipe to transfer data between the Lustre and NFS worlds), we switched to rsize/wsize=64KB (the pipe size being computed as PIPE_BUFFERS*PAGE_SIZE = 16*4KB = 64KB).
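
For reference, a minimal sketch of that workaround on the NFS client side, using the server address and export reported earlier in this ticket (the exact option values and the remount sequence are assumptions for illustration):

# cap rsize/wsize at 64KB (PIPE_BUFFERS * PAGE_SIZE = 16 * 4KB) so a single
# splice never needs more pipe buffers than the server's pipe can hold
umount /mnt/lustre
mount -t nfs -o nfsvers=3,rsize=65536,wsize=65536 192.168.4.134:/mnt/lustre /mnt/lustre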

Comment by Jinshan Xiong (Inactive) [ 23/Feb/12 ]

This problem exists if nfsd is reading across stripes and the first stripe buffer happens to be 64KB (PIPE_BUFFERS*PAGE_SIZE). I'll cook up a patch soon.

Bruno's way is a good workaround.

Comment by Jinshan Xiong (Inactive) [ 23/Feb/12 ]

Please try patch: http://review.whamcloud.com/2182

Comment by Sarah Liu [ 25/Feb/12 ]

Hi Xiong,

I ran your patch on the Juelich cluster and it seems there is still some problem with the IOR test. The NFS client hangs; below is the dmesg from the NFS server (Lustre client):
------------------------------------------------------------------------
Lustre: MGC192.168.119.12@tcp: Reactivating import
Lustre: MGC192.168.119.12@tcp: Connection restored to service MGS using nid 192.168.119.12@tcp.
LustreError: 31587:0:(genops.c:311:class_newdev()) Device lustre-OST0001-osc-ffff8806256ba000 already exists at 5, won't add
LustreError: 31587:0:(obd_config.c:327:class_attach()) Cannot create device lustre-OST0001-osc-ffff8806256ba000 of type osc : -17
LustreError: 31587:0:(obd_config.c:1363:class_config_llog_handler()) Err -17 on cfg command:
Lustre: cmd=cf001 0:lustre-OST0001-osc 1:osc 2:lustre-clilov_UUID LustreError: 31719:0:(mdc_locks.c:719:mdc_enqueue()) ldlm_cli_enqueue: -95
LustreError: 31719:0:(file.c:2221:ll_inode_revalidate_fini()) failure -95 inode 1045761 LustreError: 31768:0:(mdc_locks.c:719:mdc_enqueue()) ldlm_cli_enqueue: -95
LustreError: 31768:0:(file.c:2221:ll_inode_revalidate_fini()) failure -95 inode 1045761

Comment by Jinshan Xiong (Inactive) [ 25/Feb/12 ]

Hi Sarah,

This doesn't smell like the same problem. Was a recovery test running during the time IOR was running? The suspicious log is this one:

LustreError: 31719:0:(file.c:2221:ll_inode_revalidate_fini()) failure -95 inode 1045761 LustreError: 31768:0:(mdc_locks.c:719:mdc_enqueue()) ldlm_cli_enqueue: -95
LustreError: 31768:0:(file.c:2221:ll_inode_revalidate_fini()) failure -95 inode 1045761

But I can't be sure before I get the full log.

Can you please rerun the test with the following debug settings on the NFS server (Lustre client):
1. lctl set_param debug=-1
2. lctl set_param debug=-trace
3. lctl set_param debug_mb=200
4. lctl mark "XXXX IOR test starting..."

After you notice nfsd is hung, please do the following in addition to collecting the Lustre logs (all of the steps are combined into one sketch below):
5. echo t > /proc/sysrq-trigger
6. dmesg > dmesg.txt and upload dmesg.txt file.
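
A minimal sketch of the whole sequence, run on the NFS server (Lustre client); the lctl dk dump and the output paths are assumptions added for illustration, not part of the original request:

# before starting IOR: widen the Lustre debug mask (minus trace) and grow the debug buffer
lctl set_param debug=-1
lctl set_param debug=-trace
lctl set_param debug_mb=200
lctl mark "XXXX IOR test starting..."
# ... run the IOR test and wait until nfsd hangs ...
# collect the Lustre debug log, dump all task states, and save dmesg
lctl dk > /tmp/lustre-debug.txt
echo t > /proc/sysrq-trigger
dmesg > dmesg.txt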

Thanks.

Comment by Sarah Liu [ 27/Feb/12 ]

I didn't run a recovery test at that time; I will keep you updated if I have any more information.

Comment by Sarah Liu [ 28/Feb/12 ]

Hi,

I reran the test on Toro instead of Juelich; both NFSv3 and NFSv4 passed:

https://maloo.whamcloud.com/test_sets/395bfe5a-61d7-11e1-b462-5254004bbbd3
https://maloo.whamcloud.com/test_sets/00c195ae-61d8-11e1-b462-5254004bbbd3

Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,server,el5,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Peter Jones [ 29/Feb/12 ]

Landed for 2.2

Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,client,el5,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,server,el5,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,client,el5,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,client,ubuntu1004,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,client,el6,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,client,el5,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,client,sles11,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,server,el6,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,server,el5,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,server,el6,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,client,el6,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,client,el5,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,server,el6,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » x86_64,client,el6,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,server,el5,inkernel #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,client,el6,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c
Comment by Build Master (Inactive) [ 29/Feb/12 ]

Integrated in lustre-master » i686,server,el6,ofa #493
LU-1109 llite: do splice read stripe by stripe (Revision 211b00d651bbc57d9ab9d24d6d7e94b013957cf1)

Result = SUCCESS
Oleg Drokin : 211b00d651bbc57d9ab9d24d6d7e94b013957cf1
Files :

  • lustre/llite/vvp_io.c