Lustre / LU-10948

client cache open lock after N opens

Details

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Minor
    • Affects Version: Lustre 2.9.0
    • Environment: cent server/sles client

    Description

      Listed as minor, but when a user does this we start to get phone calls from other users, and then page the POC to identify the offending code/user. The workaround is to terminate the user's job(s).

      Oleg has said that Lustre has an existing feature for a client to acquire the open lock, but it is off by default. It exists to mimic NFS behavior.

      The ideal change would let us specify the number of times a file is opened on a single client before the lock is acquired (e.g. on the 10th open).

      The use case is a naive user who loops like this in 5000+ Java threads:

      do_until_the_sun_turns_black() {
          fd = open(*my_thread_ID, O_APPEND)
          calculate_something_small_but_useful()
          write(fd, *fortytwo, 42)
          close(fd)
      }

      Users often don't have complete control over the code they run and as a result may not be able to quickly make even simple changes.

      Attachments

      Issue Links

      Activity

            [LU-10948] client cache open lock after N opens

            Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/35039/
            Subject: LU-10948 mdt: Remove openlock compat code with 2.1
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: f4e39f710f9069208594870b5cdd37879b46a404

            gerrit Gerrit Updater added a comment

            Oleg Drokin (green@whamcloud.com) uploaded a new patch: https://review.whamcloud.com/35039
            Subject: LU-10948 mdt: Remove openlock compat code with 2.1
            Project: fs/lustre-release
            Branch: master
            Current Patch Set: 1
            Commit: 1f0f1746f2b2f368e76b8b188f85efe96a7f69e3

            gerrit Gerrit Updater added a comment

            Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/32157/
            Subject: LU-10948 llite: Revalidate dentries in ll_intent_file_open
            Project: fs/lustre-release
            Branch: master
            Current Patch Set:
            Commit: 14ca3157b21d8bd22be29c9578819b72fd39a1e5

            gerrit Gerrit Updater added a comment

            Ah, sorry, that was ambiguous.  Reduction in the open rate, so, bad news.

            On a real system, this mdsrate open benchmark drops from 35K opens/second to around 11K.
            This is 10K opens per process, 64 processes, opens are random files from among 300000 existing files (created by mdsrate earlier): 
            aprun -n 64 /usr/lib64/lustre/tests/mdsrate -d /mnt/lustre/mdsrate --open --iters 10000 --nfile=300000

            On a much smaller VM, I see a drop from 8K opens/second to 4K with this benchmark.
            This is 8K opens per process, 4 processes, opens randomly selected from among 30000 existing files:
            mpirun -n 4 /usr/lib64/lustre/tests/mdsrate -d /mnt/lustre/mdsrate --open --iters 8000 --nfile=30000

            As discussed elsewhere, I'll open an LU for this in a few minutes.

            paf Patrick Farrell (Inactive) added a comment

            Patrick, when you say "reduction in the mdsrate open() benchmark", does that mean "reduction in the time taken" (== good), or "reduction in the open rate" (== bad)?

            adilger Andreas Dilger added a comment

            For what it's worth, we've observed a 70-80% reduction in the mdsrate open() benchmark from the change that https://review.whamcloud.com/#/c/32156/ reverses.

            Just something to consider - The impact of that change alone is enormous.

            paf Patrick Farrell (Inactive) added a comment

            Here is a comparison of the C code vs. Fortran, including an strace of the Fortran run.

             C 100 iterations
            ----------------  
            /proc/fs/lustre/mdc/nbptest2-MDT0000-mdc-ffff88203d2dd800/md_stats
            close               9
            intent_lock         11
            /proc/fs/lustre/mdc/nbptest2-MDT0000-mdc-ffff88203d2dd800/stats
            req_waittime        29
            req_active          29
            mds_close           9
            ldlm_cancel         9
            /proc/fs/lustre/llite/nbptest2-ffff88203d2dd800/stats
            write_bytes         100
            open                100
            close               100
            getxattr            10
            getxattr_hits       9
            inode_permission    200
            
            /proc/fs/lustre/mdt/nbptest2-MDT0000/exports/10.151.31.132@o2ib/ldlm_stats
            ldlm_enqueue        11
            ldlm_cancel         9
            /proc/fs/lustre/mdt/nbptest2-MDT0000/exports/10.151.31.132@o2ib/stats
            open                10
            close               9
            getxattr            1
            
            Fortran 100 iterations
            ---------------------
            /proc/fs/lustre/mdc/nbptest2-MDT0000-mdc-ffff88203d2dd800/md_stats
            close               99
            intent_lock         302 
            setattr             100
            /proc/fs/lustre/mdc/nbptest2-MDT0000-mdc-ffff88203d2dd800/stats
            req_waittime        497
            req_active          497
            mds_close           99
            ldlm_cancel         99
            obd_ping            1
            /proc/fs/lustre/llite/nbptest2-ffff88203d2dd800/stats
            write_bytes         100
            ioctl               200
            open                100
            close               100
            seek                200
            truncate            100
            getattr             101
            getxattr            100
            getxattr_hits       100 
            inode_permission    406
            
            /proc/fs/lustre/mdt/nbptest2-MDT0000/exports/10.151.31.132@o2ib/ldlm_stats
            ldlm_enqueue        198
            ldlm_cancel         99
            /proc/fs/lustre/mdt/nbptest2-MDT0000/exports/10.151.31.132@o2ib/stats
            open                99
            close               99
            getattr             100
            setattr             100
            

             

            fortran.100iter.strace

            mhanafi Mahmoud Hanafi added a comment
            green Oleg Drokin added a comment -

            Does the result change if you only run one instance of it, not 10? (Are all 10 on the same node?)

            Do you have the ability to test on 2.11? I wonder if it's just some 2.10 difference that makes this not work as expected, though a cursory check does not seem to indicate anything like that.

            If a single-process test still results in elevated open counts, please try my C reproducer; if that one works as expected, please run your reproducer under strace.


            This is my test case.

             

                   program multi_stats
            ! compile with ifort -o multi_stats multi_stats.f -lmpi
                  use mpi
                  integer (kind=8), parameter :: niter=10000
                  integer (kind=8) :: i
                  real (kind=8) :: t0, t1, t2
                  logical :: ex
                  character*50 :: filename
                  call mpi_init(ierr)
                  call mpi_comm_rank(mpi_comm_world, myid, ierr)
                  call mpi_comm_size(mpi_comm_world, nprocs, ierr)
                  t0 = mpi_wtime()
                  write(filename,'(a,i0.8)') "test.", myid
            !      print *, 'my filename is', filename
                  open(10, file=filename, status="new", IOSTAT=IERR)
                  close(10)
                  do i = 1,niter
                     open(10, file=filename, status='old', position='append')
                     write ( 10,*) "test", i
                     close(10)
            !      if (myid .eq. nprocs-1) write(0,*) i
            !      call sleep(1)
                  call mpi_barrier(mpi_comm_world, ierr)
                  enddo
             60   call mpi_barrier(mpi_comm_world, ierr)
                  t1 = mpi_wtime()
                  if (myid .eq. 0) print *, 'Total runtime = ',t1 - t0
                  call mpi_finalize(ierr)
                  end 
            
            mhanafi Mahmoud Hanafi added a comment
            green Oleg Drokin added a comment -

            Thanks for the results!

            Yes, the change 32156 is server-side, and it's a pretty important part of the set; that's why you don't observe any positive impact when you don't patch the server.

            That said, your open counts are still elevated, which means at least something is not working as planned, unless I misunderstood your test case.


            Attaching our testing results.

            1. We observed a big difference in run time.
            2. Non-patched clients running alongside patched clients had the same results as patched clients.

            LU-10948_testing.pdf

             

            mhanafi Mahmoud Hanafi added a comment

            People

              green Oleg Drokin
              Bob.C Bob Ciotti (Inactive)
              Votes: 0
              Watchers: 18
