Details
- Type: Improvement
- Resolution: Fixed
- Priority: Minor
Description
Internal testing with IOR shows that passing the inode timestamps (i_mtime, i_atime, i_ctime) at initial creation, on top of LU-12151, has a significant effect on write performance:
Activity                   +/- MiB/s
-------------------------  ---------
Direct I/O Write           +25%
Direct I/O Pre-Fill Write  +21%
Direct I/O Read            +13%
Buffered I/O Write         +22%
Buffered I/O Read          +4%
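The patch itself is on the ldiskfs/OSD side (see the patch subjects below), but the idea behind the numbers above can be shown with a toy userspace model. This is only an illustrative sketch, not the actual patch; every name in it (toy_inode, toy_attrs, create_with_times, ...) is hypothetical. It contrasts creating an inode with default times and correcting them afterwards against applying the caller-supplied timestamps during initial creation, so the inode is dirtied once instead of twice.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

struct toy_attrs {
    struct timespec atime, mtime, ctime;
};

struct toy_inode {
    struct timespec i_atime, i_mtime, i_ctime;
    int dirty_count;            /* stands in for journal/dirty-inode work */
};

static void mark_dirty(struct toy_inode *inode)
{
    inode->dirty_count++;
}

/* Old pattern: create the inode with "now", then fix the times separately. */
static void create_then_settimes(struct toy_inode *inode,
                                 const struct toy_attrs *attrs)
{
    clock_gettime(CLOCK_REALTIME, &inode->i_atime);
    inode->i_mtime = inode->i_ctime = inode->i_atime;
    mark_dirty(inode);              /* first update at creation */

    inode->i_atime = attrs->atime;  /* second update to set the real times */
    inode->i_mtime = attrs->mtime;
    inode->i_ctime = attrs->ctime;
    mark_dirty(inode);
}

/* New pattern: the caller's timestamps are applied during initial creation. */
static void create_with_times(struct toy_inode *inode,
                              const struct toy_attrs *attrs)
{
    inode->i_atime = attrs->atime;
    inode->i_mtime = attrs->mtime;
    inode->i_ctime = attrs->ctime;
    mark_dirty(inode);              /* single update */
}

int main(void)
{
    struct toy_attrs attrs;
    struct toy_inode a = { .dirty_count = 0 };
    struct toy_inode b = { .dirty_count = 0 };

    clock_gettime(CLOCK_REALTIME, &attrs.atime);
    attrs.mtime = attrs.ctime = attrs.atime;

    create_then_settimes(&a, &attrs);
    create_with_times(&b, &attrs);

    printf("old pattern: %d inode updates\n", a.dirty_count);
    printf("new pattern: %d inode updates\n", b.dirty_count);
    return 0;
}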
Issue Links
- is related to LU-12151 metadata performance difference on root and non-root user (Resolved)
Activity
"Jian Yu <yujian@whamcloud.com>" uploaded a new patch: https://review.whamcloud.com/46305
Subject: LU-13239 ldiskfs: pass inode timestamps at initial creation
Project: fs/lustre-release
Branch: b2_14
Current Patch Set: 1
Commit: 0e219853f5ae771f67145d03680a187e246736fd
Oleg Drokin (green@whamcloud.com) merged in patch https://review.whamcloud.com/37556/
Subject: LU-13239 ldiskfs: pass inode timestamps at initial creation
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 5bb641fa61175fd0fe63e830219d88304b5162c3
Overall we do get some improvement, except for buffered I/O re-writes (41242.91 MiB/s dropped to 41176.08 MiB/s).
It's worth an additional couple of runs to determine whether this is a re-write regression or a buffered I/O regression.
IOR results on master (c54b6ca2bdb5fb350117138106ffe37cdb9b7046) and on master with this patch:
srun -n 128 -N 32 -w c-lmo[079,081,084,086-105,107-109,116-121] IOR -vv -w -F -b 163840m -t 4m -i 5 -k -m -D 180 -B -o /mnt/testfs/v2/out.write
srun -n 128 -N 32 -w c-lmo[079,081,084,086-105,107-109,116-121] IOR -vv -w -F -b 163840m -t 4m -i 5 -k -m -D 180 -o /mnt/testfs/v2/out.write
srun -n 128 -N 32 -w c-lmo[079,081,084,086-105,107-109,116-121] IOR -vv -w -F -b 163840m -t 4m -i 1 -k -D 900 -o /mnt/testfs/v2/out.read
srun -n 128 -N 32 -w c-lmo[079,081,084,086-105,107-109,116-121] IOR -vv -r -F -b 36658216960 -t 1m -i 5 -k -D 90 -B -o /mnt/testfs/v2/out.read
srun -n 128 -N 32 -w c-lmo[079,081,084,086-105,107-109,116-121] IOR -vv -r -F -b 36658216960 -t 1m -i 5 -k -D 90 -o /mnt/testfs/v2/out.read

master:
Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)   Op grep #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize
---------  ---------  ---------  ----------  -------  ---------  ---------  ----------  -------  ---------
write       18863.82   18791.60    18825.77    27.30   29066.41   29061.92    29064.71     1.57  180.38643  128 4 5 1 0 1 0 0 1 171798691840 4194304 3562138501120 -1 POSIX EXCEL
write       42220.02   40370.32    41242.91   654.85   29040.74   29036.04    29038.53     2.02  180.54908  128 4 5 1 0 1 0 0 1 171798691840 4194304 7893113896960 -1 POSIX EXCEL
write       31888.98   31888.98    31888.98     0.00    7972.25    7972.25     7972.25     0.00  657.64159  128 4 1 1 0 1 0 0 1 171798691840 4194304 21990232555520 -1 POSIX EXCEL
read        22783.78   21939.32    22392.50   345.07   49367.72   49347.83    49361.92     7.33   90.65450  128 4 5 1 0 1 0 0 1 36658216960 1048576 2165529640960 -1 POSIX EXCEL
read        19408.10   18966.88    19251.06   149.11   49210.09   48625.45    48905.19   208.40   91.50280  128 4 5 1 0 1 0 0 1 36658216960 1048576 1848417189888 -1 POSIX EXCEL

this patch:
Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)   Op grep #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize
---------  ---------  ---------  ----------  -------  ---------  ---------  ----------  -------  ---------
write       18891.51   18776.35    18838.36    37.66   29052.86   29048.29    29050.24     1.66  180.47632  128 4 5 1 0 1 0 0 1 171798691840 4194304 3563333877760 -1 POSIX EXCEL
write       41928.60   40658.43    41176.08   489.07   29054.88   28977.41    29029.34    28.26  180.60641  128 4 5 1 0 1 0 0 1 171798691840 4194304 7937217003520 -1 POSIX EXCEL
write       32415.68   32415.68    32415.68     0.00    8103.92    8103.92     8103.92     0.00  646.95600  128 4 1 1 0 1 0 0 1 171798691840 4194304 21990232555520 -1 POSIX EXCEL
read        23532.90   22701.12    23178.54   277.01   49439.72   49422.36    49432.09     7.06   90.52581  128 4 5 1 0 1 0 0 1 36658216960 1048576 2197518548992 -1 POSIX EXCEL
read        20218.87   19560.56    19881.25   213.16   49472.26   48715.86    48946.76   270.99   91.42621  128 4 5 1 0 1 0 0 1 36658216960 1048576 1907682705408 -1 POSIX EXCEL
We need a more careful comparison run of current master with and without this patch to gauge where things stand.
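As a quick sanity check on the buffered re-write numbers quoted above, here is a minimal C sketch doing plain arithmetic on the reported means and standard deviations (no new measurements; the constants are copied from the tables above):

#include <stdio.h>

int main(void)
{
    /* Mean and std dev of the 5-iteration buffered re-write runs above. */
    const double mean_master = 41242.91, sd_master = 654.85;
    const double mean_patch  = 41176.08, sd_patch  = 489.07;

    double delta = mean_patch - mean_master;   /* about -66.8 MiB/s */
    double pct = 100.0 * delta / mean_master;  /* about -0.16% */

    printf("delta: %.2f MiB/s (%.2f%%)\n", delta, pct);
    printf("std dev: master %.2f MiB/s, patch %.2f MiB/s\n",
           sd_master, sd_patch);
    /* The drop is far smaller than either run-to-run std dev, so more
     * iterations are needed before calling it a regression. */
    return 0;
}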
I think these patches are interesting, but the performance numbers confuse me. I'd think that this would improve the performance of mdtest, because it avoids extra operations on each file create, but I can't imagine how the file create performance would affect IOR performance (which does the create before the timing starts).
Shaun Tancheff (shaun.tancheff@hpe.com) uploaded a new patch: https://review.whamcloud.com/37557
Subject: LU-13239 ldiskfs: pass i_xtime down optimization (rhel 8.1)
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 3edcb2b8f9c949c2228e320531c18511c97fc1f6
"Oleg Drokin <green@whamcloud.com>" merged in patch https://review.whamcloud.com/46305/
Subject: LU-13239 ldiskfs: pass inode timestamps at initial creation
Project: fs/lustre-release
Branch: b2_14
Current Patch Set:
Commit: b9ed982a57b3833eb5abe7bc36d489da6ad1b2c2