[LU-2476] poor OST file creation rate performance with zfs backend

Details

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Blocker
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.4.0
    • Environment: RHEL 6.2
    • 2828

    Description

      We observe poor file creation rate performance with a ZFS backend. The attached jpeg was generated using the createmany benchmark, creating 1 million files in a single directory; files created per second were reported for every 10,000 files created. The green line shows results for Lustre 2.1 with the ldiskfs backend; the red line was with ZFS on Orion.

      The benchmark was run on the same hardware for both ldiskfs and ZFS:

      MDT: 24 GB RAM, 8 200GB SSD drives in two external SAS-2 enclosures
      (Linux MD-RAID10 for ldiskfs, 1 zpool with 8 mirrored pairs for zfs)
      OSS: 2 OSS nodes (three 8 TB* OSTs each for ldiskfs, one 72 TB OST each for ZFS)
      OST: NetApp 60-drive enclosure, six 24 TB RAID6 LUNs, 3 TB SAS drives, dual 4X QDR IB connections to each OST
      Network: 20 Gb/sec 4X DDR IB

      *LUNs were partitioned for ldiskfs for compatibility reasons
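
      The measurement methodology above can be reproduced in spirit with a short script. The following is a minimal sketch only, not the Lustre createmany utility; the target directory /mnt/lustre/createtest and the file-name pattern are placeholders, and the 10,000-file batch size matches the reporting interval described above.

      #!/usr/bin/env python3
      # Minimal sketch of the measurement described above: create files in one
      # directory and report the creation rate for every batch of 10,000 files.
      # This is not the Lustre createmany utility; the mount point below is a
      # placeholder.
      import os, sys, time

      def timed_creates(directory, total=1000000, batch=10000):
          os.makedirs(directory, exist_ok=True)
          start = time.time()
          for i in range(total):
              # open and close an empty file, roughly what createmany -o does
              with open(os.path.join(directory, "f%07d" % i), "w"):
                  pass
              if (i + 1) % batch == 0:
                  now = time.time()
                  rate = batch / max(now - start, 1e-9)
                  print("%8d files: %.0f creates/sec" % (i + 1, rate))
                  start = now

      if __name__ == "__main__":
          timed_creates(sys.argv[1] if len(sys.argv) > 1 else "/mnt/lustre/createtest")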

      Attachments

        1. aplot.eps (310 kB)
        2. bplot.eps (466 kB)
        3. clients.png (11 kB)
        4. creates.jpg (64 kB)
        5. dmu-api-patch-createmany-results.ps (23 kB)
        6. llog-mds-zwicky-1-slow (89 kB)
        7. mdstats.png (7 kB)
        8. mdtest-arcdata.png (29 kB)
        9. ori434-run1.tar.bz2 (62 kB)
        10. osp_object_create.png (128 kB)
        11. zfs.jpg (45 kB)

          Activity

            simmonsja James A Simmons added a comment - Old blocker for unsupported version

            sknolin Scott Nolin (Inactive) added a comment

            We have upgraded to zfs-0.6.3-1, and while it looks promising that this issue is solved (or greatly improved), I haven't been able to verify it. I managed to run about 20 million file writes, and need more than 30 million to go past the point where things degraded last time.

            I'd like to verify it's improved, but can't do it with that system now.

            While testing we did, unfortunately, see decreased numbers for stat; see LU-5212.

            Scott

            sknolin Scott Nolin (Inactive) added a comment - edited

            For the record, here is a graph of the performance degradation: mdtest-arcdata.png (attached).

            After upgrading RAM (and moving to SSDs) we now see it after about 30 million creates.

            This is on Lustre 2.4.0-1 with ZFS 0.6.2, with no additional patches.

            We have 258 GB RAM; the ZFS ARC options were doubled from the defaults:

            options zfs zfs_arc_meta_limit=10000000000
            options zfs zfs_arc_max=150000000000

            I know the new version of ZFS increases these defaults further. Ideally we'd be able to run this test with the new version too.

            Interestingly, now I see all the other mdtest file operations go bad at about the same time. Previously, with fewer resources and the default ZFS ARC options, the file creates tanked very quickly but the other operations stayed the same. In that case, though, we never managed to test past maybe a couple million files, as it was so slow.
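
            A minimal sketch for checking which ARC tunables are actually in effect against the "options zfs ..." lines quoted above; it assumes the ZFS on Linux sysfs layout at /sys/module/zfs/parameters, which may differ between releases:

            #!/usr/bin/env python3
            # Print the ARC module parameters currently in effect so they can be
            # compared with the "options zfs ..." lines in /etc/modprobe.d.
            # Assumes the ZoL sysfs path below; not every parameter exists on
            # every release, and a value of 0 usually means the built-in default.
            PARAMS = ("zfs_arc_max", "zfs_arc_min", "zfs_arc_meta_limit")

            for name in PARAMS:
                path = "/sys/module/zfs/parameters/" + name
                try:
                    with open(path) as f:
                        value = int(f.read().strip())
                    print("%-18s %20d bytes (%.1f GiB)" % (name, value, value / 2.0**30))
                except (IOError, ValueError):
                    print("%-18s not readable at %s" % (name, path))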


            sknolin Scott Nolin (Inactive) added a comment

            Just an update on changing the ARC parameters. It did take a while to happen, but now we have seen some negative effects: no OOMs, but Lustre threads taking a long time to complete, which indicates an overloaded system. So I assume there is not enough memory for the number of service threads.

            Scott


            prakash Prakash Surya (Inactive) added a comment

            FYI, the ARC changes landed last Friday in the ZoL master tree. So if you get a chance to update and run with those patches, I'd be interested to know whether you still have to "reset" things due to performance degradation.

            sknolin Scott Nolin (Inactive) added a comment - edited

            Here's some additional information about our current workload, in case it's pertinent.

            I think that with every file written, the mtime is changed and the permissions are changed by the user's process. In aggregate, according to Robinhood, there are on average 1 mtime change and about 1.5 permission-change operations per file.

            Earlier we ran into an issue with synchronous permission changes, because so many of them are performed (https://jira.hpdd.intel.com/browse/LU-3671); making them asynchronous helped significantly.

            Scott


            sknolin Scott Nolin (Inactive) added a comment

            We were just guessing that increasing the ARC size would delay the performance degradation. We haven't had any OOMs; we have 128 GB of RAM.

            I don't know if it has really helped, but it hasn't hurt. My guess is that it is either pointless or a minor help.

            Scott


            prakash Prakash Surya (Inactive) added a comment

            Yeah, those numbers look "normal" to me. I'd expect the ARC to be performing reasonably well at that point in time.

            I see you're manually tuning a few of the ARC parameters (i.e. arc_meta_limit, c_max, c_min, maybe others?); care to fill me in on why that is? I have a feeling your tuning of arc_meta_limit is helping you, but I'm curious what drove you to do that in the first place and what (if any) perceived benefits you're seeing. Have you been hitting any OOMs with a higher than normal arc_meta_limit? How much RAM is on the system?

            sknolin Scott Nolin (Inactive) added a comment - edited

            Prakash,

            Our user asked us to reset things, so we captured the arcstats info and beat down the cache size with that hack.

            However, the system wasn't really in an extremely degraded state. The ARC was full, but I think performance wasn't that bad: lower than his peak, but maybe 50% of it. Since he's the only user, I went ahead and did it anyhow. So I don't know how useful these numbers are for you:

            c                               4    59624418824
            c_min                           4    33000000000
            c_max                           4    96000000000
            hdr_size                        4    10851742848
            data_size                       4    25850144256
            other_size                      4    22922398496
            anon_size                       4    409600
            mru_size                        4    7485866496
            mru_ghost_size                  4    33932813824
            mfu_size                        4    18363868160
            mfu_ghost_size                  4    1910375424
            l2_size                         4    0
            l2_hdr_size                     4    0
            duplicate_buffers_size          4    0
            arc_meta_used                   4    56346426784
            arc_meta_limit                  4    66000000000
            

            Reading your previous posts, I think we did see the cache warm-up effect after our restart last time. Performance wasn't great at first, then got much better as data flowed in over time, and then got worse.

            Scott
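
            The numbers above come from the arcstats kstat. A minimal sketch for pulling the same figures directly from /proc/spl/kstat/zfs/arcstats and reporting arc_meta_used against arc_meta_limit; the field names are assumed to follow the zfs-0.6.x layout shown in this comment:

            #!/usr/bin/env python3
            # Read the ZFS ARC statistics and summarize the metadata usage being
            # discussed in this ticket. Field names are assumed to match the
            # arcstats dump above.
            def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
                stats = {}
                with open(path) as f:
                    for line in f.readlines()[2:]:      # skip the two kstat header lines
                        fields = line.split()
                        if len(fields) == 3:
                            name, _type, value = fields
                            stats[name] = int(value)
                return stats

            if __name__ == "__main__":
                s = read_arcstats()
                gib = 2.0 ** 30
                print("ARC target size c: %.1f GiB (c_min %.1f GiB, c_max %.1f GiB)"
                      % (s["c"] / gib, s["c_min"] / gib, s["c_max"] / gib))
                pct = 100.0 * s["arc_meta_used"] / max(s["arc_meta_limit"], 1)
                print("arc_meta_used:     %.1f GiB of %.1f GiB arc_meta_limit (%.0f%%)"
                      % (s["arc_meta_used"] / gib, s["arc_meta_limit"] / gib, pct))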


            prakash Prakash Surya (Inactive) added a comment

            Alex, does this get you what you want to see?

            # zeno1 /root > cat /proc/spl/kstat/zfs/zeno1/reads
            8 0 0x01 30 3360 242325718628 766164277255963
            UID      start            objset   object   level    blkid    aflags   origin                   pid      process         
            15928    766164156579412  0x2f     178      0        79114    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15929    766164159823356  0x2f     179      0        84980    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15930    766164162520769  0x2f     150      0        58855    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15931    766164163555034  0x2f     169      0        48748    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15932    766164174364152  0x2f     151      0        83592    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15933    766164177093706  0x2f     148      0        416      0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15934    766164184477506  0x2f     152      0        6507     0x64     dbuf_read_impl           3803     ll_ost03_000    
            15935    766164187822369  0x2f     153      0        24148    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15936    766164188694642  0x2f     167      0        13795    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15937    766164197669859  0x2f     154      0        36067    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15938    766164199348418  0x2f     162      0        84625    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15939    766164210502738  0x2f     156      0        63494    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15940    766164210929591  0x2f     163      0        57995    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15941    766164222820463  0x2f     154      0        10070    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15942    766164222864079  0x2f     157      0        53684    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15943    766164225929360  0x2f     160      0        34939    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15944    766164228828963  0x2f     161      0        5729     0x64     dbuf_read_impl           3803     ll_ost03_000    
            15945    766164231472223  0x2f     162      0        11351    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15946    766164234109879  0x2f     170      0        59220    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15947    766164237596524  0x2f     163      0        94437    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15948    766164240488786  0x2f     164      0        57877    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15949    766164240829980  0x2f     165      0        87378    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15950    766164243497409  0x2f     166      0        42756    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15951    766164246480267  0x2f     169      0        48773    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15952    766164249795764  0x2f     167      0        36727    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15953    766164253590137  0x2f     163      0        66820    0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15954    766164261417886  0x2f     168      0        53317    0x64     dbuf_read_impl           3803     ll_ost03_000    
            15955    766164265303986  0x2f     160      0        8178     0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15956    766164272799763  0x2f     171      0        1701     0x64     dbuf_read_impl           6828     ldlm_cn03_007   
            15957    766164273028298  0x2f     169      0        36867    0x64     dbuf_read_impl           3803     ll_ost03_000
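
            A minimal sketch for summarizing output like the above: it tallies rows in the per-pool reads kstat by originating process, so the busiest server threads stand out. The pool name zeno1 and the kstat path are taken from the command above; the column layout is assumed to match the dump shown here.

            #!/usr/bin/env python3
            # Count read records per (process, origin) from
            # /proc/spl/kstat/zfs/<pool>/reads. Column order is assumed to be the
            # one shown above: UID start objset object level blkid aflags origin
            # pid process.
            import collections, sys

            def count_reads(path):
                counts = collections.Counter()
                with open(path) as f:
                    lines = f.readlines()
                for line in lines[2:]:                  # skip kstat header and column-name lines
                    fields = line.split()
                    if len(fields) >= 10:
                        origin, process = fields[7], fields[9]
                        counts[(process, origin)] += 1
                return counts

            if __name__ == "__main__":
                path = sys.argv[1] if len(sys.argv) > 1 else "/proc/spl/kstat/zfs/zeno1/reads"
                for (process, origin), n in count_reads(path).most_common():
                    print("%-16s %-24s %6d reads" % (process, origin, n))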
            

            People

              Assignee: bzzz Alex Zhuravlev
              Reporter: nedbass Ned Bass (Inactive)
              Votes: 0
              Watchers: 19
