Lustre / LU-6370

Read performance degrades with increasing read block size.


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version: Lustre 2.8.0
    • Affects Versions: Lustre 2.6.0, Lustre 2.7.0, Lustre 2.8.0
    • Environment: Clients running a Cray 2.5 client, which contains a backport of the CLIO changes from 2.6. The problem has also been observed with vanilla 2.6 and 2.7 clients.
    • Severity: 3

    Description

      We're seeing substantial read performance degradation as the read block size increases. This has been observed on eslogin nodes as well as on internal login nodes. The data below was gathered on an external node.

      ext7:/lustre # dd if=/dev/urandom of=3gurandomdata bs=4M count=$((256*3))
      ext7:/lustre # for i in 4K 1M 4M 16M 32M 64M 128M 256M 512M 1G 2G 3G ; do echo -en "$i\t" ; dd if=3gurandomdata bs=${i} of=/dev/null 2>&1 | egrep copied ; done

      4K 3221225472 bytes (3.2 GB) copied, 13.9569 s, 231 MB/s
      1M 3221225472 bytes (3.2 GB) copied, 4.94163 s, 652 MB/s
      4M 3221225472 bytes (3.2 GB) copied, 6.24378 s, 516 MB/s
      16M 3221225472 bytes (3.2 GB) copied, 5.24595 s, 614 MB/s
      32M 3221225472 bytes (3.2 GB) copied, 5.48208 s, 588 MB/s
      64M 3221225472 bytes (3.2 GB) copied, 5.36964 s, 600 MB/s
      128M 3221225472 bytes (3.2 GB) copied, 5.12867 s, 628 MB/s
      256M 3221225472 bytes (3.2 GB) copied, 5.1467 s, 626 MB/s
      512M 3221225472 bytes (3.2 GB) copied, 5.31232 s, 606 MB/s
      1G 3221225472 bytes (3.2 GB) copied, 12.4088 s, 260 MB/s
      2G 3221225472 bytes (3.2 GB) copied, 339.646 s, 9.5 MB/s
      3G 3221225472 bytes (3.2 GB) copied, 350.071 s, 9.2 MB/s
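
      When reproducing these timings, note that the reads may be served partly from the client page cache; a minimal sketch (assuming root access on the client) for dropping cached data and client DLM locks between runs:

      # flush clean pages and cancel client DLM locks before each timed read
      echo 3 > /proc/sys/vm/drop_caches
      lctl set_param ldlm.namespaces.*.lru_size=clear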

      This shows up on a 1008-OST file system, but on smaller systems the impact is not nearly so substantial. On our 56-OST system we get:
      3G 3221225472 bytes (3.2 GB) copied, 4.77246 s, 675 MB/s

      Another test case, using C code rather than dd, produced similar results; it is based on a single large fread() call:

      int read_size = 256*1024*1024*2;
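      /* 256*1024*1024*2 elements of sizeof(float) (4 bytes) = 2 GiB requested in one fread() */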
      fread(buffer, sizeof(float), read_size, fp_in);

      Also, file striping information on the production and TDS filesystems:
      ext8:/lustre # lfs getstripe 3gurandomdata
      3gurandomdata
      lmm_stripe_count: 4
      lmm_stripe_size: 1048576
      lmm_pattern: 1
      lmm_layout_gen: 0
      lmm_stripe_offset: 833
      obdidx objid objid group
      833 5978755 0x5b3a83 0
      834 5953949 0x5ad99d 0
      835 5958818 0x5aeca2 0
      836 5966400 0x5b0a40 0

      ext8:/lustretds # lfs getstripe 3gurandomdata
      3gurandomdata
      lmm_stripe_count: 4
      lmm_stripe_size: 1048576
      lmm_pattern: 1
      lmm_layout_gen: 0
      lmm_stripe_offset: 51
      obdidx objid objid group
      51 1451231 0x1624df 0
      52 1452258 0x1628e2 0
      53 1450278 0x162126 0
      54 1444772 0x160ba4 0
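
      For comparison testing, a file with the same 4-stripe, 1 MiB layout can be pre-created with lfs setstripe before filling it with data; a minimal sketch, with the path used only as an example:

      lfs setstripe -c 4 -S 1M /lustre/3gurandomdata
      dd if=/dev/urandom of=/lustre/3gurandomdata conv=notrunc bs=4M count=$((256*3))   # write without truncating the pre-created file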

      So this appears to be happening only on wide-stripe file systems. Here is the output from 'perf top' while a 'bad' dd is running:

      8.74%  [kernel]  [k] _spin_lock
        - _spin_lock
          - 22.23% osc_ap_completion
               osc_extent_finish
               brw_interpret
               ptlrpc_check_set
               ptlrpcd_check
               ptlrpcd
               kthread
               child_rip
          + 13.76% cl_env_put
          + 12.37% cl_env_get
          +  7.10% vvp_write_complete
          +  6.51% kfree
          +  4.62% osc_teardown_async_page
          +  3.96% osc_page_delete
          +  3.89% osc_lru_add_batch
          +  2.69% kmem_cache_free
          +  2.23% osc_page_init
          +  1.71% sptlrpc_import_sec_ref
          +  1.64% osc_page_transfer_add
          +  1.57% osc_io_submit
          +  1.43% cfs_percpt_lock
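
      The same call-graph data can also be captured non-interactively for later analysis; a minimal sketch (assuming perf is installed on the client and the 'bad' dd is already running):

      perf record -a -g -- sleep 30    # system-wide profile with call graphs for 30 seconds
      perf report --stdio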

      Attachments

        1. LU6370_1GB_BS.lctldk.out.gz (0.2 kB)
        2. LU6370_2GB_BS.lctldk.out.gz (0.2 kB)
        3. LU6370_cpu_log_20150402.out.gz (20 kB)
        4. LU6370_max_cached_mb_20150402.out.gz (521 kB)
        5. lu-6370_perf_data.tgz (5 kB)
        6. lu-6370-perf.tgz (4.03 MB)


            People

              Assignee: jay, Jinshan Xiong (Inactive)
              Reporter: simmonsja, James A Simmons
              Votes: 0
              Watchers: 15
