LU-3764

sanity test_116a: stripe QOS didn't balance free space


    Description

      This issue was created by maloo for girish <gshilamkar@ddn.com>

      This issue relates to the following test suite run: http://maloo.whamcloud.com/test_sets/d8c6d5b4-0537-11e3-925a-52540035b04c.

      The sub-test test_116a failed with the following error:

      == sanity test 116a: stripe QOS: free space balance ===================== 22:07:35 (1376456855)
      Free space priority error: get_param: /proc/{fs,sys}/{lnet,lustre}/lov/clilov/qos_prio_free: Found no match
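      The error above is the parameter lookup itself: lctl found no
      qos_prio_free entry under either /proc/fs/lustre or /proc/sys/lustre
      on the client. A minimal sketch for probing the tunable by hand,
      using the glob from the log (the exact component name, e.g.
      lov.lustre-clilov-*, depends on the fsname):

        # List matching QOS tunables; an error here reproduces the same
        # "Found no match" failure the test hit.
        lctl list_param lov.*.qos_prio_free
        # What the test would read once the parameter exists:
        lctl get_param -n lov.*.qos_prio_free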
      CMD: client-26vm7 lctl set_param -n osd*.MD.force_sync 1
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      CMD: client-26vm7 lctl get_param -n osc.MDT.sync_*
      Waiting for local destroys to complete
      OST kbytes available: 163812 172220 172220 163816 172240 161968 172000
      Min free space: OST 5: 161968
      Max free space: OST 4: 172240
      Filling 25% remaining space in OST5 with 40492Kb
      ....................CMD: client-26vm7 lctl get_param -n lov.*.qos_maxage
      Waiting for local destroys to complete
      OST kbytes available: 164036 172220 172000 164036 172240 112592 172224
      Min free space: OST 5: 112592
      Max free space: OST 4: 172240
      diff=59648=52% must be > 20% for QOS mode...ok
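      For reference, the 52% figure is the max/min free-space gap expressed
      relative to the smallest OST. A sketch of the check as the log implies
      it, using shell integer arithmetic and hypothetical variable names:

        MINV=112592; MAXV=172240      # min/max KB free from the log above
        DIFF=$((MAXV - MINV))         # 59648
        PCT=$((DIFF * 100 / MINV))    # 52, after integer truncation
        # QOS striping is only exercised when the imbalance exceeds 20%.
        [ $PCT -gt 20 ] && echo "diff=$DIFF=$PCT% must be > 20% for QOS mode...ok"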
      writing a bunch of files to QOS-assigned OSTs
      ...........................................................................................................................................................................................................wrote 203 200k files
      CMD: client-26vm7 lctl get_param -n lov.*.qos_maxage
      Note: free space may not be updated, so measurements might be off
      Waiting for local destroys to complete
      OST kbytes available: 155036 164800 163420 158036 166440 113608 164624
      Min free space: OST 5: 113608
      Max free space: OST 4: 166440
      free space delta: orig 59648 final 52832
      Wrote -1016 to smaller OST 5
      Wrote 5800 to larger OST 4
      lustre-OST0005_UUID
      435 files created on smaller OST 5
      lustre-OST0004_UUID
      371 files created on larger OST 4
      Wrote -15% more files to larger OST 4
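      The verdict is computed from the before/after numbers above: per-OST
      free-space deltas plus created-file counts, with the final percentage
      taken from the file counts. A sketch of that arithmetic (variable
      names hypothetical):

        # Free-space deltas, original KB free minus final KB free:
        echo $((112592 - 113608))   # -1016: the smaller OST 5 actually gained space
        echo $((172240 - 166440))   # 5800: written to the larger OST 4
        # File counts: QOS is expected to place more files on the larger OST.
        SMALL=435; LARGE=371
        echo $((LARGE * 100 / SMALL - 100))   # -15: larger OST got 15% FEWER files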
      sanity test_116a: @@@@@@ IGNORE (bzstripe QOS didn't balance free space):
      Trace dump:
      = /usr/lib64/lustre/tests/test-framework.sh:4202:error_noexit()
      = /usr/lib64/lustre/tests/test-framework.sh:4243:error_ignore()
      = /usr/lib64/lustre/tests/sanity.sh:6659:test_116a()
      = /usr/lib64/lustre/tests/test-framework.sh:4483:run_one()
      = /usr/lib64/lustre/tests/test-framework.sh:4516:run_one_logged()
      = /usr/lib64/lustre/tests/test-framework.sh:4371:run_test()
      = /usr/lib64/lustre/tests/sanity.sh:6663:main()
      Dumping lctl log to /logdir/test_logs/2013-08-13/lustre-reviews-el6-x86_64-review-2_4_1_17301_-70153027810520-204146/sanity.test_116a.*.1376456900.log
      CMD: client-26vm1,client-26vm2.lab.whamcloud.com,client-26vm7,client-26vm8 /usr/sbin/lctl dk > /logdir/test_logs/2013-08-13/lustre-reviews-el6-x86_64-review-2_4_1_17301_-70153027810520-204146/sanity.test_116a.debug_log.\$(hostname -s).1376456900.log;
      dmesg > /logdir/test_logs/2013-08-13/lustre-reviews-el6-x86_64-review-2_4_1_17301_-70153027810520-204146/sanity.test_116a.dmesg.\$(hostname -s).1376456900.log
      Resetting fail_loc on all nodes...CMD: client-26vm1,client-26vm2.lab.whamcloud.com,client-26vm7,client-26vm8 lctl set_param -n fail_loc=0 2>/dev/null || true
      done.
      CMD: client-26vm1,client-26vm7,client-26vm8 rc=\$([ -f /proc/sys/lnet/catastrophe ] &&
      echo \$(< /proc/sys/lnet/catastrophe) || echo 0);
      if [ \$rc -ne 0 ]; then echo \$(hostname): \$rc; fi
      exit \$rc
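      The final block is the framework's per-node LNet health check; the
      dollar signs are escaped only because the command is shipped to remote
      shells. Unescaped, it reads:

        # /proc/sys/lnet/catastrophe is non-zero if LNet hit a fatal error.
        rc=$([ -f /proc/sys/lnet/catastrophe ] && echo $(< /proc/sys/lnet/catastrophe) || echo 0)
        if [ $rc -ne 0 ]; then echo $(hostname): $rc; fi
        exit $rc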

      Info required for matching: sanity 116a


    People

      Assignee: James Nunez (Inactive)
      Reporter: Maloo
