LU-18528: sanity-quota: test_68 fails with error 'Slave number 1 for qpool1 != 2'

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Affects Version/s: Lustre 2.17.0
    • Fix Version/s: Lustre 2.17.0
    • Severity: 3

    Description

      This issue was created by maloo for Serguei Smirnov <ssmirnov@ddn.com>

      This issue relates to the following test suite run: https://testing.whamcloud.com/test_sets/9bede733-7955-4bb4-9d2a-658b1d809f4f

      Test session details:
      clients: https://build.whamcloud.com/job/lustre-reviews/109529 - 4.18.0-553.27.1.el8_10.x86_64
      servers: https://build.whamcloud.com/job/lustre-reviews/109529 - 4.18.0-553.27.1.el8_lustre.x86_64
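
      The failing subtest can normally be re-run on its own with the standard Lustre
      test framework. A minimal sketch, assuming the tests are installed under
      /usr/lib64/lustre/tests and that a local test configuration (NAME=local) is
      already set up; adjust both to the actual environment:

      # Re-run only sanity-quota test_68 and keep a copy of the console output.
      cd /usr/lib64/lustre/tests
      ONLY=68 NAME=local bash ./sanity-quota.sh 2>&1 | tee /tmp/sanity-quota-68.log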

      CMD: trevis-98vm7 /usr/sbin/lctl get_param -n qmt.lustre-QMT0000.dt-qpool1.info
      Adding targets to pool
      CMD: trevis-98vm7 /usr/sbin/lctl pool_add lustre.qpool1 lustre-OST[0001-0001/1]
      trevis-98vm7: OST lustre-OST0001_UUID added to pool lustre.qpool1
      CMD: trevis-37vm4.trevis.whamcloud.com lctl get_param -n lov.lustre-*.pools.qpool1 |
      grep -e lustre-OST0001_UUID | sort -u | tr '\n' ' '
      Waiting 90s for 'lustre-OST0001_UUID '
      CMD: trevis-37vm4.trevis.whamcloud.com lctl get_param -n lov.lustre-*.pools.qpool1 |
      grep -e lustre-OST0001_UUID | sort -u | tr '\n' ' '
      CMD: trevis-98vm7 /usr/sbin/lctl get_param -n qmt.lustre-QMT0000.dt-qpool1.info
      Adding targets to pool
      CMD: trevis-98vm7 /usr/sbin/lctl pool_add lustre.qpool1 lustre-OST[0000-0001/1]
      trevis-98vm7: pool_add: lustre-OST0001_UUID is already in pool lustre.qpool1
      pdsh@trevis-37vm4: trevis-98vm7: ssh exited with exit code 17
      CMD: trevis-37vm4.trevis.whamcloud.com lctl get_param -n lov.lustre-*.pools.qpool1 |
      grep -e lustre-OST0000_UUID -e lustre-OST0001_UUID | sort -u | tr '\n' ' '
      Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
      CMD: trevis-37vm4.trevis.whamcloud.com lctl get_param -n lov.lustre-*.pools.qpool1 |
      grep -e lustre-OST0000_UUID -e lustre-OST0001_UUID | sort -u | tr '\n' ' '
      CMD: trevis-98vm7 /usr/sbin/lctl get_param -n qmt.lustre-QMT0000.dt-qpool1.info
      sanity-quota test_68: @@@@@@ FAIL: Slave number 1 for qpool1 != 2
      Trace dump:
      = /usr/lib64/lustre/tests/test-framework.sh:7229:error()
      = /usr/lib64/lustre/tests/sanity-quota.sh:5239:test_68()
      = /usr/lib64/lustre/tests/test-framework.sh:7602:run_one()
      = /usr/lib64/lustre/tests/test-framework.sh:7665:run_one_logged()
      = /usr/lib64/lustre/tests/test-framework.sh:7483:run_test()
      = /usr/lib64/lustre/tests/sanity-quota.sh:5256:main()
      Dumping lctl log to /autotest/autotest-2/2024-12-10/lustre-reviews_review-zfs_109529_2_b842700d-e466-4da3-9a24-c3fb25a6a3dc//sanity-quota.test_68.*.1733824711.log
      CMD: trevis-37vm4.trevis.whamcloud.com,trevis-37vm5,trevis-56vm2,trevis-98vm7 /usr/sbin/lctl dk > /autotest/autotest-2/2024-12-10/lustre-reviews_review-zfs_109529_2_b842700d-e466-4da3-9a24-c3fb25a6a3dc//sanity-quota.test_68.debug_log.\$(hostname -s).1733824711.log;
      dmesg > /autotest/autotest-2/2024-12-10/lustre-reviews_review-zfs_109529_2_b842700d-e466-4da3-9a24-c3fb25a6a3dc//sanity-quota.test_68.dmesg.\$(hostname -s).1733824711.log
      Destroy the created pools: qpool1
      CMD: trevis-98vm7 /usr/sbin/lctl pool_list lustre
      lustre.qpool1
      CMD: trevis-98vm7 /usr/sbin/lctl pool_list lustre.qpool1
      CMD: trevis-98vm7 lctl pool_remove lustre.qpool1 lustre-OST0001_UUID
      trevis-98vm7: OST lustre-OST0001_UUID removed from pool lustre.qpool1
      CMD: trevis-98vm7 lctl pool_remove lustre.qpool1 lustre-OST0000_UUID
      trevis-98vm7: OST lustre-OST0000_UUID removed from pool lustre.qpool1
      CMD: trevis-98vm7 lctl pool_list lustre.qpool1 | wc -l
      CMD: trevis-98vm7 lctl pool_destroy lustre.qpool1
      trevis-98vm7: Pool lustre.qpool1 destroyed
      CMD: trevis-37vm4.trevis.whamcloud.com lctl get_param -n lov.lustre-*.pools.qpool1 2>/dev/null || echo foo
      Delete files...
      Wait for unlink objects finished...
      sleep 5 for ZFS MDS
      Waiting for MDT destroys to complete
      CMD: trevis-98vm7 /usr/sbin/lctl get_param -n osp.*.destroys_in_flight
      CMD: trevis-98vm7 lctl set_param -n os[cd]*.*MDT*.force_sync=1
      CMD: trevis-56vm2 lctl set_param -n osd*.*OS*.force_sync=1
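
      What the log shows: the first pool_add puts lustre-OST0001 into qpool1, the
      second pool_add for lustre-OST[0000-0001] reports that OST0001 is already in
      the pool and exits with code 17 (EEXIST), and when test_68 then reads
      qmt.lustre-QMT0000.dt-qpool1.info on the MDS it still sees only 1 slave for
      the pool instead of the expected 2, which produces the failure message. The
      snippet below is a rough illustration only, not the actual sanity-quota.sh
      code: the helper name wait_qpool_slaves and the "slv_cnt" field it parses are
      assumptions.

      # Hypothetical helper: poll the QMT-side pool info until the reported
      # slave count for a quota pool reaches the expected value, instead of
      # sampling it once right after pool_add returns.
      wait_qpool_slaves() {
          local pool=$1
          local expected=$2
          local retries=${3:-30}
          local count=""
          local i

          for ((i = 0; i < retries; i++)); do
              # "slv_cnt" is an assumption about the qmt.*.dt-<pool>.info
              # output format; adjust the pattern to the real field name.
              count=$(lctl get_param -n "qmt.*.dt-${pool}.info" 2>/dev/null |
                      awk '/slv_cnt/ { print $2; exit }')
              [[ "$count" == "$expected" ]] && return 0
              sleep 1
          done
          echo "pool $pool: slave count '$count' != $expected" >&2
          return 1
      }

      # Usage (hypothetical):
      #   wait_qpool_slaves qpool1 2 || error "Slave number for qpool1 != 2"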

            People

              Assignee: Hongchao Zhang (hongchao.zhang)
              Reporter: Maloo (maloo)
              Votes: 0
              Watchers: 8
