LU-18682: ost-pools/24 fails due to lack of space

Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Minor

    Description

      == ost-pools test 24: Independence of pool from other setstripe parameters ========================================================== 18:39:03 (1738089543)
      Pool lustre.testpool created
      OST lustre-OST0000_UUID added to pool lustre.testpool
      OST lustre-OST0001_UUID added to pool lustre.testpool
      OST lustre-OST0002_UUID added to pool lustre.testpool
      OST lustre-OST0003_UUID added to pool lustre.testpool
      total: 10 open/close in 0.03 seconds: 383.48 ops/second
      
       ost-pools test_24: @@@@@@ FAIL: Stripe count 4 not on /mnt/lustre/d24.ost-pools/dir1/f24.ost-pools0:3 
        Trace dump:
        = ./../tests/test-framework.sh:7225:error()
        = ost-pools.sh:1473:test_24()
        = ./../tests/test-framework.sh:7598:run_one()
        = ./../tests/test-framework.sh:7661:run_one_logged()
        = ./../tests/test-framework.sh:7479:run_test()
        = ost-pools.sh:1488:main()
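
      The check that fails is evidently a comparison of the requested stripe count against what the MDS actually allocated. A minimal sketch of such a check, assuming the standard lfs tool and the test-framework error() helper (path and counts taken from the error message above):

          # compare requested vs. actually allocated stripe count
          file=/mnt/lustre/d24.ost-pools/dir1/f24.ost-pools0
          actual=$($LFS getstripe -c $file)
          [[ $actual -eq 4 ]] ||
                  error "Stripe count 4 not on $file:$actual"

      With lustre-OST0000 out of space, the MDS can only allocate 3 of the 4 requested stripes, which matches the ":3" in the error message.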
      

      That's because the preceding test 23b consumes nearly all of the space on the two pool OSTs (a per-pool view of the free space is sketched after the log below):

      == ost-pools test 23b: OST pools and OOS ================= 18:38:43 (1738089523)
      running as uid/gid/euid/egid 500/500/500/500, groups: 500
       [true]
      running as uid/gid/euid/egid 500/500/500/500, groups: 500
       [touch] [/mnt/lustre/d0_runas_test/f1584]
      Pool lustre.testpool created
      OST lustre-OST0000_UUID added to pool lustre.testpool
      OST lustre-OST0003_UUID added to pool lustre.testpool
      OSTCOUNT=4, OSTSIZE=400000, AVAIL=566640
      MAXFREE=31457280, SLOW=no
      [1 iteration] dd: error writing '/mnt/lustre/d23b.ost-pools/dir/f23b.ost-pools-quota1': No space left on device
      548+0 records in
      547+0 records out
      573669376 bytes (574 MB, 547 MiB) copied, 5.72241 s, 100 MB/s
      total written: 5242880
      stime=1738089531, etime=1738089537, elapsed=6
      Filesystem                  Size  Used Avail Use% Mounted on
      /dev/root                   814M  814M     0 100% /
      devtmpfs                    3.0G     0  3.0G   0% /dev
      tmpfs                       3.0G     0  3.0G   0% /dev/shm
      tmpfs                       3.0G  344K  3.0G   1% /run
      tmpfs                       3.0G     0  3.0G   0% /sys/fs/cgroup
      none                        3.0G  1.1G  1.9G  37% /tmp
      /dev/vdb                     66M   66M     0 100% /mnt/build
      /dev/mapper/mds1_flakey     123M  3.0M  109M   3% /mnt/lustre-mds1
      /dev/mapper/mds2_flakey     123M  9.3M  102M   9% /mnt/lustre-mds2
      /dev/mapper/ost1_flakey     306M  279M  524K 100% /mnt/lustre-ost1
      /dev/mapper/ost2_flakey     306M  1.7M  278M   1% /mnt/lustre-ost2
      /dev/mapper/ost3_flakey     306M  1.7M  278M   1% /mnt/lustre-ost3
      /dev/mapper/ost4_flakey     306M  274M  5.7M  98% /mnt/lustre-ost4
      192.168.120.87@tcp:/lustre  1.2G  185M  748M  20% /mnt/lustre
      Destroy the created pools: testpool
      lustre.testpool
      OST lustre-OST0000_UUID removed from pool lustre.testpool
      OST lustre-OST0003_UUID removed from pool lustre.testpool
      Pool lustre.testpool destroyed
      PASS 23b (20s)
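
      The df output above shows what 23b leaves behind: the two pool members (ost1/OST0000 and ost4/OST0003) are at 100% and 98% while the other two OSTs are nearly empty. For a pool-level view of the same numbers, lfs df accepts a --pool filter, e.g.:

          # show free space only for the OSTs in the pool used by these tests
          lfs df -h --pool lustre.testpool /mnt/lustre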
      

      It then takes time for that space to be freed and for up-to-date statfs information to propagate to the MDS. Meanwhile lod_statfs_and_check() still sees -ENOSPC on OST0000 and marks it inactive, so only 3 of the 4 requested stripes can be allocated (a possible mitigation is sketched after the debug log):

      00020000:01000000:1.0:1738089547.035188:0:43790:0:(lod_qos.c:114:lod_statfs_and_check()) lustre-OST0000-osc-MDT0001: turns inactive: rc=-28 enospc    610
      00000004:00080000:1.0:1738089547.035282:0:43790:0:(osp_object.c:1637:osp_create()) lustre-OST0003-osc-MDT0001: Wrote last used FID: [0x340000402:0x55f9:0x0], index 3: 0
      00000004:00080000:1.0:1738089547.035285:0:43790:0:(osp_object.c:1637:osp_create()) lustre-OST0001-osc-MDT0001: Wrote last used FID: [0x2c0000402:0x55f6:0x0], index 1: 0
      00000004:00080000:1.0:1738089547.035287:0:43790:0:(osp_object.c:1637:osp_create()) lustre-OST0002-osc-MDT0001: Wrote last used FID: [0x300000402:0x553b:0x0], index 2: 0
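
      A possible mitigation (a sketch only, not a tested patch) is for test_24 to wait until the destroys from test 23b have completed and the OSTs report free space again before creating its files. test-framework.sh already provides wait_delete_completed; the polling loop and its 64 MB threshold below are illustrative assumptions:

          # drain pending OST object destroys so the MDS statfs data is current
          wait_delete_completed

          # then poll until every OST reports some minimum free space
          for i in $(seq 30); do
                  min=$($LFS df $MOUNT | awk '/OST/ { print $4 }' |
                        sort -n | head -1)
                  (( min > 65536 )) && break    # 64 MB, arbitrary threshold
                  sleep 1
          done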
      

          People

            Assignee: WC Triage (wc-triage)
            Reporter: Alex Zhuravlev (bzzz)