LU-9820: sanity-scrub test_9 fails with "(9) Expected 'scanning' on mds*"

Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Minor
    • Fix Version/s: Lustre 2.15.0
    • Affects Version/s: Lustre 2.11.0, Lustre 2.10.3, Lustre 2.10.6, Lustre 2.10.7
    • Labels: None
    • Severity: 3

    Description

      sanity-scrub test 9 fails with

      sanity-scrub test_9: @@@@@@ FAIL: (9) Expected 'scanning' on mds3 
      

      From the test_log, the last thing we see is:

      CMD: trevis-8vm4 /usr/sbin/lctl lfsck_start -M lustre-MDT0000 -t scrub 8 -s 100 -r
      Started LFSCK on the device lustre-MDT0000: scrub
      CMD: trevis-8vm8 /usr/sbin/lctl lfsck_start -M lustre-MDT0001 -t scrub 8 -s 100 -r
      Started LFSCK on the device lustre-MDT0001: scrub
      CMD: trevis-8vm4 /usr/sbin/lctl lfsck_start -M lustre-MDT0002 -t scrub 8 -s 100 -r
      Started LFSCK on the device lustre-MDT0002: scrub
      CMD: trevis-8vm8 /usr/sbin/lctl lfsck_start -M lustre-MDT0003 -t scrub 8 -s 100 -r
      Started LFSCK on the device lustre-MDT0003: scrub
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0000.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0000.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm8 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0001.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm8 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0001.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      Waiting 6 secs for update
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      CMD: trevis-8vm4 /usr/sbin/lctl get_param -n 			osd-ldiskfs.lustre-MDT0002.oi_scrub |
      			awk '/^status/ { print \$2 }'
      Update not seen after 6s: wanted 'scanning' got 'completed'
       sanity-scrub test_9: @@@@@@ FAIL: (9) Expected 'scanning' on mds3 
      

      There are no obvious errors in the console and debug logs.
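
      For reference, each lfsck_start above launches a throttled OI scrub: "-t scrub" selects the scrub component, "-s 100" caps the scan rate at 100 objects per second, and "-r" resets any previous run. The test then polls each MDT's oi_scrub status, expecting to catch it in the 'scanning' state. A minimal standalone sketch of that poll, assuming direct shell access on the MDS and using lustre-MDT0002 as the device (the real test dispatches these commands remotely, as the CMD: lines show):

      #!/bin/bash
      # Hypothetical repro of the status poll in the log above; the device
      # name and the 6-second window are taken from the log, not the test.
      DEV=lustre-MDT0002

      scrub_status() {
              /usr/sbin/lctl get_param -n osd-ldiskfs.$DEV.oi_scrub |
                      awk '/^status/ { print $2 }'
      }

      for i in $(seq 6); do
              STATUS=$(scrub_status)
              [ "$STATUS" = "scanning" ] && exit 0
              sleep 1
      done
      echo "Update not seen after 6s: wanted 'scanning' got '$STATUS'"
      exit 1

      The failure mode matches the tail of the log: the scrub finishes within the polling window, so the loop only ever observes 'completed'.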

So far, this looks like it only impacts DNE Lustre configurations (review-dne-part-*): the scrub on MDT0002 reached 'completed' before the status check could catch it in 'scanning', despite the -s 100 speed limit. This test started to fail on June 2, 2017.
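
      For anyone trying to reproduce this locally, a sketch of running just this subtest against a four-MDT DNE setup with the standard lustre/tests framework (MDSCOUNT and ONLY are the usual framework variables; the four-MDT count matches the MDT0000-MDT0003 seen in the log, and paths assume a built Lustre tree):

      # Hypothetical invocation; adjust for the local test environment.
      cd lustre/tests
      MDSCOUNT=4 ONLY=9 bash sanity-scrub.sh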

Logs for these test failures are at:
      https://testing.hpdd.intel.com/test_sets/f934f42a-772a-11e7-8e04-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/6112f68a-76f1-11e7-8db2-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/57e14740-7626-11e7-9a53-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/f429dfd0-733e-11e7-8d7d-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/cd0041d8-6eac-11e7-a055-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/200eed24-6d32-11e7-a052-5254006e85c2
      https://testing.hpdd.intel.com/test_sets/76ef4bb6-6841-11e7-a74b-5254006e85c2


            People

              Assignee: Lai Siyao (laisiyao)
              Reporter: James Nunez (Inactive) (jamesanunez)
              Votes: 0
              Watchers: 18
