[LU-12406] conf-sanity test 111 fails with ‘add mds1 failed with new params’


Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: Lustre 2.13.0
    • Labels: None
    • Severity: 3

    Description

      conf-sanity test_111 fails with ‘add mds1 failed with new params’. Looking at the client test_log, we see that the problem is with the --mkfsoptions flag passed to the mkfs.lustre command:

      CMD: trevis-18vm11 mkfs.lustre --mgs --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype=ldiskfs --device-size=2400000 --mkfsoptions=\"-O large_dir -i 1048576 -O ea_inode -E lazy_itable_init\" --reformat /dev/mapper/mds1_flakey
      trevis-18vm11: mkfs.lustre: don't specify multiple -O options
      trevis-18vm11: 
      trevis-18vm11: mkfs.lustre FATAL: mkfs failed 22
      trevis-18vm11: mkfs.lustre: exiting with 22 (Invalid argument)
      
         Permanent disk data:
      Target:     lustre:MDT0000
      Index:      0
      Lustre FS:  lustre
      Mount type: ldiskfs
      Flags:      0x65
                    (MDT MGS first_time update )
      Persistent mount opts: user_xattr,errors=remount-ro
      Parameters: sys.timeout=20 mdt.identity_upcall=/usr/sbin/l_getidentity
      
      device size = 2048MB
       conf-sanity test_111: @@@@@@ FAIL: add mds1 failed with new params
      

      It looks like mkfs.lustre does not accept multiple ‘-O’ flags in --mkfsoptions. This may be an issue with how the test builds the --mkfsoptions list: instead of appending a second ‘-O’ option, the feature lists should be merged into a single comma-separated ‘-O’ value, as sketched below.
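
      As an illustration, here is a minimal shell sketch of merging duplicate ‘-O’ feature lists into a single option. The combine_O_opts helper is hypothetical, for illustration only; it is not the actual conf-sanity code:

      combine_O_opts() {
          local features="" out=""
          local args=( $1 )        # word-split the option string
          local i=0
          while [ $i -lt ${#args[@]} ]; do
              if [ "${args[$i]}" = "-O" ]; then
                  i=$((i + 1))
                  # fold each -O feature list into one comma-separated list
                  features="${features:+$features,}${args[$i]}"
              else
                  out="$out ${args[$i]}"
              fi
              i=$((i + 1))
          done
          echo "-O $features$out"
      }

      # the option string from the failing test_111 command:
      combine_O_opts "-O large_dir -i 1048576 -O ea_inode -E lazy_itable_init"
      # prints: -O large_dir,ea_inode -i 1048576 -E lazy_itable_init

      Passing the merged form would avoid the ‘don't specify multiple -O options’ error, since mke2fs accepts a comma-separated feature list in a single -O.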

      Looking at the client test_log, we also see that test_115 suffers from the same issue, but that test is skipped rather than failed:

      == conf-sanity test 115: Access large xattr with inodes number over 2TB ============================== 02:17:13 (1559787433)
      Stopping clients: trevis-18vm4.trevis.whamcloud.com,trevis-18vm5 /mnt/lustre (opts:)
      CMD: trevis-18vm4.trevis.whamcloud.com,trevis-18vm5 running=\$(grep -c /mnt/lustre' ' /proc/mounts);
      if [ \$running -ne 0 ] ; then
      echo Stopping client \$(hostname) /mnt/lustre opts:;
      …
      CMD: trevis-18vm11 mkfs.lustre --mgsnode=trevis-18vm11@tcp --fsname=lustre --mdt --index=0 --param=sys.timeout=20 --param=mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype=ldiskfs --device-size=200000 --mkfsoptions=\"-O ea_inode -E lazy_itable_init\" --device-size=3298534883328 --mkfsoptions='-O lazy_itable_init,ea_inode,^resize_inode,meta_bg -i 1024' --mgs --reformat /dev/loop0
      trevis-18vm11: 
      trevis-18vm11: mkfs.lustre FATAL: Unable to build fs /dev/loop0 (256)
      trevis-18vm11: 
      trevis-18vm11: mkfs.lustre FATAL: mkfs failed 256
      
         Permanent disk data:
      Target:     lustre:MDT0000
      Index:      0
      Lustre FS:  lustre
      Mount type: ldiskfs
      Flags:      0x65
                    (MDT MGS first_time update )
      Persistent mount opts: user_xattr,errors=remount-ro
      Parameters: mgsnode=10.9.4.221@tcp sys.timeout=20 mdt.identity_upcall=/usr/sbin/l_getidentity
      
      device size = 3145728MB
      formatting backing filesystem ldiskfs on /dev/loop0
      	target name   lustre:MDT0000
      	kilobytes     3221225472
      	options        -i 1024 -J size=4096 -I 1024 -q -O lazy_itable_init,ea_inode,^resize_inode,meta_bg,dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F
      mkfs_cmd = mke2fs -j -b 4096 -L lustre:MDT0000  -i 1024 -J size=4096 -I 1024 -q -O lazy_itable_init,ea_inode,^resize_inode,meta_bg,dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F /dev/loop0 3221225472k
         Invalid filesystem option set: lazy_itable_init,ea_inode,^resize_inode,meta_bg,dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg
      
       SKIP: conf-sanity test_115 format large MDT failed
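
      Since the "Invalid filesystem option set" error comes from mke2fs itself, the feature-set rejection can be checked independently of Lustre. A minimal sketch, assuming a scratch loop file (the path, size, and the reduced feature subset are made up for illustration):

      # create a small scratch file and try a combined feature list directly
      dd if=/dev/zero of=/tmp/mdt-feat-test.img bs=1M count=256
      mke2fs -F -b 4096 -O lazy_itable_init,ea_inode,^resize_inode,meta_bg \
          /tmp/mdt-feat-test.img
      # if this also fails, the e2fsprogs on the test node does not support
      # one of the requested features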
      

      It looks like this started failing on 2019-06-03 with Lustre 2.12.53.104. Logs for test sessions with this failure are at:
      https://testing.whamcloud.com/test_sets/6b65b0f4-8695-11e9-869c-52540065bddc
      https://testing.whamcloud.com/test_sets/34c0a1a0-8698-11e9-af1f-52540065bddc
      https://testing.whamcloud.com/test_sets/efc1357e-8895-11e9-8c65-52540065bddc


People

    Assignee: WC Triage
    Reporter: James Nunez (Inactive)
    Votes: 0
    Watchers: 3
