Lustre / LU-1510

Test failure on test suite sanity-quota, subtest test_18c


Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Minor

    Description

      This issue was created by maloo for Li Wei <liwei@whamcloud.com>

      This issue relates to the following test suite run: https://maloo.whamcloud.com/test_sets/2738888a-b423-11e1-a2dd-52540035b04c.

      The sub-test test_18c failed with the following error:

      test failed to respond and timed out

      Info required for matching: sanity-quota 18c

      From the OSS console log:

      13:24:39:Lustre: 11099:0:(lustre_log.h:471:llog_group_set_export()) lustre-OST0004: export for group 0 is changed: 0xffff88004f5cb400 -> 0xffff88004ca79400
      13:24:39:Lustre: 11099:0:(lustre_log.h:471:llog_group_set_export()) Skipped 13 previous similar messages
      13:24:39:Lustre: 11099:0:(llog_net.c:162:llog_receptor_accept()) changing the import ffff88006d407800 - ffff88006c891000
      13:24:39:Lustre: 11099:0:(llog_net.c:162:llog_receptor_accept()) Skipped 13 previous similar messages
      13:24:39:------------[ cut here ]------------
      13:24:39:kernel BUG at mm/slab.c:2833!
      13:24:39:invalid opcode: 0000 [#1] SMP 
      13:24:39:last sysfs file: /sys/devices/system/cpu/possible
      13:24:39:CPU 0 
      13:24:41:Modules linked in: obdfilter(U) fsfilt_ldiskfs(U) ost(U) mgc(U) ldiskfs(U) lustre(U) lquota(U) lov(U) osc(U) mdc(U) fid(U) fld(U) ksocklnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) nfs fscache sha512_generic sha256_generic jbd2 nfsd lockd nfs_acl auth_rpcgss exportfs autofs4 sunrpc ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_addr ipv6 ib_sa ib_mad ib_core microcode virtio_balloon 8139too 8139cp mii i2c_piix4 i2c_core ext3 jbd mbcache virtio_blk pata_acpi ata_generic ata_piix virtio_pci virtio_ring virtio dm_mirror dm_region_hash dm_log dm_mod [last unloaded: ldiskfs]
      13:24:41:
      13:24:41:Pid: 16415, comm: qslave_recovd Not tainted 2.6.32-220.17.1.el6_lustre.x86_64 #1 Red Hat KVM
      13:24:41:RIP: 0010:[<ffffffff8115e7b3>]  [<ffffffff8115e7b3>] cache_grow+0x313/0x320
      13:24:41:RSP: 0018:ffff88007c901ca0  EFLAGS: 00010002
      13:24:41:RAX: ffff88007f822480 RBX: ffff88007f800040 RCX: 0000000000000000
      13:24:42:RDX: 0000000000000000 RSI: 0000000000041252 RDI: ffff88007f800040
      13:24:42:RBP: ffff88007c901d00 R08: 0000000000000246 R09: 0000000000000000
      13:24:42:R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000041252
      13:24:42:R13: ffff88007f822440 R14: 000000000000003c R15: 0000000000000000
      13:24:42:FS:  00007f3efa85d700(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
      13:24:42:CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      13:24:42:CR2: 00007f8099d47000 CR3: 00000000379f0000 CR4: 00000000000006f0
      13:24:42:DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      13:24:44:DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      13:24:44:Process qslave_recovd (pid: 16415, threadinfo ffff88007c900000, task ffff880026ac14c0)
      13:24:44:Stack:
      13:24:44: ffff88007e376200 0000000000000000 0000000000000100 0000000000000101
      13:24:44:<0> ffff88006c665800 0000000000000000 ffff88007c901d60 ffff88007f800040
      13:24:44:<0> ffff88007faa0400 ffff88007f822440 000000000000003c ffff88007f822460
      13:24:44:Call Trace:
      13:24:44: [<ffffffff8115e9c2>] cache_alloc_refill+0x202/0x240
      13:24:44: [<ffffffffa0db1bf0>] ? cfs_alloc+0x30/0x60 [libcfs]
      13:24:44: [<ffffffff8115f6e9>] __kmalloc+0x1a9/0x220
      13:24:44: [<ffffffffa0db1bf0>] cfs_alloc+0x30/0x60 [libcfs]
      13:24:44: [<ffffffffa02e64b1>] lustre_get_qids+0x1d1/0x878 [fsfilt_ldiskfs]
      13:24:44: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:44: [<ffffffffa02dc70e>] fsfilt_ldiskfs_qids+0xe/0x10 [fsfilt_ldiskfs]
      13:24:44: [<ffffffffa08bce41>] qslave_recovery_main+0x1b1/0x580 [lquota]
      13:24:45: [<ffffffff810097cc>] ? __switch_to+0x1ac/0x320
      13:24:45: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:45: [<ffffffff8100c14a>] child_rip+0xa/0x20
      13:24:46: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:46: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:46: [<ffffffff8100c140>] ? child_rip+0x0/0x20
      13:24:46:Code: 0f 1f 84 00 00 00 00 00 49 8d 54 24 30 48 c7 c0 fc ff ff ff 48 89 55 c8 e9 e1 fe ff ff 0f 0b eb fe ba 01 00 00 00 e9 2a fe ff ff <0f> 0b eb fe 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 41 57 41 56 
      13:24:47:RIP  [<ffffffff8115e7b3>] cache_grow+0x313/0x320
      13:24:47: RSP <ffff88007c901ca0>
      13:24:47:---[ end trace 26c45d84a87d2dad ]---
      13:24:47:Kernel panic - not syncing: Fatal exception
      13:24:47:Pid: 16415, comm: qslave_recovd Tainted: G      D    ----------------   2.6.32-220.17.1.el6_lustre.x86_64 #1
      13:24:47:Call Trace:
      13:24:47: [<ffffffff814eccea>] ? panic+0x78/0x143
      13:24:47: [<ffffffff814f0e84>] ? oops_end+0xe4/0x100
      13:24:47: [<ffffffff8100f26b>] ? die+0x5b/0x90
      13:24:47: [<ffffffff814f0754>] ? do_trap+0xc4/0x160
      13:24:47: [<ffffffff8100ce35>] ? do_invalid_op+0x95/0xb0
      13:24:47: [<ffffffff8115e7b3>] ? cache_grow+0x313/0x320
      13:24:47: [<ffffffffa02df06d>] ? lustre_read_quota+0x6d/0xe0 [fsfilt_ldiskfs]
      13:24:47: [<ffffffff8100bedb>] ? invalid_op+0x1b/0x20
      13:24:48: [<ffffffff8115e7b3>] ? cache_grow+0x313/0x320
      13:24:48: [<ffffffff8115e9c2>] ? cache_alloc_refill+0x202/0x240
      13:24:48: [<ffffffffa0db1bf0>] ? cfs_alloc+0x30/0x60 [libcfs]
      13:24:48: [<ffffffff8115f6e9>] ? __kmalloc+0x1a9/0x220
      13:24:48: [<ffffffffa0db1bf0>] ? cfs_alloc+0x30/0x60 [libcfs]
      13:24:49: [<ffffffffa02e64b1>] ? lustre_get_qids+0x1d1/0x878 [fsfilt_ldiskfs]
      13:24:49: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:49: [<ffffffffa02dc70e>] ? fsfilt_ldiskfs_qids+0xe/0x10 [fsfilt_ldiskfs]
      13:24:49: [<ffffffffa08bce41>] ? qslave_recovery_main+0x1b1/0x580 [lquota]
      13:24:49: [<ffffffff810097cc>] ? __switch_to+0x1ac/0x320
      13:24:49: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:49: [<ffffffff8100c14a>] ? child_rip+0xa/0x20
      13:24:49: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:49: [<ffffffffa08bcc90>] ? qslave_recovery_main+0x0/0x580 [lquota]
      13:24:49: [<ffffffff8100c140>] ? child_rip+0x0/0x20
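      A plausible reading of the trace (editorial note, not from the ticket): `kernel BUG at mm/slab.c:2833` fires inside `cache_grow()`, which in 2.6.32-era kernels does `BUG_ON(flags & GFP_SLAB_BUG_MASK)` to reject kmalloc() calls whose gfp flags contain bits the slab allocator never accepts. The RSI/R12 value in the register dump, 0x41252, has `__GFP_HIGHMEM` set and otherwise looks like garbage, which would be consistent with an uninitialized or corrupted flags value reaching `__kmalloc()` via `cfs_alloc()` in `lustre_get_qids()`. A minimal userspace sketch of that sanity check (all constants mirror 2.6.32 `include/linux/gfp.h` and are assumptions, not values confirmed by this ticket):

      ```c
      /* Userspace sketch of the 2.6.32 slab-allocator gfp sanity check that the
       * oops above appears to have tripped.  Constants are assumptions mirroring
       * 2.6.32 include/linux/gfp.h, not values taken from the ticket. */
      #include <stdio.h>

      #define __GFP_HIGHMEM     0x02u
      #define __GFP_DMA32       0x04u
      #define __GFP_BITS_SHIFT  22
      #define __GFP_BITS_MASK   ((1u << __GFP_BITS_SHIFT) - 1)
      /* cache_grow() panics via BUG_ON(flags & GFP_SLAB_BUG_MASK) */
      #define GFP_SLAB_BUG_MASK (__GFP_DMA32 | __GFP_HIGHMEM | ~__GFP_BITS_MASK)

      /* Returns 1 if the slab allocator would accept these gfp flags. */
      static int slab_flags_ok(unsigned int flags)
      {
          return (flags & GFP_SLAB_BUG_MASK) == 0;
      }

      int main(void)
      {
          unsigned int gfp_kernel = 0xd0u;    /* GFP_KERNEL: __GFP_WAIT|IO|FS */
          unsigned int from_oops  = 0x41252u; /* RSI/R12 value in the trace  */

          printf("GFP_KERNEL (0x%x) passes: %d\n", gfp_kernel, slab_flags_ok(gfp_kernel));
          printf("0x%x       passes: %d\n", from_oops, slab_flags_ok(from_oops));
          return 0;
      }
      ```

      Under these assumed constants, 0x41252 fails the check because `__GFP_HIGHMEM` (0x2) is set, a modifier the slab allocator always rejects, which would make an uninitialized flags argument a candidate root cause for the panic.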
      

            People

              wc-triage WC Triage
              maloo Maloo