Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Fix Version/s: Lustre 2.4.0, Lustre 2.5.0, Lustre 2.6.0
- Labels: None
- Environment: b2_4 branch - 2.6.32-358.2.1 kernel
- Severity: 3
- Rank: 10008
Description
While testing mount problems with Lustre 2.4, I found a panic during shutdown:
[root@rhel6-64 utils]# ../utils/mkfs.lustre --reformat --mgs --mdt --fsname=wwtest --mgsnode=rhel6-64.shadowland@tcp --index=0 --quiet --backfstype=ldiskfs --param sys.timeout=300 --param lov.stripesize=1048576 --param lov.stripecount=1 --device-size=0 --verbose /dev/sdb1

   Permanent disk data:
Target:     wwtest:MDT0000
Index:      0
Lustre FS:  wwtest
Mount type: ldiskfs
Flags:      0x65
            (MDT MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=192.168.69.5@tcp sys.timeout=300 lov.stripesize=1048576 lov.stripecount=1

device size = 2048MB
formatting backing filesystem ldiskfs on /dev/sdb1
        target name  wwtest:MDT0000
        4k blocks    524540
        options      -J size=80 -I 512 -i 2048 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F
mkfs_cmd = mke2fs -j -b 4096 -L wwtest:MDT0000 -J size=80 -I 512 -i 2048 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F /dev/sdb1 524540
Writing CONFIGS/mountdata

[root@rhel6-64 utils]# ../utils/mkfs.lustre --reformat --mgs --mdt --fsname=wwtest --mgsnode=rhel6-64.shadowland@tcp --index=0 --quiet --backfstype=ldiskfs --param sys.timeout=300 --param lov.stripesize=1048576 --param lov.stripecount=1 --device-size=0 --verbose /dev/sdb1
^C
[root@rhel6-64 utils]# mount -t lustre -o nosvc,abort_recov,svname=rhel6-64.shadowland /dev/sdb1 /mnt/mdt
[root@rhel6-64 utils]# umount -f /mnt/mdt
Timeout, server not responding.
Console backtrace:
general protection fault: 0000 [#1] SMP
last sysfs file: /sys/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/target2:0:1/2:0:1:0/block/sdb/queue/max_sectors_kb
CPU 7
Modules linked in: ofd osp lod ost mdt osd_ldiskfs fsfilt_ldiskfs ldiskfs exportfs mdd mgs lquota jbd mgc fid fld ptlrpc obdclass lvfs ksocklnd lnet sha512_generic sha256_generic crc32c_intel libcfs nfs lockd auth_rpcgss nfs_acl sunrpc cachefiles fscache(T) ib_ipoib ib_cm ipv6 ib_uverbs ib_umad mlx4_ib ib_sa ib_mad ib_core mlx4_en mlx4_core dm_mirror dm_region_hash dm_log dm_mod ppdev vmware_balloon parport_pc parport vmxnet3 i2c_piix4 i2c_core sg shpchp ext4 mbcache jbd2 sd_mod crc_t10dif sr_mod cdrom vmw_pvscsi pata_acpi ata_generic ata_piix [last unloaded: lmv]
Pid: 10144, comm: umount Tainted: G --------------- T 2.6.32-358.2.1.el6 #0 VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform
RIP: 0010:[<ffffffffa056aacb>] [<ffffffffa056aacb>] lprocfs_remove_nolock+0x3b/0x100 [obdclass]
RSP: 0018:ffff880101e21a58 EFLAGS: 00010202
RAX: ffff88011954c100 RBX: 6b6b6b6b6b6b6b6b RCX: 0000000000000000
RDX: 0000000000000001 RSI: ffffffffa056acdd RDI: ffff880138742580
RBP: ffff880101e21a88 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: 0a6e776f64747568 R12: 6b6b6b6b6b6b6b6b
R13: ffff8801387426d8 R14: 6b6b6b6b6b6b6b6b R15: 0000000000000000
FS: 00007f77cb076740(0000) GS:ffff88002cc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f77ca7961b0 CR3: 0000000102102000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process umount (pid: 10144, threadinfo ffff880101e20000, task ffff88011954c100)
Stack:
ffffffffa056acdd ffff8801387426d8 ffff880138742580 ffff880101e21b28
ffff8801387426d8 ffff880101e21b28 ffff880101e21aa8 ffffffffa056ace5
ffff8801387424f0 ffff8801387424f0 ffff880101e21ad8 ffffffffa0a18cb0
Call Trace:
[<ffffffffa056acdd>] ? lprocfs_remove+0x1d/0x40 [obdclass]
[<ffffffffa056ace5>] lprocfs_remove+0x25/0x40 [obdclass]
[<ffffffffa0a18cb0>] qsd_fini+0x80/0x460 [lquota]
[<ffffffffa0bbd708>] osd_shutdown+0x38/0xe0 [osd_ldiskfs]
[<ffffffffa0bc52b9>] osd_device_fini+0x129/0x190 [osd_ldiskfs]
[<ffffffffa058b437>] class_cleanup+0x577/0xda0 [obdclass]
[<ffffffffa0560f6c>] ? class_name2dev+0x7c/0xe0 [obdclass]
[<ffffffffa058cd1c>] class_process_config+0x10bc/0x1c80 [obdclass]
[<ffffffffa03f5e58>] ? libcfs_log_return+0x28/0x40 [libcfs]
[<ffffffffa0586741>] ? lustre_cfg_new+0x391/0x7e0 [obdclass]
[<ffffffffa058da59>] class_manual_cleanup+0x179/0x6e0 [obdclass]
[<ffffffffa03f5e58>] ? libcfs_log_return+0x28/0x40 [libcfs]
[<ffffffffa0bc6404>] osd_obd_disconnect+0x174/0x1e0 [osd_ldiskfs]
[<ffffffffa058fa8e>] lustre_put_lsi+0x17e/0xe20 [obdclass]
[<ffffffffa05982a8>] lustre_common_put_super+0x5d8/0xc20 [obdclass]
[<ffffffffa05c12ba>] server_put_super+0x1ca/0xe60 [obdclass]
[<ffffffff811b136a>] ? invalidate_inodes+0xfa/0x180
[<ffffffff81196afb>] generic_shutdown_super+0x5b/0xe0
[<ffffffff81196be6>] kill_anon_super+0x16/0x60
[<ffffffff8119737f>] ? deactivate_super+0x4f/0x80
[<ffffffffa058f8b6>] lustre_kill_super+0x36/0x60 [obdclass]
[<ffffffff81197387>] deactivate_super+0x57/0x80
[<ffffffff811b5acf>] mntput_no_expire+0xbf/0x110
[<ffffffff811b654b>] sys_umount+0x7b/0x3a0
[<ffffffff81531b48>] ? lockdep_sys_exit_thunk+0x35/0x67
[<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Code: 1f 44 00 00 48 8b 1f 48 85 db 74 51 48 c7 07 00 00 00 00 4c 8b 73 48 4d 85 f6 75 0f e9 9a 00 00 00 0f 1f 80 00 00 00 00 4c 89 e3 <4c> 8b 63 50 4d 85 e4 75 f4 4c 8b 6b 08 4c 8b 63 48 4c 89 ef e8
RIP [<ffffffffa056aacb>] lprocfs_remove_nolock+0x3b/0x100 [obdclass]
Attachments
Issue Links
- is duplicated by LU-4385: replay-single test 61d causes oops in osd_device_fini() (Resolved)
Landed for 2.6