[LU-9040] Soft lockup on CPU during lfsck Created: 19/Jan/17  Updated: 08/Sep/17  Resolved: 09/May/17

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.9.0, Lustre 2.10.0
Fix Version/s: Lustre 2.10.0

Type: Bug Priority: Critical
Reporter: Cliff White (Inactive) Assignee: nasf (Inactive)
Resolution: Fixed Votes: 0
Labels: soak
Environment:

Soak test cluster. Lustre 2.9.0 GA Release


Attachments: Text File lfsck_namespace.txt     File lola-7.console.20170208.txt.gz     Text File lola-8.lfsck.hang.stacks.txt     File lola7.dmesg     File lu-9040-soak-8.console.log.txt.gz     File lustre-log.1486672102.6086.txt.gz     Text File vmcore-dmesg.txt    
Issue Links:
Related
is related to LU-9952 soft lockup in osd_inode_iteration() ... Resolved
is related to LU-7270 sanity-scrub test_9: panic and reboot... Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

This looks like a few bugs we've seen before, but not enough like any specific one.

  • Started soak
  • Started and completed router delay fault
  • LFSCK triggered.
  • LFSCK does not complete, runs > 1905s
    MGS/MDS0 has soft lockup on CPU.
    Jan 19 14:06:24 lola-8 kernel: BUG: soft lockup - CPU#9 stuck for 67s! [OI_scrub:19279]
    Jan 19 14:06:24 lola-8 kernel: Modules linked in: osp(U) mdd(U) lod(U) mdt(U) lfsck(U) mgs(U) mgc(U) osd_ldiskfs(U) lquota(U) lustre(U) lov(U) mdc(U) fid(U) lmv(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) sha512_generic crc32c_intel libcfs(U) ldiskfs(U) jbd2 8021q garp stp llc nfsd exportfs nfs lockd fscache auth_rpcgss nfs_acl sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm scsi_dh_rdac dm_round_robin dm_multipath microcode iTCO_wdt iTCO_vendor_support zfs(P)(U) zcommon(P)(U) znvpair(P)(U) spl(U) zlib_deflate zavl(P)(U) zunicode(P)(U) joydev sb_edac edac_core lpc_ich mfd_core i2c_i801 ioatdma sg igb dca i2c_algo_bit i2c_core ext3 jbd mbcache sd_mod crc_t10dif ahci isci libsas wmi mpt2sas scsi_transport_sas raid_class mlx4_ib ib_sa ib_mad ib_core ib_addr ipv6 mlx4_en ptp pps_core mlx4_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
    Jan 19 14:06:24 lola-8 kernel: CPU 9
    Jan 19 14:06:24 lola-8 kernel: Modules linked in: osp(U) mdd(U) lod(U) mdt(U) lfsck(U) mgs(U) mgc(U) osd_ldiskfs(U) lquota(U) lustre(U) lov(U) mdc(U) fid(U) lmv(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) sha512_generic crc32c_intel libcfs(U) ldiskfs(U) jbd2 8021q garp stp llc nfsd exportfs nfs lockd fscache auth_rpcgss nfs_acl sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm scsi_dh_rdac dm_round_robin dm_multipath microcode iTCO_wdt iTCO_vendor_support zfs(P)(U) zcommon(P)(U) znvpair(P)(U) spl(U) zlib_deflate zavl(P)(U) zunicode(P)(U) joydev sb_edac edac_core lpc_ich mfd_core i2c_i801 ioatdma sg igb dca i2c_algo_bit i2c_core ext3 jbd mbcache sd_mod crc_t10dif ahci isci libsas wmi mpt2sas scsi_transport_sas raid_class mlx4_ib ib_sa ib_mad ib_core ib_addr ipv6 mlx4_en ptp pps_core mlx4_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
    Jan 19 14:06:24 lola-8 kernel:
    Jan 19 14:06:24 lola-8 kernel: Pid: 19279, comm: OI_scrub Tainted: P           --L------------    2.6.32-573.26.1.el6_lustre.x86_64 #1 Intel Corporation S2600GZ ........../S2600GZ
    Jan 19 14:06:24 lola-8 kernel: RIP: 0010:[<ffffffffa10d1e2f>]  [<ffffffffa10d1e2f>] osd_scrub_exec+0xf/0x1dc0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: RSP: 0018:ffff88049472bc20  EFLAGS: 00000282
    Jan 19 14:06:24 lola-8 kernel: RAX: 0000000000000004 RBX: ffff88049472bd00 RCX: ffff8803ac48ced0
    Jan 19 14:06:24 lola-8 kernel: RDX: ffff88049472bda0 RSI: ffff880216078000 RDI: ffff88038a74d000
    Jan 19 14:06:24 lola-8 kernel: RBP: ffffffff8100bc0e R08: ffff88049472bdcf R09: 0000000000000004
    Jan 19 14:06:24 lola-8 kernel: R10: ffff8803ac48cec0 R11: ffff8803ac48ced0 R12: ffffffff81fe6920
    Jan 19 14:06:24 lola-8 kernel: R13: ffff8802a5151210 R14: ffff88038a74d000 R15: ffffffffa08549a8
    Jan 19 14:06:24 lola-8 kernel: FS:  0000000000000000(0000) GS:ffff88044e420000(0000) knlGS:0000000000000000
    Jan 19 14:06:24 lola-8 kernel: CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
    Jan 19 14:06:24 lola-8 kernel: CR2: 0000003903673b90 CR3: 0000000001a8d000 CR4: 00000000000407e0
    Jan 19 14:06:24 lola-8 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Jan 19 14:06:24 lola-8 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    Jan 19 14:06:24 lola-8 kernel: Process OI_scrub (pid: 19279, threadinfo ffff880494728000, task ffff880802c30ab0)
    Jan 19 14:06:24 lola-8 kernel: Stack:
    Jan 19 14:06:24 lola-8 kernel: ffff880216078000 ffff88038a74d000 ffffffffffffff02 ffffffffa10d3c9a
    Jan 19 14:06:24 lola-8 kernel: <d> 0000000000000010 0000000000000202 ffff88049472bc60 0000000000000018
    Jan 19 14:06:24 lola-8 kernel: <d> ffff8803ac48cee0 0000000000000001 ffff8802160790e0 ffff880216079000
    Jan 19 14:06:24 lola-8 kernel: Call Trace:
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d3c9a>] ? osd_scrub_next+0xba/0x4b0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d6ec9>] ? osd_inode_iteration+0x4c9/0xd80 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff81539b0e>] ? thread_return+0x4e/0x7d0
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d1e20>] ? osd_scrub_exec+0x0/0x1dc0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d3be0>] ? osd_scrub_next+0x0/0x4b0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d8905>] ? osd_scrub_main+0x885/0xec0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff81067650>] ? default_wake_function+0x0/0x20
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d8080>] ? osd_scrub_main+0x0/0xec0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff810a138e>] ? kthread+0x9e/0xc0
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff8100c28a>] ? child_rip+0xa/0x20
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff810a12f0>] ? kthread+0x0/0xc0
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff8100c280>] ? child_rip+0x0/0x20
    Jan 19 14:06:24 lola-8 kernel: Code: 00 00 00 e8 54 fe 7b ff e9 14 ff ff ff 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 48 81 ec e0 00 00 00 48 89 5d d8 <4c> 89 65 e0 4c 89 6d e8 4c 89 75 f0 4c 89 7d f8 0f 1f 44 00 00
    Jan 19 14:06:24 lola-8 kernel: Code: 00 00 00 e8 54 fe 7b ff e9 14 ff ff ff 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 48 81 ec e0 00 00 00 48 89 5d d8 <4c> 89 65 e0 4c 89 6d e8 4c 89 75 f0 4c 89 7d f8 0f 1f 44 00 00
    Jan 19 14:06:24 lola-8 kernel: Call Trace:
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d3c9a>] ? osd_scrub_next+0xba/0x4b0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d6ec9>] ? osd_inode_iteration+0x4c9/0xd80 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff81539b0e>] ? thread_return+0x4e/0x7d0
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d1e20>] ? osd_scrub_exec+0x0/0x1dc0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d3be0>] ? osd_scrub_next+0x0/0x4b0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d8905>] ? osd_scrub_main+0x885/0xec0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff81067650>] ? default_wake_function+0x0/0x20
    Jan 19 14:06:24 lola-8 kernel: [<ffffffffa10d8080>] ? osd_scrub_main+0x0/0xec0 [osd_ldiskfs]
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff810a138e>] ? kthread+0x9e/0xc0
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff8100c28a>] ? child_rip+0xa/0x20
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff810a12f0>] ? kthread+0x0/0xc0
    Jan 19 14:06:24 lola-8 kernel: [<ffffffff8100c280>] ? child_rip+0x0/0x20
    

    Attempting to get a stack dump, but it is not going well. The system is available if anyone wishes to have a look.



 Comments   
Comment by Cliff White (Inactive) [ 19/Jan/17 ]

System crashed finally. Crash dump available on Lola:/scratch/ldev-572
vmcore dmesg attached

Comment by Joseph Gmitter (Inactive) [ 20/Jan/17 ]

Hi Fan Yong,

Would you be able to have a look at this issue?

Thanks.
Joe

Comment by Cliff White (Inactive) [ 23/Jan/17 ]

Re-tested with tip of master lustre: 2.9.51_45_g3b3eeeb - exact same problem, full soft lockup of lola-8. Will leave the system in this state until further notice.

Comment by Gerrit Updater [ 26/Jan/17 ]

Fan Yong (fan.yong@intel.com) uploaded a new patch: https://review.whamcloud.com/25105
Subject: LU-9040 scrub: handle group boundary properly
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 2c5c849af65294b759e12cf986daecf58fa296d6

Comment by nasf (Inactive) [ 04/Feb/17 ]

I just filed a test fail on master as LU-7270. On further inspection I'm thinking it may be an instance of this bug, LU-9040. Since you own this can you take a look please?
https://testing.hpdd.intel.com/test_sets/c9714ef0-ea3a-11e6-9a08-5254006e85c2

I think you are right. It seems to be another instance of LU-9040.

Comment by nasf (Inactive) [ 05/Feb/17 ]

+1 on master:
https://testing.hpdd.intel.com/test_sets/d4fd6074-eb1b-11e6-af25-5254006e85c2

Comment by nasf (Inactive) [ 05/Feb/17 ]

+1 on master:
https://testing.hpdd.intel.com/test_sets/c231824c-eb14-11e6-b844-5254006e85c2

Comment by Cliff White (Inactive) [ 08/Feb/17 ]

2017-02-05 13:22 - soak test started

Lola-7 failover succeeded:
2017-02-07 05:24:05,171:fsmgmt.fsmgmt:INFO oss_failover just completed
Some time after recovery, lola-7 completely locks up with timeouts.
System is too busy to send messages to syslog.
Final syslog message:

Feb  7 10:07:08 lola-7 kernel: Lustre: soaked-OST0005: Client 87f1907c-fc63-c971-77fc-7c3cea8d3e83 (at 192.168.1.113@o2ib100) reconnecting
Feb  7 10:07:08 lola-7 kernel: Lustre: Skipped 1 previous similar message

The system was completely wedged; attempted to dump stacks via the console and take a crash dump, but the system was far too busy.

Comment by Cliff White (Inactive) [ 08/Feb/17 ]

Full console log attached.

Comment by Cliff White (Inactive) [ 09/Feb/17 ]

With LFSCK running, we are still seeing timeouts and lustre-log dumps on the MDS:

Feb  9 12:05:17 lola-8 kernel: LustreError: 5992:0:(osp_precreate.c:914:osp_precreate_cleanup_orphans()) soaked-OST0011-osc-MDT0000: cannot cleanup orphans: rc = -22
Feb  9 12:05:17 lola-8 kernel: LustreError: 5992:0:(osp_precreate.c:914:osp_precreate_cleanup_orphans()) Skipped 1799 previous similar messages
Feb  9 12:15:17 lola-8 kernel: LustreError: 5992:0:(osp_precreate.c:914:osp_precreate_cleanup_orphans()) soaked-OST0011-osc-MDT0000: cannot cleanup orphans: rc = -22
Feb  9 12:15:17 lola-8 kernel: LustreError: 5992:0:(osp_precreate.c:914:osp_precreate_cleanup_orphans()) Skipped 1799 previous similar messages
Feb  9 12:25:17 lola-8 kernel: LustreError: 5968:0:(osp_precreate.c:914:osp_precreate_cleanup_orphans()) soaked-OST0005-osc-MDT0000: cannot cleanup orphans: rc = -22
Feb  9 12:25:17 lola-8 kernel: LustreError: 5968:0:(osp_precreate.c:914:osp_precreate_cleanup_orphans()) Skipped 1797 previous similar messages
Feb  9 12:28:22 lola-8 kernel: LNet: Service thread pid 6086 was inactive for 400.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Feb  9 12:28:22 lola-8 kernel: LNet: Service thread pid 6086 completed after 400.00s. This indicates the system was overloaded (too many service threads, or there were not enough hardware resources).  
Feb  9 12:28:22 lola-8 kernel: Call Trace:
Feb  9 12:28:22 lola-8 kernel: [<ffffffff811c91ec>] ? __getblk+0x2c/0x2a0
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa10131b4>] ? do_get_write_access+0x3b4/0x520 [jbd2]
Feb  9 12:28:22 lola-8 kernel: [<ffffffff8153a9b2>] schedule_timeout+0x192/0x2e0
Feb  9 12:28:22 lola-8 kernel: [<ffffffff81089fa0>] ? process_timeout+0x0/0x10
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa14561a6>] osp_precreate_reserve+0x406/0x8e0 [osp]
Feb  9 12:28:22 lola-8 kernel: [<ffffffff81067650>] ? default_wake_function+0x0/0x20
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa144f59c>] osp_declare_object_create+0x19c/0x6a0 [osp]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa138aff5>] lod_sub_object_declare_create+0xd5/0x260 [lod]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa138314e>] lod_qos_declare_object_on+0xce/0x3e0 [lod]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa1386922>] lod_alloc_rr.clone.1+0x852/0xea0 [lod]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa10f2cf9>] ? osd_declare_qid+0x299/0x4d0 [osd_ldiskfs]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa1387d62>] lod_qos_prep_create+0xdf2/0x15b0 [lod]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa137fe82>] lod_declare_striped_object+0x162/0x820 [lod]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa13807a1>] lod_declare_object_create+0x261/0x570 [lod]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa13f72f6>] mdd_declare_object_create_internal+0x116/0x340 [mdd]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa13e6a56>] mdd_declare_create+0x56/0x790 [mdd]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa13f028b>] mdd_create+0x7db/0x17e0 [mdd]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa12bd4a8>] mdo_create+0x18/0x50 [mdt]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa12c607f>] mdt_reint_open+0x226f/0x3230 [mdt]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa096f2bc>] ? upcall_cache_get_entry+0x29c/0x880 [obdclass]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0974440>] ? lu_ucred+0x20/0x30 [obdclass]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa1291745>] ? mdt_ucred+0x15/0x20 [mdt]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa12af01d>] mdt_reint_rec+0x5d/0x200 [mdt]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa129c08b>] mdt_reint_internal+0x62b/0xa50 [mdt]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa129c6a6>] mdt_intent_reint+0x1f6/0x440 [mdt]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa129ac0e>] mdt_intent_policy+0x4be/0xc70 [mdt]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0b0c6d7>] ldlm_lock_enqueue+0x127/0x990 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0b3851b>] ldlm_handle_enqueue0+0x98b/0x14e0 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0bacbd1>] ? tgt_lookup_reply+0x31/0x190 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0bbe131>] tgt_enqueue+0x61/0x230 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0bbf2dc>] tgt_request_handle+0x8ec/0x1440 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0b6a7eb>] ptlrpc_server_handle_request+0x2eb/0xbd0 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa082bf0a>] ? lc_watchdog_touch+0x7a/0x190 [libcfs]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0b65859>] ? ptlrpc_wait_event+0xa9/0x2e0 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffff81067650>] ? default_wake_function+0x0/0x20
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0b6bb91>] ptlrpc_main+0xac1/0x18d0 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffffa0b6b0d0>] ? ptlrpc_main+0x0/0x18d0 [ptlrpc]
Feb  9 12:28:22 lola-8 kernel: [<ffffffff810a138e>] kthread+0x9e/0xc0
Feb  9 12:28:22 lola-8 kernel: [<ffffffff8100c28a>] child_rip+0xa/0x20
Feb  9 12:28:22 lola-8 kernel: [<ffffffff810a12f0>] ? kthread+0x0/0xc0
Feb  9 12:28:22 lola-8 kernel: [<ffffffff8100c280>] ? child_rip+0x0/0x20
Feb  9 12:28:22 lola-8 kernel:
Feb  9 12:28:22 lola-8 kernel: LustreError: dumping log to /tmp/lustre-log.1486672102.6086

Attaching lustre-log and lfsck_namespace to the ticket

Comment by nasf (Inactive) [ 10/Feb/17 ]

According to the log lfsck_namespace.txt, the LFSCK has been working well so far.

As for the inactive thread at "osp_precreate_reserve+0x406", it was blocked at:

(gdb) l *osp_precreate_reserve+0x406
0x13306 is in osp_precreate_reserve (/root/Work/Lustre/snapshot_master_2/lustre-release/lustre/osp/osp_precreate.c:1466).
1461			if (cfs_time_aftereq(cfs_time_current(), expire)) {
1462				rc = -ETIMEDOUT;
1463				break;
1464			}
1465	
1466			l_wait_event(d->opd_pre_user_waitq,
1467				     osp_precreate_ready_condition(env, d), &lwi);
1468		}
1469	
1470		RETURN(rc);
(gdb)

That means the create operation is blocked because of OST precreate trouble. On the other hand, there are a lot of error messages in the log lustre-log.1486672102.6086.txt like the following:

914463:00000004:00080000:0.0:1486672102.577507:0:5968:0:(osp_precreate.c:856:osp_precreate_cleanup_orphans()) soaked-OST0005-osc-MDT0000: going to cleanup orphans since [0x100050000:0x1db1b81:0x0]
914562:00000004:00020000:1.0:1486672102.577818:0:5968:0:(osp_precreate.c:914:osp_precreate_cleanup_orphans()) soaked-OST0005-osc-MDT0000: cannot cleanup orphans: rc = -22

So there seems to be some OST-side trouble; the related OSTs are OST0005/OST000b/OST0011. We need the logs from those OSTs.

Comment by nasf (Inactive) [ 10/Feb/17 ]

A lot of threads on lola-7 are locked up:

Pid: 12981, comm: kiblnd_sd_01_00 Tainted: P           -- ------------    2.6.32-573.26.1.el6_lustre.x86_64 #1 Intel Corporation S2600GZ ........../S2600GZ^M
RIP: 0010:[<ffffffff8153cfde>]  [<ffffffff8153cfde>] _spin_lock+0x1e/0x30^M
RSP: 0018:ffff88082ad2bc50  EFLAGS: 00000297^M
RAX: 0000000000003c32 RBX: ffff88082ad2bc50 RCX: 0000000000000000^M
RDX: 0000000000003c31 RSI: 0000000000000001 RDI: ffff880836869200^M
RBP: ffffffff8100bc0e R08: 0000000000000002 R09: 00f0933708000000^M
R10: 0000000000000002 R11: 0000000000000000 R12: 0000000000000002^M
R13: 00f0933708000000 R14: 0000000000000002 R15: 0000000000000000^M
FS:  0000000000000000(0000) GS:ffff88044e400000(0000) knlGS:0000000000000000^M
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b^M
CR2: 0000000003680298 CR3: 0000000001a8d000 CR4: 00000000000407e0^M
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000^M
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400^M
Process kiblnd_sd_01_00 (pid: 12981, threadinfo ffff88082ad28000, task ffff880837faf520)^M
Stack:^M
 ffff88082ad2bc90 ffffffffa080d7ab ffff88082ad2bc80 ffffffffa080d6c4^M
<d> ffff880707c42600 ffff880707c42600 ffff880826354000 ffff88041a4f3000^M
<d> ffff88082ad2bcd0 ffffffffa086f802 ffff880707c42600 ffff880707c42600^M
Call Trace:^M
 [<ffffffffa080d7ab>] ? cfs_percpt_lock+0x5b/0x110 [libcfs]^M
 [<ffffffffa080d6c4>] ? cfs_percpt_unlock+0x24/0xb0 [libcfs]^M
 [<ffffffffa086f802>] ? lnet_return_tx_credits_locked+0x2a2/0x4d0 [lnet]^M
 [<ffffffffa08631f1>] ? lnet_msg_decommit+0x2a1/0x710 [lnet]^M
 [<ffffffffa0864090>] ? lnet_finalize+0x1e0/0x730 [lnet]^M
 [<ffffffffa0d75be2>] ? kiblnd_tx_done+0x152/0x430 [ko2iblnd]^M
 [<ffffffff8105e4e3>] ? __wake_up+0x53/0x70^M
 [<ffffffffa0d7bfd5>] ? kiblnd_scheduler+0x9a5/0x1020 [ko2iblnd]^M
 [<ffffffff81067650>] ? default_wake_function+0x0/0x20^M
 [<ffffffffa0d7b630>] ? kiblnd_scheduler+0x0/0x1020 [ko2iblnd]^M
 [<ffffffff810a138e>] ? kthread+0x9e/0xc0^M
 [<ffffffff8100c28a>] ? child_rip+0xa/0x20^M
 [<ffffffff810a12f0>] ? kthread+0x0/0xc0^M
 [<ffffffff8100c280>] ? child_rip+0x0/0x20^M

It looks similar to LU-8334, not related to the original OI scrub trouble.

Comment by nasf (Inactive) [ 10/Feb/17 ]

I have collected the latest logs on lola-7, the host for OST0005/OST000b/OST0011. There are a lot of messages like the following:

LustreError: 43808:0:(ofd_dev.c:1688:ofd_create_hdl()) soaked-OST0005: invalid precreate request for 0x0:31136641, last_id 31142145. Likely MDS last_id corruption
LustreError: 43808:0:(ofd_dev.c:1688:ofd_create_hdl()) Skipped 1792 previous similar messages
LustreError: 25522:0:(ofd_dev.c:1688:ofd_create_hdl()) soaked-OST000b: invalid precreate request for 0x0:31160865, last_id 31167393. Likely MDS last_id corruption
LustreError: 25522:0:(ofd_dev.c:1688:ofd_create_hdl()) Skipped 1796 previous similar messages
LustreError: 44232:0:(ofd_dev.c:1688:ofd_create_hdl()) soaked-OST0011: invalid precreate request for 0x0:31206785, last_id 31208705. Likely MDS last_id corruption
LustreError: 44232:0:(ofd_dev.c:1688:ofd_create_hdl()) Skipped 1792 previous similar messages

Corresponding to the code:

static int ofd_create_hdl(struct tgt_session_info *tsi)
{
...
        /* former ofd_handle_precreate */
        if ((oa->o_valid & OBD_MD_FLFLAGS) &&
            (oa->o_flags & OBD_FL_DELORPHAN)) {
                exp->exp_filter_data.fed_lastid_gen = ofd->ofd_lastid_gen;
...
        } else {
...
                        if (diff < 0) {
                                /* LU-5648 */
                                CERROR("%s: invalid precreate request for "
                                       DOSTID", last_id %llu. "
                                       "Likely MDS last_id corruption\n",
                                       ofd_name(ofd), POSTID(&oa->o_oi),
(1688)==>                                       ofd_seq_last_oid(oseq));
                                GOTO(out, rc = -EINVAL);
                        }
...
}
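
For reference, plugging in the values logged above for OST0005: the requested object ID is 31136641 while last_id is 31142145, so diff = 31136641 - 31142145 = -5504 < 0, and the request is rejected with -EINVAL, which is the rc = -22 seen in the MDS osp_precreate_cleanup_orphans() messages.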

That means the OSP does NOT set the "OBD_FL_DELORPHAN" flag when it calls osp_precreate_cleanup_orphans(). Related changes were introduced recently (2017-01-25) by the patch https://review.whamcloud.com/25079.
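
To make the control flow concrete, here is a small standalone sketch of the check quoted above (illustrative only; the flag values, struct, and function names below are placeholders, not the real Lustre definitions): a precreate request whose object ID is behind last_id is only tolerated when it is marked as orphan cleanup, otherwise it falls into the "invalid precreate request" branch.

#include <stdio.h>
#include <stdint.h>

#define MD_FLFLAGS   0x1ULL	/* placeholder for OBD_MD_FLFLAGS */
#define FL_DELORPHAN 0x2ULL	/* placeholder for OBD_FL_DELORPHAN */

struct req { uint64_t o_valid, o_flags, o_id; };

/* mirrors the branch structure of the ofd_create_hdl() excerpt above */
static int handle_precreate(const struct req *oa, uint64_t last_id)
{
	if ((oa->o_valid & MD_FLFLAGS) && (oa->o_flags & FL_DELORPHAN)) {
		printf("orphan cleanup request accepted\n");
		return 0;
	}
	if ((int64_t)(oa->o_id - last_id) < 0) {
		printf("invalid precreate request for %llu, last_id %llu\n",
		       (unsigned long long)oa->o_id,
		       (unsigned long long)last_id);
		return -22;	/* -EINVAL, matching the rc = -22 on the MDS */
	}
	return 0;
}

int main(void)
{
	/* IDs taken from the OST0005 log above; no DELORPHAN flag set,
	 * so the request is rejected and the program exits non-zero */
	struct req oa = { .o_valid = 0, .o_flags = 0, .o_id = 31136641ULL };
	return handle_precreate(&oa, 31142145ULL) ? 1 : 0;
}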

Alex, would you please check the related patch? Thanks!

Comment by Alex Zhuravlev [ 10/Feb/17 ]

it does not set DELORPHAN as there is no reason to delete anything (IMO), we just need to recreate lost objects.
it's not clear how the OST could get ahead, as it's the MDT telling it how far to go..

Comment by Alex Zhuravlev [ 10/Feb/17 ]

Cliff, can you attach OST000b log please?

Comment by Gerrit Updater [ 10/Feb/17 ]

Fan Yong (fan.yong@intel.com) uploaded a new patch: https://review.whamcloud.com/25370
Subject: LU-9040 ofd: keep orphan OST-objects conditionally
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 92f00817a67e941eb131d8ea86027a677cc2ec28

Comment by Cliff White (Inactive) [ 10/Feb/17 ]

I believe the lola-7 log (OST000b) was already uploaded - should I try the latest patch?

Comment by nasf (Inactive) [ 11/Feb/17 ]

The patch https://review.whamcloud.com/25370 may not be the perfect solution for the latest issue, but until we have a more reasonable solution we can use it in the new build. On the other hand, the original patch https://review.whamcloud.com/25105 fixes a different issue. So both should be used in the new build.

Comment by Gerrit Updater [ 15/Feb/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/25105/
Subject: LU-9040 scrub: handle group boundary properly
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 1e42682aeadb0f80dd6e9bfd45abe17221b698bf

Comment by Cliff White (Inactive) [ 15/Feb/17 ]

Attaching console logs with stack dumps

Comment by nasf (Inactive) [ 18/Feb/17 ]

The new hang on the MDT is different from the former one. The threads on the MDT hung at two places; one is at "lod_qos_prep_create+0xe12":

(gdb) l *lod_qos_prep_create+0xe12
0x2b232 is in lod_qos_prep_create (/root/Work/Lustre/L95/lustre-release/lustre/lod/lod_qos.c:279).
274		if (cfs_time_beforeq_64(max_age, obd->obd_osfs_age))
275			/* statfs data are quite recent, don't need to refresh it */
276			RETURN_EXIT;
277	
278		down_write(&lod->lod_qos.lq_rw_sem);
279		if (cfs_time_beforeq_64(max_age, obd->obd_osfs_age))
280			goto out;
281	
282		for (i = 0; i < osts->op_count; i++) {
283			idx = osts->op_array[i];

That means lod_qos_statfs_update() was blocked at "down_write(&lod->lod_qos.lq_rw_sem);" because some other threads had taken that semaphore in read mode via lod_alloc_rr(). Unfortunately, those threads were themselves blocked at "osp_precreate_reserve+0x406" (a minimal illustration of this blocking chain follows the listing below):

(gdb) l *osp_precreate_reserve+0x406
0x131d6 is in osp_precreate_reserve (/root/Work/Lustre/L95/lustre-release/lustre/osp/osp_precreate.c:1457).
1452			if (cfs_time_aftereq(cfs_time_current(), expire)) {
1453				rc = -ETIMEDOUT;
1454				break;
1455			}
1456	
1457			l_wait_event(d->opd_pre_user_waitq,
1458				     osp_precreate_ready_condition(env, d), &lwi);
1459		}
1460	
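
To illustrate the blocking chain described above, here is a minimal userspace sketch (an analogy only, not Lustre code; all names below are made up; build with cc -pthread): a writer, standing in for lod_qos_statfs_update(), waits on a rwlock held by readers, standing in for lod_alloc_rr(), and those readers are themselves waiting for an external event, standing in for OST precreate, so the writer stalls for as long as precreate cannot make progress.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t lq_rw_sem = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int precreate_ready;	/* stands in for osp_precreate_ready_condition() */

static void *reader(void *arg)	/* stands in for lod_alloc_rr() */
{
	pthread_rwlock_rdlock(&lq_rw_sem);	/* read lock taken ... */
	pthread_mutex_lock(&lock);
	while (!precreate_ready)		/* ... then wait, like osp_precreate_reserve() */
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
	pthread_rwlock_unlock(&lq_rw_sem);
	return NULL;
}

static void *writer(void *arg)	/* stands in for lod_qos_statfs_update() */
{
	pthread_rwlock_wrlock(&lq_rw_sem);	/* blocked until every reader unlocks */
	puts("statfs update ran");
	pthread_rwlock_unlock(&lq_rw_sem);
	return NULL;
}

int main(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, reader, NULL);
	sleep(1);
	pthread_create(&w, NULL, writer, NULL);
	sleep(1);			/* the writer is now stuck behind the reader */
	pthread_mutex_lock(&lock);
	precreate_ready = 1;		/* "precreate" completes, everything unwinds */
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return 0;
}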

That means the precreate was blocked for some reason. I found that lola-6, one of the OSS nodes, fell into a soft lockup, with a lot of messages like the following:

BUG: soft lockup - CPU#20 stuck for 67s! [ldlm_cn00_013:21784]
Modules linked in: osp(U) ofd(U) lfsck(U) ost(U) mgc(U) osd_zfs(U) lquota(U) lustre(U) lov(U) mdc(U) fid(U) lmv(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) sha512_generic crc32c_intel libcfs(U) nfsd exportfs nfs lockd fscache auth_rpcgss nfs_acl sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm dm_round_robin dm_multipath microcode iTCO_wdt iTCO_vendor_support zfs(P)(U) raid1 zcommon(P)(U) znvpair(P)(U) spl(U) zlib_deflate zavl(P)(U) zunicode(P)(U) sb_edac edac_core lpc_ich mfd_core joydev i2c_i801 ioatdma ses enclosure sg igb dca i2c_algo_bit i2c_core ext3 jbd mbcache sd_mod crc_t10dif ahci wmi isci libsas mpt2sas scsi_transport_sas raid_class mlx4_ib ib_sa ib_mad ib_core ib_addr ipv6 mlx4_en ptp pps_core mlx4_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
CPU 20 
Modules linked in: osp(U) ofd(U) lfsck(U) ost(U) mgc(U) osd_zfs(U) lquota(U) lustre(U) lov(U) mdc(U) fid(U) lmv(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) sha512_generic crc32c_intel libcfs(U) nfsd exportfs nfs lockd fscache auth_rpcgss nfs_acl sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm dm_round_robin dm_multipath microcode iTCO_wdt iTCO_vendor_support zfs(P)(U) raid1 zcommon(P)(U) znvpair(P)(U) spl(U) zlib_deflate zavl(P)(U) zunicode(P)(U) sb_edac edac_core lpc_ich mfd_core joydev i2c_i801 ioatdma ses enclosure sg igb dca i2c_algo_bit i2c_core ext3 jbd mbcache sd_mod crc_t10dif ahci wmi isci libsas mpt2sas scsi_transport_sas raid_class mlx4_ib ib_sa ib_mad ib_core ib_addr ipv6 mlx4_en ptp pps_core mlx4_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]

Pid: 21784, comm: ldlm_cn00_013 Tainted: P           --L------------    2.6.32-573.26.1.el6_lustre.x86_64 #1 Intel Corporation S2600GZ ........../S2600GZ
RIP: 0010:[<ffffffff8153cfe1>]  [<ffffffff8153cfe1>] _spin_lock+0x21/0x30
RSP: 0018:ffff880206b4bab0  EFLAGS: 00000297
RAX: 0000000000009bbf RBX: ffff880206b4bab0 RCX: 0000000000000000
RDX: 0000000000009ba1 RSI: 0000000000000001 RDI: ffff8808341b5340
RBP: ffffffff8100bc0e R08: 0b90000000000000 R09: 5c80000000000000
R10: 0000000000000001 R11: 00000000ffffffff R12: 0000000006b4ba40
R13: ffff880000021dd8 R14: 0000000000000000 R15: 000000028f70ac00
FS:  0000000000000000(0000) GS:ffff880038780000(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00007fea1359c000 CR3: 00000004144ec000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process ldlm_cn00_013 (pid: 21784, threadinfo ffff880206b48000, task ffff880274004040)
Stack:
 ffff880206b4baf0 ffffffffa081c7ab ffff8802cd94cb88 0000000000000000
<d> ffff880206b4baf0 ffff8804181ddc00 0000000000000000 ffff880835d0d6d8
<d> ffff880206b4bbd0 ffffffffa087f97f ffff880206b4bb70 0000000000000286
Call Trace:
 [<ffffffffa081c7ab>] ? cfs_percpt_lock+0x5b/0x110 [libcfs]
 [<ffffffffa087f97f>] ? lnet_send+0x65f/0x11d0 [lnet]
 [<ffffffff81149461>] ? kzfree+0x31/0x40
 [<ffffffffa0882aef>] ? LNetPut+0x2ff/0x810 [lnet]
 [<ffffffffa0b39ff4>] ? ptl_send_buf+0x194/0x580 [ptlrpc]
 [<ffffffffa0b3a67d>] ? ptlrpc_send_reply+0x29d/0x840 [ptlrpc]
 [<ffffffffa0b4f43e>] ? ptlrpc_at_check_timed+0xd4e/0x1330 [ptlrpc]
 [<ffffffffa0b52c70>] ? ptlrpc_main+0xba0/0x18d0 [ptlrpc]
 [<ffffffffa0b520d0>] ? ptlrpc_main+0x0/0x18d0 [ptlrpc]
 [<ffffffff810a138e>] ? kthread+0x9e/0xc0
 [<ffffffff8100c28a>] ? child_rip+0xa/0x20
 [<ffffffff810a12f0>] ? kthread+0x0/0xc0
 [<ffffffff8100c280>] ? child_rip+0x0/0x20
Code: 01 74 05 e8 e2 18 d6 ff c9 c3 55 48 89 e5 0f 1f 44 00 00 b8 00 00 01 00 f0 0f c1 07 0f b7 d0 c1 e8 10 39 c2 74 0e f3 90 0f b7 17 <eb> f5 83 3f 00 75 f4 eb df c9 c3 0f 1f 40 00 55 48 89 e5 0f 1f 

It looks like LU-8334.

Comment by Cliff White (Inactive) [ 01/Mar/17 ]

We will be migrating the soak platform to RHEL 7.3 this month and will reproduce with that build. A newer kernel may change the picture.

Comment by Gerrit Updater [ 17/Mar/17 ]

Fan Yong (fan.yong@intel.com) uploaded a new patch: https://review.whamcloud.com/26059
Subject: LU-9040 scrub: handle group boundary properly (2)
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 263986ad76bedf44db3d50f81104d9fad0fad1c3

Comment by nasf (Inactive) [ 17/Mar/17 ]

Another important patch; please apply it and retry. Thanks!

Comment by Gerrit Updater [ 26/Mar/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26059/
Subject: LU-9040 scrub: handle group boundary properly (2)
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: 88c2664437b9110ee164aef176b1c126c23fdc1e

Comment by James Nunez (Inactive) [ 17/Apr/17 ]

Fan Yong,

There are some recent sanity-sec test 9 hangs that have a very similar stack trace to the one in this ticket, but have the patch for this ticket. Would you please review these failures and determine if they are indeed the same issue as stated in this ticket or if they are new failures?

If the below logs show that these failures are unrelated to this ticket, then I can open a new one.

Here are a few of the recent failures:
2017-04-09 - https://testing.hpdd.intel.com/test_sets/1fb1e996-1d1c-11e7-8920-5254006e85c2
2017-04-11 - https://testing.hpdd.intel.com/test_sets/da448a92-1ef3-11e7-9de9-5254006e85c2
2017-04-12 - https://testing.hpdd.intel.com/test_sets/d251faa0-1fab-11e7-b742-5254006e85c2
2017-04-17 - https://testing.hpdd.intel.com/test_sets/4ddd019c-2362-11e7-8920-5254006e85c2

Comment by Gerrit Updater [ 20/Apr/17 ]

Fan Yong (fan.yong@intel.com) uploaded a new patch: https://review.whamcloud.com/26751
Subject: LU-9040 scrub: detect dead loop scanning
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 3f492eaff2f51368f5d2323b1f17e3192014905b

Comment by Cliff White (Inactive) [ 21/Apr/17 ]

Testing this build on soak, we have run lfsck 3 times; in all three cases lfsck runs for 600 seconds (our timeout) and is then aborted by the soak framework. This is some progress, as previously lfsck would not abort.

Comment by Cliff White (Inactive) [ 24/Apr/17 ]

Changed to the latest lustre-master build. It seems we are back to the soft lockup from OI_scrub:

[ 5964.665025] NMI watchdog: BUG: soft lockup - CPU#6 stuck for 23s! [OI_scrub:12743]^M
[ 5964.674913] Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgs(OE) mgc(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) zfs(POE) zunicode(POE) zavl(POE) zcommon(POE) znvpair(POE) spl(OE) zlib_deflate lustre(OE) lmv(OE) mdc(OE) lov(OE) fid(OE) fld(OE) ko2iblnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) 8021q garp mrp stp llc rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm mlx4_ib ib_core intel_powerclamp coretemp intel_rapl iosf_mbi kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel dm_round_robin ipmi_devintf lrw gf128mul glue_helper ablk_helper cryptd sb_edac ipmi_ssif iTCO_wdt ntb ipmi_si iTCO_vendor_support mei_me sg mei ioatdma edac_core ipmi_msghandler lpc_ich wmi pcspkr shpchp i2c_i801 dm_multipath dm_mod nfsd nfs_acl lockd grace auth_rpcgss sunrpc ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic mlx4_en mgag200 drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm igb ahci crct10dif_pclmul isci crct10dif_common ptp libahci crc32c_intel libsas pps_core drm mlx4_core mpt2sas libata dca raid_class i2c_algo_bit scsi_transport_sas devlink i2c_core fjes^M
[ 5964.813809] CPU: 6 PID: 12743 Comm: OI_scrub Tainted: P           OE  ------------   3.10.0-514.10.2.el7_lustre.x86_64 #1^M
[ 5964.827453] Hardware name: Intel Corporation S2600GZ ........../S2600GZ, BIOS SE5C600.86B.01.08.0003.022620131521 02/26/2013^M
[ 5964.841399] task: ffff88083fdc0fb0 ti: ffff8807e5e48000 task.ti: ffff8807e5e48000^M
[ 5964.851143] RIP: 0010:[<ffffffffa131fb46>]  [<ffffffffa131fb46>] ldiskfs_itable_unused_count+0x16/0x30 [ldiskfs]^M
[ 5964.863926] RSP: 0018:ffff8807e5e4bd10  EFLAGS: 00000246^M
[ 5964.871236] RAX: 0000000000000000 RBX: ffff8807e5e4bd57 RCX: ffff88042bacc000^M
[ 5964.880575] RDX: 00000000000072ce RSI: ffff880405ca2a00 RDI: ffff88042bace000^M
[ 5964.889892] RBP: ffff8807e5e4bd10 R08: ffff8807e5e4bd57 R09: 0000000000000004^M
[ 5964.899191] R10: ffff88042e199a80 R11: ffffea000fbc98c0 R12: ffff88042e199a80^M
[ 5964.908502] R13: 0000000000000018 R14: ffff8807e5e4bc80 R15: 0000000000000202^M
[ 5964.917766] FS:  0000000000000000(0000) GS:ffff88042e180000(0000) knlGS:0000000000000000^M
[ 5964.928090] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033^M
[ 5964.935780] CR2: 00007faa11571a90 CR3: 00000000019ba000 CR4: 00000000000407e0^M
[ 5964.945021] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000^M
[ 5964.954229] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400^M
[ 5964.963412] Stack:^M
[ 5964.966841]  ffff8807e5e4bdf0 ffffffffa13b2dfd ffffffffa13b17d0 ffff88017976346c^M
[ 5964.976321]  ffffffffa13ad700 ffff880820000000 ffff88042bace000 ffff880179763468^M
[ 5964.985788]  010000000000000c 0000000000000000 0000000000000000 ffff880179762000^M
[ 5964.995235] Call Trace:^M
[ 5964.999098]  [<ffffffffa13b2dfd>] osd_inode_iteration+0x21d/0xd90 [osd_ldiskfs]^M
[ 5965.008393]  [<ffffffffa13b17d0>] ? osd_ios_ROOT_scan+0x300/0x300 [osd_ldiskfs]^M
[ 5965.017668]  [<ffffffffa13ad700>] ? osd_preload_next+0xc0/0xc0 [osd_ldiskfs]^M
[ 5965.026648]  [<ffffffffa13b42d0>] osd_scrub_main+0x960/0xf30 [osd_ldiskfs]^M
[ 5965.035419]  [<ffffffff810c5080>] ? wake_up_state+0x20/0x20^M
[ 5965.042742]  [<ffffffffa13b3970>] ? osd_inode_iteration+0xd90/0xd90 [osd_ldiskfs]^M
[ 5965.052185]  [<ffffffff810b06ff>] kthread+0xcf/0xe0^M
[ 5965.058694]  [<ffffffff810b0630>] ? kthread_create_on_node+0x140/0x140^M
[ 5965.067050]  [<ffffffff81696c98>] ret_from_fork+0x58/0x90^M
[ 5965.074115]  [<ffffffff810b0630>] ? kthread_create_on_node+0x140/0x140^M
[ 5965.082428] Code: 07 0f b7 46 30 c1 e0 10 09 d0 5d c3 66 0f 1f 84 00 00 00 00 00 66 66 66 66 90 55 48 8b 8f a8 03 00 00 31 c0 0f b7 56 1c 48 89 e5 <48> 83 39 3f 76 07 0f b7 46 32 c1 e0 10 09 d0 5d c3 66 0f 1f 84 ^M
[ 5992.664704] NMI watchdog: BUG: soft lockup - CPU#6 stuck for 23s! [OI_scrub:12743]^M

I am going to attempt to dump stacks and take a crash dump of the node (soak-8).

Comment by nasf (Inactive) [ 25/Apr/17 ]

Cliff, thanks for updating the ticket. It is noted that the CPU soft lockup has come back. I have made another patch https://review.whamcloud.com/26751 for that. Would you please verify it in the lola test? Thanks!
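
For context, "detect dead loop scanning" boils down to noticing that the scan position stops advancing and aborting instead of burning a CPU. A minimal sketch of the idea (an illustration under assumed names, not the actual patch 26751):

#include <stdio.h>

struct scan_pos { unsigned long group, offset; };

/* stub "next" callback that never advances, to trigger the detection */
static int stuck_next(struct scan_pos *pos) { (void)pos; return 0; }

static int advances(const struct scan_pos *prev, const struct scan_pos *cur)
{
	return cur->group > prev->group ||
	       (cur->group == prev->group && cur->offset > prev->offset);
}

static int iterate(struct scan_pos *pos, int (*next)(struct scan_pos *))
{
	struct scan_pos last = *pos;
	int rc;

	while ((rc = next(pos)) == 0) {
		if (!advances(&last, pos)) {
			fprintf(stderr, "scan stuck at %lu/%lu, aborting\n",
				pos->group, pos->offset);
			return -1;	/* instead of looping forever (soft lockup) */
		}
		last = *pos;
	}
	return rc;
}

int main(void)
{
	struct scan_pos pos = { 0, 0 };
	return iterate(&pos, stuck_next) ? 1 : 0;
}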

Comment by Gerrit Updater [ 02/May/17 ]

Jinshan Xiong (jinshan.xiong@intel.com) uploaded a new patch: https://review.whamcloud.com/26922
Subject: LU-9040 scrub: detect dead loop scanning
Project: fs/lustre-release
Branch: pfl
Current Patch Set: 1
Commit: 533dd2f253c81e8061204da5c785dd9a27063108

Comment by Cliff White (Inactive) [ 02/May/17 ]

Created a build with patches https://review.whamcloud.com/#/c/26751/ and https://review.whamcloud.com/#/c/26893/

  • Fails immediately on mount of the MDT.
    soak-8 is currently spinning very hard and is available for examination.
    60.530576] CPU: 13 PID: 12279 Comm: OI_scrub Tainted: P           OEL ------------   3.10.0-514.16.1.el7_lustre.x86_64 #1^M
    [ 6560.543976] Hardware name: Intel Corporation S2600GZ ........../S2600GZ, BIOS SE5C600.86B.01.08.0003.022620131521 02/26/2013^M
    [ 6560.557569] task: ffff8803fca11f60 ti: ffff8803fca24000 task.ti: ffff8803fca24000^M
    [ 6560.566978] RIP: 0010:[<ffffffffa13a5be9>]  [<ffffffffa13a5be9>] osd_scrub_next+0x9/0x3c0 [osd_ldiskfs]^M
    [ 6560.578543] RSP: 0018:ffff8803fca27d08  EFLAGS: 00000287^M
    [ 6560.585506] RAX: ffffffffa13a5be0 RBX: 0000000000000001 RCX: ffff8803fca27d98^M
    [ 6560.594492] RDX: ffff8803fca27d78 RSI: ffff88082a370000 RDI: ffff88082da75000^M
    [ 6560.603462] RBP: ffff8803fca27d08 R08: 0000000000000001 R09: 0000000000000004^M
    [ 6560.612418] R10: 000000002a2da701 R11: ffffea0020a8b680 R12: 000000002a2da701^M
    [ 6560.621356] R13: ffffea0020a8b680 R14: ffff8803fca27c70 R15: ffff8803fca27d08^M
    [ 6560.630283] FS:  0000000000000000(0000) GS:ffff88082d940000(0000) knlGS:0000000000000000^M
    [ 6560.640284] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033^M
    [ 6560.647665] CR2: 00007f6a3b9ab000 CR3: 00000000019be000 CR4: 00000000000407e0^M
    [ 6560.656614] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000^M
    [ 6560.665553] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400^M
    [ 6560.674488] Stack:^M
    [ 6560.677690]  ffff8803fca27df0 ffffffffa13ab1c9 ffffffffa13a9b50 ffffffffa13a5be0^M
    [ 6560.686960]  ffff88082a370000 00000000810d354f ffff88082da75000 ffff88082a371468^M
    [ 6560.696224]  0000000020000000 0100000000200163 0000000000000000 0000000000000000^M
    [ 6560.705485] Call Trace:^M
    [ 6560.709202]  [<ffffffffa13ab1c9>] osd_inode_iteration+0x2a9/0xcc0 [osd_ldiskfs]^M
    [ 6560.718370]  [<ffffffffa13a9b50>] ? osd_ios_ROOT_scan+0x300/0x300 [osd_ldiskfs]^M
    [ 6560.727525]  [<ffffffffa13a5be0>] ? osd_preload_next+0xb0/0xb0 [osd_ldiskfs]^M
    [ 6560.736388]  [<ffffffffa13ac540>] osd_scrub_main+0x960/0xf30 [osd_ldiskfs]^M
    [ 6560.745052]  [<ffffffff810c54c0>] ? wake_up_state+0x20/0x20^M
    [ 6560.752248]  [<ffffffffa13abbe0>] ? osd_inode_iteration+0xcc0/0xcc0 [osd_ldiskfs]^M
    [ 6560.761582]  [<ffffffff810b0a4f>] kthread+0xcf/0xe0^M
    [ 6560.768004]  [<ffffffff810b0980>] ? kthread_create_on_node+0x140/0x140^M
    [ 6560.776269]  [<ffffffff81697318>] ret_from_fork+0x58/0x90^M
    [ 6560.783271]  [<ffffffff810b0980>] ? kthread_create_on_node+0x140/0x140^M
    [ 6560.791526] Code: 48 8d 4c 02 10 48 01 c2 e8 25 fc ff ff 48 83 c4 08 5b 41 5c 41 5d 41 5e 5d c3 0f 1f 84 00 00 00 00 00 66 66 66 66 90 55 48 89 e5 <41> 57 49 89 f7 41 56 41 55 49 89 fd 41 54 49 89 d4 53 48 89 cb ^M
    [ 6588.383881] NMI watchdog: BUG: soft lockup - CPU#13 stuck for 22s! [OI_scrub:12279]^
    
Comment by nasf (Inactive) [ 03/May/17 ]

The patch https://review.whamcloud.com/#/c/26751/ has been updated with more fixes.
Cliff, would you please try again? Thanks!

Comment by Gerrit Updater [ 09/May/17 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch https://review.whamcloud.com/26751/
Subject: LU-9040 scrub: detect dead loop scanning
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: ad75291c70e67acb6fd607e373be045e9d16d3fc

Comment by Peter Jones [ 09/May/17 ]

Landed for 2.10
