[LU-8370] ASSERTION( lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size ) Created: 06/Jul/16  Updated: 14/Jun/18  Resolved: 11/Aug/16

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.8.0
Fix Version/s: Lustre 2.9.0

Type: Bug Priority: Critical
Reporter: Christopher Morrone Assignee: Lai Siyao
Resolution: Fixed Votes: 0
Labels: llnl

Attachments: File jet13.log    
Issue Links:
Related
is related to LU-7800 Panic during recovery of soak-test. Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

We just had several MDS nodes crash in our 2.8.0 DNE testbed with the following assertion:

2016-07-05 17:13:18 [421811.048147] LustreError: 86030:0:(update_trans.c:275:sub_updates_write()) ASSERTION( lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size ) failed:
2016-07-05 17:13:18 [421811.064551] LustreError: 86030:0:(update_trans.c:275:sub_updates_write()) LBUG

And here is a bit more of the console log showing some events leading up to the assertion:

2016-07-05 17:00:01 [421013.769026] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
2016-07-05 17:00:14 [421027.108636] Lustre: 13742:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1467763108/real 1467
2016-07-05 17:00:14 [421027.143423] Lustre: 13742:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
2016-07-05 17:00:14 [421027.155156] Lustre: lquake-MDT0004-osp-MDT0000: Connection to lquake-MDT0004 (at 172.19.1.115@o2ib100) was lost; in progress operations using thi
2016-07-05 17:01:01 [421073.879154] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
2016-07-05 17:01:41 [421113.695183] Lustre: lquake-MDT0000: haven't heard from client lquake-MDT0000-lwp-MDT0004_UUID (at 172.19.1.115@o2ib100) in 227 seconds. I think i
2016-07-05 17:01:41 [421113.721418] Lustre: Skipped 2 previous similar messages
2016-07-05 17:04:05 [421258.363863] Lustre: lquake-MDT000b-osp-MDT0000: Connection to lquake-MDT000b (at 172.19.1.122@o2ib100) was lost; in progress operations using thi
2016-07-05 17:04:13 [421266.343370] Lustre: 121411:0:(service.c:1336:ptlrpc_at_send_early_reply()) @@@ Couldn't add any time (5/5), not sending early reply
2016-07-05 17:04:13 [421266.343370]   req@ffff887a33193600 x1538602753601316/t0(0) o36->9131ef49-c8bd-b5a0-9096-bf612b95d242@192.168.128.136@o2ib18:423/0 lens 736/440 e
2016-07-05 17:04:13 [421266.379876] Lustre: 121411:0:(service.c:1336:ptlrpc_at_send_early_reply()) Skipped 6 previous similar messages
2016-07-05 17:05:03 [421316.429903] Lustre: lquake-MDT0000: Client 7f8c5dbb-5c65-9fd0-386e-4acb17a569c1 (at 192.168.128.173@o2ib18) reconnecting
2016-07-05 17:05:03 [421316.430031] Lustre: lquake-MDT0000: Connection restored to  (at 192.168.128.111@o2ib18)
2016-07-05 17:05:03 [421316.430034] Lustre: lquake-MDT0000: Connection restored to  (at 192.168.128.132@o2ib18)
2016-07-05 17:05:03 [421316.463306] Lustre: Skipped 6 previous similar messages
2016-07-05 17:05:21 [421334.445759] Lustre: 13738:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1467763414/real 1467
2016-07-05 17:05:21 [421334.481039] Lustre: 13738:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 13 previous similar messages
2016-07-05 17:05:29 [421341.717463] Lustre: lquake-MDT0000: haven't heard from client lquake-MDT000b-mdtlov_UUID (at 172.19.1.122@o2ib100) in 227 seconds. I think it's d
2016-07-05 17:05:29 [421341.743640] Lustre: Skipped 2 previous similar messages
2016-07-05 17:05:56 [421368.821761] LNetError: 13719:0:(o2iblnd_cb.c:3127:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 10 seconds
2016-07-05 17:05:56 [421368.834311] LNetError: 13719:0:(o2iblnd_cb.c:3190:kiblnd_check_conns()) Timed out RDMA with 172.19.1.115@o2ib100 (38): c: 0, oc: 0, rc: 8
2016-07-05 17:07:31 [421464.551678] Lustre: lquake-MDT0009-osp-MDT0000: Connection to lquake-MDT0009 (at 172.19.1.120@o2ib100) was lost; in progress operations using thi
2016-07-05 17:08:28 [421520.945572] LNetError: 13719:0:(o2iblnd_cb.c:3127:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 6 seconds
2016-07-05 17:08:28 [421520.957950] LNetError: 13719:0:(o2iblnd_cb.c:3190:kiblnd_check_conns()) Timed out RDMA with 172.19.1.120@o2ib100 (62): c: 0, oc: 0, rc: 8
2016-07-05 17:09:19 [421571.987133] LNetError: 13719:0:(o2iblnd_cb.c:3127:kiblnd_check_txs_locked()) Timed out tx: tx_queue, 7 seconds
2016-07-05 17:09:19 [421571.999514] LNetError: 13719:0:(o2iblnd_cb.c:3190:kiblnd_check_conns()) Timed out RDMA with 172.19.1.122@o2ib100 (38): c: 0, oc: 0, rc: 8
2016-07-05 17:09:23 [421576.250103] Lustre: MGS: haven't heard from client 859165fd-8f11-0394-d9d3-83cdf15c6ed5 (at 172.19.1.120@o2ib100) in 227 seconds. I think it's de
2016-07-05 17:09:23 [421576.276383] Lustre: Skipped 2 previous similar messages
2016-07-05 17:10:01 [421614.394252] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
2016-07-05 17:11:22 [421695.760858] Lustre: lquake-MDT000f-osp-MDT0000: Connection to lquake-MDT000f (at 172.19.1.126@o2ib100) was lost; in progress operations using thi
2016-07-05 17:12:58 [421790.944230] Lustre: MGS: haven't heard from client 4d1129cf-ad52-0cb8-c58a-0602f03cb346 (at 172.19.1.126@o2ib100) in 229 seconds. I think it's de
2016-07-05 17:12:58 [421790.970449] Lustre: Skipped 2 previous similar messages
2016-07-05 17:13:18 [421811.048147] LustreError: 86030:0:(update_trans.c:275:sub_updates_write()) ASSERTION( lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size ) failed:
2016-07-05 17:13:18 [421811.064551] LustreError: 86030:0:(update_trans.c:275:sub_updates_write()) LBUG
2016-07-05 17:13:18 [421811.073795] Pid: 86030, comm: mdt02_055

The above log was from NID 172.19.1.111@o2ib100.

Note that NIDs 172.19.1.111 through 172.19.1.126 are MDS nodes (172.19.1.111 hosts the MGS as well). NIDs 172.19.1.127 through 172.19.1.130 are OSS nodes. (All @o2ib100.)

It looks to me like the MDS nodes are all hitting this assertion one by one. So far 10 out of 16 have hit it and are down.

Here is another node's console messages (172.19.1.112@o2ib100):

2016-07-05 17:24:35 [366721.789837] LustreError: Skipped 589 previous similar messages
2016-07-05 17:25:48 [366795.199563] Lustre: 14874:0:(client.c:2063:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1467764693/real 0]  
req@ffff883e59bc2700 x1538678743470044/t0(0) o38->lquake-MDT000b-osp-MDT0001@172.19.1.122@o2ib100:24/4 lens 520/544 e 0 to 1 dl 1467764748 ref 2 fl Rpc:XN/0/ffffffff rc 0/-1
2016-07-05 17:25:48 [366795.234284] Lustre: 14874:0:(client.c:2063:ptlrpc_expire_one_request()) Skipped 60 previous similar messages
2016-07-05 17:26:33 [366840.473716] Lustre: lquake-MDT0001: Client f3232a59-daa1-33eb-94f5-ae42e9a1f202 (at 192.168.128.143@o2ib18) reconnecting
2016-07-05 17:26:34 [366840.487104] Lustre: lquake-MDT0001: Connection restored to f3232a59-daa1-33eb-94f5-ae42e9a1f202 (at 192.168.128.143@o2ib18)
2016-07-05 17:26:39 [366846.241092] Lustre: lquake-MDT0008-osp-MDT0001: Connection to lquake-MDT0008 (at 172.19.1.119@o2ib100) was lost; in progress operations using this service will wait for recovery to complete
2016-07-05 17:28:17 [366943.590377] Lustre: lquake-MDT0001: haven't heard from client lquake-MDT0008-mdtlov_UUID (at 172.19.1.119@o2ib100) in 227 seconds. I think it's dead, and I am evicting it. exp ffff887e999b2000, cur 1467764897 expire 1467764747 last 1467764670
2016-07-05 17:28:26 [366952.624959] LustreError: 14934:0:(update_trans.c:275:sub_updates_write()) ASSERTION( lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size ) failed: 
2016-07-05 17:28:26 [366952.641374] LustreError: 14934:0:(update_trans.c:275:sub_updates_write()) LBUG
2016-07-05 17:28:26 [366952.650631] Pid: 14934, comm: mdt01_001


 Comments   
Comment by Christopher Morrone [ 06/Jul/16 ]

This hit again immediately after recovery (at least I think recovery had finished; all the other nodes did). So our testbed might be hosed until this is fixed, or until we writeconf or reformat or something.

Comment by Bruno Faccini (Inactive) [ 06/Jul/16 ]

Hello Chris,
If I understand correctly that this problem is likely to reproduce during recovery after an MDS restart, maybe you could enable the full Lustre debug mask on all MDSs and then try to reproduce. You would then need to save the full debug log on all MDTs that stay up, and we could extract the log for the crashing MDS from its crash dump. By the way, do you already have crash dumps from the previous occurrences?

Comment by Christopher Morrone [ 06/Jul/16 ]

Unfortunately, crash dumps are not currently working on this cluster. It's definitely a pain point for us. I'm hoping we'll get those working in the not-too-distant future.

Comment by Christopher Morrone [ 08/Jul/16 ]

No crash dumps yet, but I changed the LASSERT to an LASSERTF and got us the numbers that fail:

2016-07-07 17:21:37 [ 7753.358956] LustreError: 145766:0:(update_trans.c:277:sub_updates_write()) ASSERTION( lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size ) failed: lrh_len 32776 loc_chunk_size 32768

Only 8 bytes over. So close!
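
For reference, the change was roughly the following (field types and format string approximate, but it produces the message shown above):

LASSERTF(lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size,
	 "lrh_len %u loc_chunk_size %d\n",
	 lur->lur_hdr.lrh_len, ctxt->loc_chunk_size);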

Comment by Lai Siyao [ 08/Jul/16 ]

Do you have a place where I can check out your test code (2.8.0), or the commit in the master branch? That would help verify whether this is already fixed in the latest master branch.

Comment by Christopher Morrone [ 08/Jul/16 ]

See your gerrit server, repository named "lustre-release-fe-llnl". Look for the highest numbered 2.8-llnl-preview* branch.

There is not a whole lot that is different code-wise. Most of the change is in packaging and scripts.

But don't keep me in suspense, which commit in the latest master branch do you suspect fixes this?

Comment by Christopher Morrone [ 08/Jul/16 ]

And now the backtrace, in case that is helpful:

2016-07-08 14:15:20 [  337.504715] Pid: 26705, comm: mdt00_002
2016-07-08 14:15:20 [  337.509679] 
2016-07-08 14:15:20 [  337.509679] Call Trace:
2016-07-08 14:15:20 [  337.515418]  [<ffffffffa08857e3>] libcfs_debug_dumpstack+0x53/0x80 [libcfs]
2016-07-08 14:15:20 [  337.523920]  [<ffffffffa108c4cf>] sub_updates_write+0xb99/0xf2e [ptlrpc]
2016-07-08 14:15:20 [  337.532065]  [<ffffffff81179632>] ? __free_memcg_kmem_pages+0x22/0x50
2016-07-08 14:15:20 [  337.539953]  [<ffffffffa107ab4f>] top_trans_stop+0x62f/0x970 [ptlrpc]
2016-07-08 14:15:20 [  337.547819]  [<ffffffffa1307399>] lod_trans_stop+0x259/0x340 [lod]
2016-07-08 14:15:20 [  337.555394]  [<ffffffffa0e1f320>] ? linkea_add_buf+0x80/0x170 [obdclass]
2016-07-08 14:15:20 [  337.563543]  [<ffffffffa13958fa>] mdd_trans_stop+0x1a/0x1c [mdd]
2016-07-08 14:15:20 [  337.570904]  [<ffffffffa1380b08>] mdd_link+0x2e8/0x930 [mdd]
2016-07-08 14:15:20 [  337.577908]  [<ffffffffa127555a>] ? mdt_lookup_version_check+0xca/0x2f0 [mdt]
2016-07-08 14:15:20 [  337.586560]  [<ffffffffa127c08e>] mdt_reint_link+0xade/0xc30 [mdt]
2016-07-08 14:15:20 [  337.594113]  [<ffffffff816500a5>] ? mutex_lock+0x25/0x42
2016-07-08 14:15:20 [  337.600723]  [<ffffffffa1057ae6>] ? tgt_lookup_reply_by_xid+0x46/0x60 [ptlrpc]
2016-07-08 14:15:20 [  337.609471]  [<ffffffffa127e470>] mdt_reint_rec+0x80/0x210 [mdt]
2016-07-08 14:15:20 [  337.616845]  [<ffffffffa1261971>] mdt_reint_internal+0x5e1/0x990 [mdt]
2016-07-08 14:15:20 [  337.624811]  [<ffffffffa126b0d7>] mdt_reint+0x67/0x140 [mdt]
2016-07-08 14:15:20 [  337.631807]  [<ffffffffa10665d5>] tgt_request_handle+0x915/0x1320 [ptlrpc]
2016-07-08 14:15:20 [  337.640164]  [<ffffffffa10130cb>] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
2016-07-08 14:15:20 [  337.649379]  [<ffffffffa0892758>] ? lc_watchdog_touch+0x68/0x180 [libcfs]
2016-07-08 14:15:20 [  337.657650]  [<ffffffffa1010c9b>] ? ptlrpc_wait_event+0xab/0x350 [ptlrpc]
2016-07-08 14:15:20 [  337.665878]  [<ffffffff810bd4c2>] ? default_wake_function+0x12/0x20
2016-07-08 14:15:20 [  337.673534]  [<ffffffff810b33f8>] ? __wake_up_common+0x58/0x90
2016-07-08 14:15:20 [  337.680721]  [<ffffffffa1017170>] ptlrpc_main+0xa90/0x1db0 [ptlrpc]
2016-07-08 14:15:20 [  337.688368]  [<ffffffff81015588>] ? __switch_to+0xf8/0x4d0
2016-07-08 14:15:20 [  337.695159]  [<ffffffffa10166e0>] ? ptlrpc_main+0x0/0x1db0 [ptlrpc]
2016-07-08 14:15:20 [  337.702804]  [<ffffffff810a997f>] kthread+0xcf/0xe0
2016-07-08 14:15:20 [  337.708875]  [<ffffffff810a98b0>] ? kthread+0x0/0xe0
2016-07-08 14:15:20 [  337.715068]  [<ffffffff8165d658>] ret_from_fork+0x58/0x90
2016-07-08 14:15:20 [  337.721710]  [<ffffffff810a98b0>] ? kthread+0x0/0xe0
2016-07-08 14:15:20 [  337.727861] 
2016-07-08 14:15:20 [  337.730107] LustreError: 26705:0:(update_trans.c:279:sub_updates_write()) ASSERTION( lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size ) failed: lrh_len 32776 loc_chunk_size 32768

Comment by Lai Siyao [ 12/Jul/16 ]

No, I'm reviewing the code.

Comment by Christopher Morrone [ 12/Jul/16 ]

The attached jet13.log file is from a node that is about to hit this assertion. I put a check for the same condition and a forced log dump in the code right before the assertion to get this.
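
The check and dump were along these lines (sketch only, not the exact code; libcfs_debug_dumplog() forces the Lustre debug buffer out to a file before the LBUG takes the thread down):

/* sketch of the debugging hack added just before the assertion */
if (unlikely(lur->lur_hdr.lrh_len > ctxt->loc_chunk_size))
	libcfs_debug_dumplog();
LASSERT(lur->lur_hdr.lrh_len <= ctxt->loc_chunk_size);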

Comment by Gerrit Updater [ 14/Jul/16 ]

wangdi (di.wang@intel.com) uploaded a new patch: http://review.whamcloud.com/21307
Subject: LU-8370 updates: size round for update record.
Project: fs/lustre-release
Branch: b2_8
Current Patch Set: 1
Commit: b7ebae2b3b799aec76be207f7370c2129010fa26

Comment by Di Wang [ 14/Jul/16 ]

Chris: it looks like the code should use cfs_size_round() when splitting the update records (by the ctxt chunk_size). I just pushed a patch (http://review.whamcloud.com/21307) and also added some debug messages there.

Could you please try it and let me know what the console messages are if it hits the LBUG again? Thanks.
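
To illustrate the failure mode (the numbers and helper below are hypothetical, not taken from the actual split path): cfs_size_round() rounds a length up to the next 8-byte boundary, so if the split threshold is checked against unrounded record sizes while the records are written rounded, the final lrh_len can land a few bytes past loc_chunk_size, as seen here (32776 vs 32768):

#include <stdio.h>

/* roughly what cfs_size_round() does: align up to 8 bytes */
static unsigned int size_round(unsigned int len)
{
	return (len + 7) & ~7U;
}

int main(void)
{
	unsigned int chunk = 32768;			/* loc_chunk_size */
	unsigned int recs[] = { 10921, 10921, 10921 };	/* hypothetical record sizes */
	unsigned int plain = 0, rounded = 0;
	int i;

	for (i = 0; i < 3; i++) {
		plain += recs[i];		/* 32763: looks like it fits in the chunk */
		rounded += size_round(recs[i]);	/* 32784: what actually gets consumed */
	}
	printf("unrounded %u rounded %u chunk %u\n", plain, rounded, chunk);
	return 0;
}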

Comment by Gerrit Updater [ 15/Jul/16 ]

Lai Siyao (lai.siyao@intel.com) uploaded a new patch: http://review.whamcloud.com/21334
Subject: LU-8370 dne: error in spliting update records
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: 707164929f73698e9ff25eccebc67425e55efe07

Comment by Lai Siyao [ 15/Jul/16 ]

Hi Di, I made a patch that uses the same function to estimate and set the update llog size. Do you think that's better?
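
In sketch form (the names here are illustrative, not the actual patch), the point is a single helper used both when estimating for the split and when filling in the record header, so the two values cannot drift apart:

#include <stdio.h>

/* illustrative shared helper: one source of truth for the on-disk record length */
static unsigned int update_rec_len(unsigned int payload)
{
	return (payload + 7) & ~7U;	/* rounded, same value for both callers */
}

int main(void)
{
	unsigned int payload = 4099;	/* hypothetical payload size */

	/* the split check and the header fill now always agree */
	unsigned int estimate = update_rec_len(payload);
	unsigned int lrh_len  = update_rec_len(payload);

	printf("estimate %u lrh_len %u\n", estimate, lrh_len);
	return 0;
}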

Comment by Di Wang [ 15/Jul/16 ]

Yes, it is better. Thanks Lai.

Comment by Christopher Morrone [ 15/Jul/16 ]

I was out sick yesterday, but I'm back today. I'll give the 21334 patch a try today. Thanks!

Comment by Christopher Morrone [ 15/Jul/16 ]

I was able to start the stuck filesystem and complete recovery with the 21334 patch applied. It looks good so far.

Comment by Christopher Morrone [ 04/Aug/16 ]

Yesterday I updated to Patch Set 5 of change 21334 and installed it on our testbed. Last night we hit this NULL pointer dereference down pretty much the same call path:

2016-08-04 02:47:57 [115783.326226] BUG: unable to handle kernel NULL pointer dereference at           (null)
2016-08-04 02:47:57 [115783.335993] IP: [<ffffffff81651232>] down_write+0x32/0x43
2016-08-04 02:47:57 [115783.342892] PGD 0 
2016-08-04 02:47:57 [115783.345995] Oops: 0002 [#1] SMP 
2016-08-04 02:47:57 [115783.350434] Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgs(OE) mgc(OE) osd_zfs(OE) lquota(OE) fid(OE) fld(OE
) ptlrpc(OE) obdclass(OE) rpcsec_gss_krb5 ko2iblnd(OE) lnet(OE) sha512_generic crypto_null libcfs(OE) nfsv3 ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad
 rdma_cm ib_cm iw_cm ib_sa ib_mad mlx5_ib iTCO_wdt iTCO_vendor_support intel_powerclamp coretemp intel_rapl kvm ib_core ib_addr pcspkr mlx5_core sb_ed
ac edac_core mei_me lpc_ich mei mfd_core ses enclosure ipmi_devintf zfs(POE) zunicode(POE) zavl(POE) zcommon(POE) sg znvpair(POE) spl(OE) zlib_deflate
 i2c_i801 ioatdma shpchp ipmi_si ipmi_msghandler acpi_power_meter acpi_cpufreq binfmt_misc nfsd nfs_acl ip_tables auth_rpcgss nfsv4 dns_resolver nfs l
ockd grace fscache dm_round_robin sd_mod crc_t10dif crct10dif_generic mgag200 syscopyarea sysfillrect crct10dif_pclmul sysimgblt crct10dif_common i2c_
algo_bit crc32_pclmul crc32c_intel drm_kms_helper mxm_wmi ghash_clmulni_intel ttm ahci aesni_intel lrw ixgbe gf128mul glue_helper libahci ablk_helper 
dca drm mpt3sas ptp libata i2c_core cryptd raid_class pps_core scsi_transport_sas mdio wmi sunrpc dm_mirror dm_region_hash dm_log scsi_transport_iscsi
 dm_multipath dm_mod
2016-08-04 02:47:57 [115783.476527] CPU: 14 PID: 162546 Comm: mdt03_008 Tainted: P        W  OE  ------------   3.10.0-327.22.2.2chaos.ch6.x86_64 #1
2016-08-04 02:47:57 [115783.489957] Hardware name: Intel Corporation S2600WTTR/S2600WTTR, BIOS SE5C610.86B.01.01.0016.033120161139 03/31/2016
2016-08-04 02:47:57 [115783.502713] task: ffff883f17ffb980 ti: ffff887ed50d4000 task.ti: ffff887ed50d4000
2016-08-04 02:47:57 [115783.511991] RIP: 0010:[<ffffffff81651232>]  [<ffffffff81651232>] down_write+0x32/0x43
2016-08-04 02:47:57 [115783.521679] RSP: 0018:ffff887ed50d78e0  EFLAGS: 00010246
2016-08-04 02:47:57 [115783.528512] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff887ed50d7fd8
2016-08-04 02:47:57 [115783.537390] RDX: ffffffff00000001 RSI: 000000000000002f RDI: ffffffff81886372
2016-08-04 02:47:57 [115783.546265] RBP: ffff887ed50d78e8 R08: ffff8870f59ada00 R09: ffff883f7ec07b00
2016-08-04 02:47:57 [115783.555130] R10: ffffffffa0f6f1ec R11: ffff88768502ba60 R12: ffff887f246b6a00
2016-08-04 02:47:57 [115783.563974] R13: 0000000000000000 R14: ffff88768502ba40 R15: ffff8870f59ada00
2016-08-04 02:47:57 [115783.572814] FS:  0000000000000000(0000) GS:ffff887f7eac0000(0000) knlGS:0000000000000000
2016-08-04 02:47:57 [115783.582719] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2016-08-04 02:47:57 [115783.589997] CR2: 0000000000000000 CR3: 0000000001962000 CR4: 00000000001407e0
2016-08-04 02:47:57 [115783.598840] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
2016-08-04 02:47:57 [115783.607672] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
2016-08-04 02:47:57 [115783.616493] Stack:
2016-08-04 02:47:57 [115783.619577]  0000000000000000 ffff887ed50d7938 ffffffffa0ca6f84 ffff88768502ba40
2016-08-04 02:47:57 [115783.628726]  ffff8878897d8000 ffff887ef6bcd2c0 ffff887f246b6a00 ffff887ef6bcd2c0
2016-08-04 02:47:57 [115783.637872]  ffff8878897d8000 ffff88768502ba40 ffff8870f59ada00 ffff887ed50d7978
2016-08-04 02:47:57 [115783.647011] Call Trace:
2016-08-04 02:47:57 [115783.650601]  [<ffffffffa0ca6f84>] llog_cat_add_rec+0x1d4/0x780 [obdclass]
2016-08-04 02:47:57 [115783.659046]  [<ffffffffa0c9fa3a>] llog_add+0x7a/0x1a0 [obdclass]
2016-08-04 02:47:57 [115783.666616]  [<ffffffffa0f6f1ec>] ? sub_updates_write+0x7f6/0xef8 [ptlrpc]
2016-08-04 02:47:57 [115783.675142]  [<ffffffffa0f6f5e3>] sub_updates_write+0xbed/0xef8 [ptlrpc]
2016-08-04 02:47:57 [115783.683642]  [<ffffffffa0f5dc0f>] top_trans_stop+0x62f/0x970 [ptlrpc]
2016-08-04 02:47:57 [115783.691833]  [<ffffffffa1213399>] lod_trans_stop+0x259/0x340 [lod]
2016-08-04 02:47:57 [115783.699562]  [<ffffffffa0d02380>] ? linkea_add_buf+0x80/0x170 [obdclass]
2016-08-04 02:47:57 [115783.707871]  [<ffffffffa12a18fa>] mdd_trans_stop+0x1a/0x1c [mdd]
2016-08-04 02:47:57 [115783.715384]  [<ffffffffa128cb08>] mdd_link+0x2e8/0x930 [mdd]
2016-08-04 02:47:57 [115783.722511]  [<ffffffffa0ee9522>] ? lustre_msg_get_versions+0x22/0xf0 [ptlrpc]
2016-08-04 02:47:57 [115783.731563]  [<ffffffffa115f08e>] mdt_reint_link+0xade/0xc30 [mdt]
2016-08-04 02:47:57 [115783.739437]  [<ffffffff81309f82>] ? strlcpy+0x42/0x60
2016-08-04 02:47:57 [115783.745864]  [<ffffffffa1161470>] mdt_reint_rec+0x80/0x210 [mdt]
2016-08-04 02:47:57 [115783.753352]  [<ffffffffa1144971>] mdt_reint_internal+0x5e1/0x990 [mdt]
2016-08-04 02:47:57 [115783.761426]  [<ffffffffa114e0d7>] mdt_reint+0x67/0x140 [mdt]
2016-08-04 02:47:57 [115783.768525]  [<ffffffffa0f49695>] tgt_request_handle+0x915/0x1320 [ptlrpc]
2016-08-04 02:47:57 [115783.776987]  [<ffffffffa0ef60cb>] ptlrpc_server_handle_request+0x21b/0xa90 [ptlrpc]
2016-08-04 02:47:57 [115783.786323]  [<ffffffffa0784758>] ? lc_watchdog_touch+0x68/0x180 [libcfs]
2016-08-04 02:47:57 [115783.794830]  [<ffffffffa0ef3c9b>] ? ptlrpc_wait_event+0xab/0x350 [ptlrpc]
2016-08-04 02:47:57 [115783.803172]  [<ffffffff810bd4c2>] ? default_wake_function+0x12/0x20
2016-08-04 02:47:57 [115783.810903]  [<ffffffff810b33f8>] ? __wake_up_common+0x58/0x90
2016-08-04 02:47:57 [115783.818150]  [<ffffffffa0efa170>] ptlrpc_main+0xa90/0x1db0 [ptlrpc]
2016-08-04 02:47:57 [115783.825862]  [<ffffffff81015588>] ? __switch_to+0xf8/0x4d0
2016-08-04 02:47:57 [115783.832686]  [<ffffffffa0ef96e0>] ? ptlrpc_register_service+0xe40/0xe40 [ptlrpc]
2016-08-04 02:47:57 [115783.841633]  [<ffffffff810a997f>] kthread+0xcf/0xe0
2016-08-04 02:47:57 [115783.847742]  [<ffffffff810a98b0>] ? kthread_create_on_node+0x140/0x140
2016-08-04 02:47:57 [115783.855707]  [<ffffffff8165d658>] ret_from_fork+0x58/0x90
2016-08-04 02:47:57 [115783.862363]  [<ffffffff810a98b0>] ? kthread_create_on_node+0x140/0x140
2016-08-04 02:47:57 [115783.870270] Code: d2 be 2f 00 00 00 48 89 e5 53 48 89 fb 48 c7 c7 72 63 88 81 e8 00 49 a6 ff e8 8b 10 00 00 48 ba 01 00 00 00 ff ff ff ff 48 89 d8 <f0> 48 0f c1 10 85 d2 74 05 e8 f0 e9 cb ff 5b 5d c3 55 48 89 e5 
2016-08-04 02:47:57 [115783.893325] RIP  [<ffffffff81651232>] down_write+0x32/0x43
2016-08-04 02:47:57 [115783.900088]  RSP <ffff887ed50d78e0>
2016-08-04 02:47:57 [115783.904604] CR2: 0000000000000000

Comment by Di Wang [ 04/Aug/16 ]

This looks like LU-7800. Could you please try this patch: http://review.whamcloud.com/#/c/18542/

Comment by Christopher Morrone [ 09/Aug/16 ]

LU-7800 caused more problems than it was intended to solve. I opened LU-8489 to track the NULL pointer dereference in llog_cat_add_rec(), since Intel believes that it is a separate problem.

Comment by Gerrit Updater [ 11/Aug/16 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/21334/
Subject: LU-8370 dne: error in spliting update records
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: aa5d6bc2aa9abc745e6f590048e270c59265f699

Comment by Peter Jones [ 11/Aug/16 ]

Landed for 2.9. Residual issues are being tracked under separate tickets.
