[LU-6765] mds-survey triggers crash via BUG:sleeping function called from invalid context Created: 24/Jun/15  Updated: 22/Dec/15  Resolved: 11/Aug/15

Status: Resolved
Project: Lustre
Component/s: None
Affects Version/s: Lustre 2.7.0
Fix Version/s: Lustre 2.8.0

Type: Bug Priority: Minor
Reporter: Olaf Faaland Assignee: Niu Yawei (Inactive)
Resolution: Fixed Votes: 0
Labels: llnl, patch
Environment:

Lustre 2.7.54
SPL/ZFS 0.6.4.1-1
TOSS kernel 2.6.32-504.8.1.2chaos.ch5.3.x86_64


Issue Links:
Related
is related to LU-6860 mds-survey test_2: MDS crashed Resolved
Severity: 3
Rank (Obsolete): 9223372036854775807

 Description   

Running mds-survey on a newly created file system triggers a crash and reboot.

The MDS and OSS nodes are up and Lustre is running. Whether the filesystem is mounted on any clients has no effect on the problem; it occurs either way. The backend is ZFS. mds-survey was run with all defaults; no environment variables were set to control it.

The shell shows almost nothing leading up to the crash:

[root@zwicky-lcrash-mds1:2015-06-24.3]# mds-survey
Wed Jun 24 12:35:05 PDT 2015 /usr/bin/mds-survey from zwicky-lcrash-mds1
mdt 1 file  100000 dir    4 thr    4 create

Console output is:

Lustre: Echo OBD driver; http://www.lustre.org/                                                                          
LustreError: 68263:0:(echo_client.c:1676:echo_md_lookup()) lookup MDT0000-tests: rc = -2                                 
LustreError: 68263:0:(echo_client.c:1875:echo_md_destroy_internal()) Can't find child MDT0000-tests: rc = -2             
Lustre: ctl-lcrash-MDT0000: super-sequence allocation rc = 0 [0x0000000200000400-0x0000000240000400):0:mdt               
BUG: sleeping function called from invalid context at arch/x86/mm/fault.c:1106                                           
in_atomic(): 0, irqs_disabled(): 1, pid: 68300, name: lctl                                                               
Pid: 68300, comm: lctl Tainted: P           ---------------    2.6.32-504.16.2.1chaos.ch5.3.x86_64 #1                    
Call Trace:                                                                                                              
 [<ffffffff8105e6aa>] ? __might_sleep+0xda/0x100                                                                         
 [<ffffffff8104e05b>] ? __do_page_fault+0x10b/0x510                                                                      
 [<ffffffffa07c0683>] ? libcfs_debug_vmsg2+0x5e3/0xbe0 [libcfs]                                                          
 [<ffffffff8153421e>] ? do_page_fault+0x3e/0xa0                                                                          
 [<ffffffff815315d5>] ? page_fault+0x25/0x30
 [<ffffffff8105d0e2>] ? task_rq_lock+0x42/0xa0
 [<ffffffff81065a3c>] ? try_to_wake_up+0x3c/0x3e0
 [<ffffffffa12dd263>] ? echo_object_free+0x2b3/0x460 [obdecho]
 [<ffffffff81065e35>] ? wake_up_process+0x15/0x20
 [<ffffffff8152efb2>] ? __mutex_unlock_slowpath+0x42/0x60
 [<ffffffff8152ef2b>] ? mutex_unlock+0x1b/0x20
 [<ffffffffa0968051>] ? lu_site_purge+0x411/0x500 [obdclass]
 [<ffffffffa0968581>] ? lu_object_limit+0x71/0x80 [obdclass]
 [<ffffffffa09686c0>] ? lu_object_find_try+0x130/0x260 [obdclass]
 [<ffffffffa09688a1>] ? lu_object_find_at+0xb1/0xe0 [obdclass]
 [<ffffffffa07bd2b8>] ? libcfs_log_return+0x28/0x40 [libcfs]
 [<ffffffffa12292f1>] ? mdd_lookup+0x111/0x180 [mdd]
 [<ffffffffa12dea33>] ? echo_md_create_internal+0x153/0x640 [obdecho]
 [<ffffffffa12e8bb2>] ? echo_md_handler+0x1302/0x1860 [obdecho]
 [<ffffffffa12ea98c>] ? echo_client_iocontrol+0x187c/0x29e0 [obdecho]
 [<ffffffff8113ca91>] ? lru_cache_add_lru+0x21/0x40
 [<ffffffff8115b2fd>] ? page_add_new_anon_rmap+0x9d/0xf0
 [<ffffffff81176e8c>] ? __kmalloc+0x22c/0x240
 [<ffffffffa093131c>] ? class_handle_ioctl+0x165c/0x21e0 [obdclass]
 [<ffffffffa09182ab>] ? obd_class_ioctl+0x4b/0x190 [obdclass]
 [<ffffffff811a5882>] ? vfs_ioctl+0x22/0xa0
 [<ffffffff811a5ea4>] ? do_vfs_ioctl+0x84/0x5e0
 [<ffffffff811a6481>] ? sys_ioctl+0x81/0xa0
 [<ffffffff8100b0b2>] ? system_call_fastpath+0x16/0x1b


 Comments   
Comment by Olaf Faaland [ 24/Jun/15 ]

Crash dump is available if there is information you want from it.

Comment by Olaf Faaland [ 24/Jun/15 ]

Perhaps this is a dupe of https://jira.hpdd.intel.com/browse/LU-5747

Comment by Olaf Faaland [ 24/Jun/15 ]

There was a second BUG entry in the console output:

2015-06-24 12:35:07 BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
2015-06-24 12:35:07 IP: [<ffffffff8105d0e2>] task_rq_lock+0x42/0xa0
2015-06-24 12:35:07 PGD fd61ea067 PUD fd61eb067 PMD 0
2015-06-24 12:35:07 Oops: 0000 [#1] SMP
2015-06-24 12:35:07 last sysfs file: /sys/devices/pci0000:00/0000:00:03.0/0000:06:00.0/host0/port-0:0/expander-0:0/port-0:0:2/end_device-0:0:2/target0:0:2/0:0:2:0/state
2015-06-24 12:35:07 CPU 0
2015-06-24 12:35:07 Modules linked in: obdecho(U) osp(U) mdd(U) lod(U) mdt(U) lfsck(U) mgs(U) mgc(U) osd_zfs(U) lquota(U) lustre(U) lov(U) mdc(U) fid(U) lmv(U) fld(U) ptlrpc(U) obdclass(U) acpi_cpufreq freq_table mperf ko2iblnd(U) lnet(U) sha512_generic crc32c_intel libcfs(U) autofs4 ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm mlx4_ib ib_sa ib_mad ib_core ib_addr dm_mirror dm_region_hash dm_log dm_round_robin dm_multipath dm_mod vhost_net macvtap macvlan tun kvm zfs(P)(U) zcommon(P)(U) znvpair(P)(U) spl(U) zlib_deflate zavl(P)(U) zunicode(P)(U) sg iTCO_wdt iTCO_vendor_support ses enclosure sd_mod crc_t10dif ipmi_devintf ipmi_si ipmi_msghandler sb_edac edac_core wmi lpc_ich mfd_core ahci i2c_i801 isci libsas ioatdma mpt2sas scsi_transport_sas raid_class ipv6 nfs lockd fscache auth_rpcgss nfs_acl sunrpc mlx4_en mlx4_core igb dca i2c_algo_bit i2c_core ptp pps_core [last unloaded: cpufreq_ondemand]
2015-06-24 12:35:07
2015-06-24 12:35:07 Pid: 68300, comm: lctl Tainted: P           ---------------    2.6.32-504.16.2.1chaos.ch5.3.x86_64 #1 Intel Corporation S2600GZ/S2600GZ
2015-06-24 12:35:07 RIP: 0010:[<ffffffff8105d0e2>]  [<ffffffff8105d0e2>] task_rq_lock+0x42/0xa0
2015-06-24 12:35:07 RSP: 0018:ffff880fd61f37c8  EFLAGS: 00010082
2015-06-24 12:35:07 RAX: 0000000000000282 RBX: 00000000000158c0 RCX: ffff880fe291ac78
2015-06-24 12:35:07 RDX: 0000000000000282 RSI: ffff880fd61f3820 RDI: 0000000000000000
2015-06-24 12:35:07 RBP: ffff880fd61f37e8 R08: 0000000000000c0e R09: 0000000000000000
2015-06-24 12:35:07 R10: 0000000000000001 R11: 000000000000000f R12: 0000000000000000
2015-06-24 12:35:07 R13: ffff880fd61f3820 R14: 0000000000000000 R15: 000000000000000f
2015-06-24 12:35:07 FS:  00002aaaabaebb20(0000) GS:ffff880060600000(0000) knlGS:0000000000000000
2015-06-24 12:35:07 CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
2015-06-24 12:35:07 CR2: 0000000000000008 CR3: 0000000fd61e9000 CR4: 00000000000407f0
2015-06-24 12:35:07 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
2015-06-24 12:35:07 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
2015-06-24 12:35:07 Process lctl (pid: 68300, threadinfo ffff880fd61f2000, task ffff8810260f1520)
2015-06-24 12:35:07 Stack:
2015-06-24 12:35:07  0000000000000000 ffff880fe0aa8ea0 0000000000000000 0000000000000000
2015-06-24 12:35:07 <d> ffff880fd61f3858 ffffffff81065a3c ffff880fd61f3818 ffffffffa12dd263
2015-06-24 12:35:07 <d> ffff880fd24e5a70 ffff880fd4ebbc78 ffff880fd61f3898 0000000000000282
2015-06-24 12:35:07 Call Trace:
2015-06-24 12:35:07  [<ffffffff81065a3c>] try_to_wake_up+0x3c/0x3e0
2015-06-24 12:35:07  [<ffffffffa12dd263>] ? echo_object_free+0x2b3/0x460 [obdecho]
2015-06-24 12:35:07  [<ffffffff81065e35>] wake_up_process+0x15/0x20
2015-06-24 12:35:07  [<ffffffff8152efb2>] __mutex_unlock_slowpath+0x42/0x60
2015-06-24 12:35:07  [<ffffffff8152ef2b>] mutex_unlock+0x1b/0x20
2015-06-24 12:35:07  [<ffffffffa0968051>] lu_site_purge+0x411/0x500 [obdclass]
2015-06-24 12:35:07  [<ffffffffa0968581>] lu_object_limit+0x71/0x80 [obdclass]
2015-06-24 12:35:07  [<ffffffffa09686c0>] lu_object_find_try+0x130/0x260 [obdclass]
2015-06-24 12:35:07  [<ffffffffa09688a1>] lu_object_find_at+0xb1/0xe0 [obdclass]
2015-06-24 12:35:07  [<ffffffffa07bd2b8>] ? libcfs_log_return+0x28/0x40 [libcfs]
2015-06-24 12:35:07  [<ffffffffa12292f1>] ? mdd_lookup+0x111/0x180 [mdd]
2015-06-24 12:35:07  [<ffffffffa12dea33>] echo_md_create_internal+0x153/0x640 [obdecho]
2015-06-24 12:35:07  [<ffffffffa12e8bb2>] echo_md_handler+0x1302/0x1860 [obdecho]
2015-06-24 12:35:07  [<ffffffffa12ea98c>] echo_client_iocontrol+0x187c/0x29e0 [obdecho]
2015-06-24 12:35:07  [<ffffffff8113ca91>] ? lru_cache_add_lru+0x21/0x40
2015-06-24 12:35:07  [<ffffffff8115b2fd>] ? page_add_new_anon_rmap+0x9d/0xf0
2015-06-24 12:35:07  [<ffffffff81176e8c>] ? __kmalloc+0x22c/0x240
2015-06-24 12:35:07  [<ffffffffa093131c>] class_handle_ioctl+0x165c/0x21e0 [obdclass]
2015-06-24 12:35:07  [<ffffffffa09182ab>] obd_class_ioctl+0x4b/0x190 [obdclass]
2015-06-24 12:35:07  [<ffffffff811a5882>] vfs_ioctl+0x22/0xa0
2015-06-24 12:35:07  [<ffffffff811a5ea4>] do_vfs_ioctl+0x84/0x5e0
2015-06-24 12:35:07  [<ffffffff811a6481>] sys_ioctl+0x81/0xa0
2015-06-24 12:35:07  [<ffffffff8100b0b2>] system_call_fastpath+0x16/0x1b
2015-06-24 12:35:07 Code: 89 74 24 18 0f 1f 44 00 00 48 c7 c3 c0 58 01 00 49 89 fc 49 89 f5 9c 58 0f 1f 44 00 00 48 89 c2 fa 66 0f 1f 44 00 00 49 89 55 00 <49> 8b 44 24 08 49 89 de 8b 40 18 4c 03 34 c5 60 0c c0 81 4c 89
2015-06-24 12:35:07 RIP  [<ffffffff8105d0e2>] task_rq_lock+0x42/0xa0
2015-06-24 12:35:07  RSP <ffff880fd61f37c8>
2015-06-24 12:35:07 CR2: 0000000000000008
Comment by Peter Jones [ 26/Jun/15 ]

Lai

Could you please advise on this ticket?

Thanks

Peter

Comment by Lai Siyao [ 03/Jul/15 ]

This looks like memory corruption; I'll try to reproduce it and understand more.

Comment by Olaf Faaland [ 15/Jul/15 ]

Lai,

Any update?

thanks,
Olaf

Comment by Olaf Faaland [ 16/Jul/15 ]

I find mds-survey runs successfully at earlier commits, e.g.

6d8c562 LU-3181 mdt: mdt_cross_open ...
0041b39 LU-4735 lbuild: Build Xeon Phi ...
e15e92d LU-2675 lmv: remove liblustre ...

I'm bisecting now, hope to finish today.

Comment by Olaf Faaland [ 16/Jul/15 ]

Bisecting indicates the issue was introduced by the commit below. I'll run a few more times with the prior commit to double-check, and post here when I've confirmed:

[bc34babc1765f6f99220256e96ce5dc5bb390676] LU-5331 obdclass: serialize lu_site purge

Comment by Olaf Faaland [ 17/Jul/15 ]

I see at least one flaw in lu_site_purge().

CFS_INIT_LIST_HEAD(&dispose);

occurs before ls_purge_mutex is taken, outside the critical section. So one thread could call CFS_INIT_LIST_HEAD while another thread is adding entries to dispose via

cfs_list_move(&h->loh_lru, &dispose);

I'm not sure there aren't other issues, but I'll submit a patch for that much.

Comment by Olaf Faaland [ 17/Jul/15 ]

Nope, I was wrong. dispose is local. Looking further.
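
For reference, a simplified kernel-style sketch of the purge pattern (hypothetical types, not the actual lu_site_purge() code) shows why initializing dispose outside the mutex is harmless: the dispose list head lives on each caller's stack, so no other thread can be adding entries to it.

#include <linux/list.h>
#include <linux/mutex.h>

/* Hypothetical, simplified types for illustration only. */
struct toy_site {
        struct mutex     ts_purge_mutex;  /* serializes purges, cf. LU-5331 */
        struct list_head ts_lru;          /* objects eligible for purge */
};

struct toy_obj {
        struct list_head to_lru;          /* linkage on ts_lru or dispose */
};

static void toy_site_purge(struct toy_site *s)
{
        LIST_HEAD(dispose);               /* stack-local: private to this caller */
        struct toy_obj *obj, *tmp;

        mutex_lock(&s->ts_purge_mutex);
        list_for_each_entry_safe(obj, tmp, &s->ts_lru, to_lru)
                list_move(&obj->to_lru, &dispose);  /* collect under the lock */
        mutex_unlock(&s->ts_purge_mutex);

        /* free everything on the private dispose list here */
}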

Comment by Olaf Faaland [ 17/Jul/15 ]

I verified that I reliably encounter the crash when I run mds-survey on lustre built from

bc34bab LU-5331 obdclass: serialize lu_site purge

and reliably run mds-survey successfully with lustre built from the prior commit,

6f104f LU-5061 obd: add rnb_ prefix to struct niobuf_remote members

Comment by Peter Jones [ 17/Jul/15 ]

Nice detective work, Olaf! Lai, any suggestions as to how to fix this issue?

Comment by Peter Jones [ 17/Jul/15 ]

Niu, since Lai is on vacation, could you please advise?

Comment by Olaf Faaland [ 19/Jul/15 ]

I also see output from the list_debug code in the kernel, in the console log:

------------[ cut here ]------------
WARNING: at lib/list_debug.c:30 __list_add+0x8f/0xa0() (Tainted: P           ---------------   )
Hardware name: KVM
list_add corruption. prev->next should be next (ffff880032f2d2a8), but was ffff88000f2f8a70. (prev=ffff88000f2f8a70).
Modules linked in: lustre(U) ofd(U) osp(U) lod(U) ost(U) mdt(U) mdd(U) mgs(U) nodemap(U) osd_zfs(U) lquota(U) lfsck(U) jbd obdecho(U) mgc(U) lov(U) osc(U) mdc(U) lmv(U) fid(U) fld(U) ptlrpc(U) obdclass(U) ksocklnd(U) lnet(U) sha512_generic sha256_generic libcfs(U) ebtable_nat ebtables xt_CHECKSUM iptable_mangle ipt_addrtype xt_conntrack ipt_MASQUERADE iptable_nat nf_nat bridge stp llc dm_thin_pool dm_bio_prison dm_persistent_data dm_bufio libcrc32c autofs4 ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 zfs(P)(U) zcommon(P)(U) znvpair(P)(U) zavl(P)(U) zunicode(P)(U) spl(U) zlib_deflate vhost_net macvtap macvlan tun virtio_balloon virtio_net i2c_piix4 i2c_core sg ext4 jbd2 mbcache virtio_blk sr_mod cdrom virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib]
Pid: 18354, comm: lctl Tainted: P           ---------------    2.6.32-431.20.3.1chaos.ch5.2.x86_64 #1
Call Trace:
 [<ffffffff81071d87>] ? warn_slowpath_common+0x87/0xc0
 [<ffffffff81071e76>] ? warn_slowpath_fmt+0x46/0x50
 [<ffffffff8129729f>] ? __list_add+0x8f/0xa0
 [<ffffffff8152ccbf>] ? __mutex_lock_slowpath+0xcf/0x180
 [<ffffffff8152af79>] ? printk+0x41/0x48
 [<ffffffff8152cbce>] ? mutex_lock+0x3e/0x60
 [<ffffffffa060895c>] ? lu_site_purge+0xac/0x550 [obdclass]
 [<ffffffffa0609241>] ? lu_object_limit+0x71/0x80 [obdclass]
 [<ffffffffa0609414>] ? lu_object_find_at+0x1c4/0x360 [obdclass]
 [<ffffffffa0dc2b05>] ? lod_index_lookup+0x25/0x30 [lod]
 [<ffffffffa0c037a1>] ? osd_attr_get+0x121/0x1e0 [osd_zfs]
 [<ffffffffa0adfea3>] ? echo_md_create_internal+0x153/0x640 [obdecho]
 [<ffffffffa0ae89f5>] ? echo_md_handler+0x1225/0x1900 [obdecho]
 [<ffffffffa0aed164>] ? echo_client_iocontrol+0x24a4/0x30e0 [obdecho]
 [<ffffffff8128f146>] ? vsnprintf+0x336/0x5e0
 [<ffffffffa04bc27b>] ? cfs_set_ptldebug_header+0x2b/0xc0 [libcfs]
 [<ffffffff811702ec>] ? __kmalloc+0x22c/0x240
 [<ffffffffa04ccfe1>] ? libcfs_debug_msg+0x41/0x50 [libcfs]
 [<ffffffffa05cd47c>] ? class_handle_ioctl+0x125c/0x1e10 [obdclass]
 [<ffffffffa05b42ab>] ? obd_class_ioctl+0x4b/0x190 [obdclass]
 [<ffffffff8119f1f2>] ? vfs_ioctl+0x22/0xa0
 [<ffffffff8103f9d8>] ? pvclock_clocksource_read+0x58/0xd0
 [<ffffffff8119f814>] ? do_vfs_ioctl+0x84/0x5e0
 [<ffffffff8103ea6c>] ? kvm_clock_read+0x1c/0x20
 [<ffffffff8103ea79>] ? kvm_clock_get_cycles+0x9/0x10
 [<ffffffff810a66f7>] ? getnstimeofday+0x57/0xe0
 [<ffffffff8119fdf1>] ? sys_ioctl+0x81/0xa0
 [<ffffffff810e20de>] ? __audit_syscall_exit+0x25e/0x290
 [<ffffffff8100b0b2>] ? system_call_fastpath+0x16/0x1b
---[ end trace 246d1f5db30ecb0d ]---
Comment by Niu Yawei (Inactive) [ 20/Jul/15 ]

I found something super suspicious in the echo_client, in echo_device_alloc():

                /* For MD echo client, it will use the site in MDS stack */
                ed->ed_site_myself.cs_lu = *ls;
                ed->ed_site = &ed->ed_site_myself;
                ed->ed_cl.cd_lu_dev.ld_site = &ed->ed_site_myself.cs_lu;

We copy the MDS lu_site into ed_site_myself, so its ls_purge_mutex is copied as well... I'm not sure of the purpose of this piece of code; apparently we should just set ed_site to point to the MDS lu_site.
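
Presumably what makes the copy dangerous is that struct mutex embeds a wait_list whose head points at itself once initialized, so a plain struct assignment leaves the copy's wait_list pointing back into the original lu_site; the first contended lock or unlock then operates across the two objects, which would match the list_add corruption and the bad task pointer in wake_up seen above. A minimal userspace sketch of that pointer aliasing (simplified types, not the kernel's):

/* Simplified userspace illustration (not kernel code): copying a structure
 * that embeds an initialized, self-referential list head leaves the copy
 * pointing at the original object. */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

struct fake_mutex {
        int count;
        struct list_head wait_list;       /* points at itself after init */
};

struct fake_site {
        struct fake_mutex purge_mutex;
};

static void init_list(struct list_head *h) { h->next = h->prev = h; }

int main(void)
{
        struct fake_site mds, echo;

        init_list(&mds.purge_mutex.wait_list);
        echo = mds;                       /* struct copy, like cs_lu = *ls */

        /* echo's wait_list still references mds's wait_list, so a waiter
         * queued on one object ends up threaded through the other. */
        printf("echo aliases mds: %s\n",
               echo.purge_mutex.wait_list.next == &mds.purge_mutex.wait_list ?
               "yes" : "no");
        return 0;
}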

The code is from

commit 9f55850b884cac1c7bbde6d3b02764b712a2921f
Author: wangdi <di.wang@whamcloud.com>
Date:   Wed Nov 16 14:55:23 2011 -0800

    LU-593 obdclass: echo client for MDS stack

    1. Add interfaces and tools for exercising a local MDT
       device for performance reasons, in a similar manner
       to obdfilter-survey.
    2. add test_create, test_mkdir, test_lookup, test_destroy,
       test_rmdir, test_setxattr, test_md_getattr in lctl for
       md echo client test.

    Signed-off-by: Wang di <di.wang@whamcloud.com>
    Change-Id: Ibf774a567820ff36b3624e44371c63a9428d82a5
    Reviewed-on: http://review.whamcloud.com/1287
    Tested-by: Hudson
    Reviewed-by: Fan Yong <yong.fan@whamcloud.com>
    Tested-by: Maloo <whamcloud.maloo@gmail.com>
    Reviewed-by: Oleg Drokin <green@whamcloud.com>

Di, could you take a look? Is it OK to just set the pointer (ed_site) and not copy the lu_site here?

Comment by Olaf Faaland [ 20/Jul/15 ]

Niu,

I think you're right that the code you found in echo_device_alloc() is incorrect.

The kernel's Documentation/mutex-design.txt says:

   * - a mutex object must not be initialized via memset or copying

I haven't yet figured out what the mutex depends on that makes copying it bad, but I did look at echo_client.c, and lu_site_init() is not called, nor is ls_purge_mutex initialized directly via mutex_init().

I'll explicitly initialize the mutex as a test and see what happens in a few minutes.

Comment by Olaf Faaland [ 20/Jul/15 ]

Niu,

I made two successful passes through a 100,000-file cycle of mds-survey with the patch below, which initializes ed_site_myself.cs_lu.ls_purge_mutex. Without the patch, my VM crashes before completing even one cycle.

diff --git a/lustre/obdecho/echo_client.c b/lustre/obdecho/echo_client.c
index 8b1a526..7d18f0f 100644
--- a/lustre/obdecho/echo_client.c
+++ b/lustre/obdecho/echo_client.c
@@ -857,6 +857,7 @@ static struct lu_device *echo_device_alloc(const struct lu_env *env,
                 next = ld;
                 /* For MD echo client, it will use the site in MDS stack */
                 ed->ed_site_myself.cs_lu = *ls;
+                mutex_init(&ed->ed_site_myself.cs_lu.ls_purge_mutex);
                 ed->ed_site = &ed->ed_site_myself;
                 ed->ed_cl.cd_lu_dev.ld_site = &ed->ed_site_myself.cs_lu;
                rc = echo_fid_init(ed, obd->obd_name, lu_site2seq(ls));

This isn't necessarily the proper fix, but I think it supports your suspicion.

Comment by Gerrit Updater [ 21/Jul/15 ]

Olaf Faaland-LLNL (faaland1@llnl.gov) uploaded a new patch: http://review.whamcloud.com/15657
Subject: LU-6765 obdecho: initialize cs_lu.ls_purge_mutex
Project: fs/lustre-release
Branch: master
Current Patch Set: 1
Commit: d2b6aaa2b0d712c495f45d54f927ea228ba019f2

Comment by Olaf Faaland [ 21/Jul/15 ]

Niu,

The patch I uploaded above is not intended as the actual fix, it's there so I can refer to it for a project I'm working on. You can disregard it.

thanks,
Olaf

Comment by Niu Yawei (Inactive) [ 21/Jul/15 ]

This isn't necessarily the proper fix, but I think it supports your suspicion.

Right, the fix apparently is to just set ed_site to 'ls'; I'll ask Di to confirm it.
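
Roughly the shape that would take (a hypothetical, simplified sketch with made-up field names, not the final patch) is to borrow the MDS site by pointer so its embedded ls_purge_mutex is never duplicated:

/* Hypothetical sketch only: keep a pointer to the MDS lu_site instead of
 * copying the structure, so there is a single mutex instance. */
struct lu_site;                           /* owned by the MDS stack */

struct toy_echo_device {
        struct lu_site *ed_lu_site;       /* borrowed, never copied */
};

static void toy_echo_attach_site(struct toy_echo_device *ed,
                                 struct lu_site *ls)
{
        ed->ed_lu_site = ls;              /* pointer assignment, no copy */
}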

Olaf, thank you for posting a fix, I'll review it soon.

Comment by Gerrit Updater [ 11/Aug/15 ]

Oleg Drokin (oleg.drokin@intel.com) merged in patch http://review.whamcloud.com/15657/
Subject: LU-6765 obdecho: don't copy lu_site
Project: fs/lustre-release
Branch: master
Current Patch Set:
Commit: c45c8ad26004a577dd7ad4270f2756e1f2943639

Comment by Niu Yawei (Inactive) [ 11/Aug/15 ]

landed for 2.8
