<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:42:53 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-4455] GPF crash on Lustre client acting as NFS server</title>
                <link>https://jira.whamcloud.com/browse/LU-4455</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Cluster head node acts as Lustre client, and re-exports Lustre mountpoint via NFS to its compute node NFS clients.&lt;/p&gt;

&lt;p&gt;When applications start using NFS with significant I/O (gigabit ethernet is sufficient), head node NFS services hang, and eventually the head node crashes with a GPF panic.  There are messages on the system console and in &quot;dmesg&quot; about both Lustre and nfsd issues (see attached vmcore-dmesg.txt file).&lt;/p&gt;</description>
                <environment>CentOS 6.4, kernel 2.6.32-358.18.1.el6.x86_64, Lustre client acting as NFS server for 2-node cluster.</environment>
        <key id="22664">LU-4455</key>
            <summary>GPF crash on Lustre client acting as NFS server</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="4">Incomplete</resolution>
                                        <assignee username="bobijam">Zhenyu Xu</assignee>
                                    <reporter username="hakanson">Marion Hakanson</reporter>
                        <labels>
                    </labels>
                <created>Wed, 8 Jan 2014 03:30:48 +0000</created>
                <updated>Wed, 5 Aug 2020 20:56:55 +0000</updated>
                            <resolved>Wed, 5 Aug 2020 20:56:55 +0000</resolved>
                                    <version>Lustre 2.4.1</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                    <comments>
                            <comment id="74573" author="keith" created="Wed, 8 Jan 2014 17:49:57 +0000"  >&lt;p&gt;These messages are indicative of an NFS issue:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;4&amp;gt;nfsd: peername failed (err 107)!
&amp;lt;4&amp;gt;nfsd: peername failed (err 107)!
&amp;lt;4&amp;gt;nfsd: peername failed (err 107)!
&amp;lt;4&amp;gt;nfsd: peername failed (err 107)!
&amp;lt;4&amp;gt;nfsd: peername failed (err 107)!
&amp;lt;4&amp;gt;nfsd: peername failed (err 107)!
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;They seem to cause a bad chain of events that leads to this GPF:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;3&amp;gt;LustreError: 18503:0:(mdc_locks.c:840:mdc_enqueue()) ldlm_cli_enqueue: -116
&amp;lt;3&amp;gt;LustreError: 18503:0:(vvp_io.c:1230:vvp_io_init()) lustre1: refresh file layout [0x200000406:0x15c1b:0x0] error -116.
&amp;lt;3&amp;gt;LustreError: 11-0: lustre1-MDT0000-mdc-ffff883ff68fe000: Communicating with 192.168.5.111@o2ib, operation ldlm_enqueue failed with -116.
&amp;lt;3&amp;gt;LustreError: 18503:0:(mdc_locks.c:840:mdc_enqueue()) ldlm_cli_enqueue: -116
&amp;lt;3&amp;gt;LustreError: 18503:0:(vvp_io.c:1230:vvp_io_init()) lustre1: refresh file layout [0x200000406:0x1b5c0:0x0] error -116.
&amp;lt;4&amp;gt;general protection fault: 0000 [#1] SMP 
&amp;lt;4&amp;gt;last sysfs file: /sys/devices/pci0000:ff/0000:ff:1e.7/irq
&amp;lt;4&amp;gt;CPU 4 
&amp;lt;4&amp;gt;Modules linked in: lmv(U) mgc(U) lustre(U) lov(U) osc(U) mdc(U) fid(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) sha512_generic sha256_generic crc32c_intel libcfs(U) nfsd exportfs autofs4 nfs lockd fscache auth_rpcgss nfs_acl sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf bonding 8021q garp stp llc ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables rdma_ucm rdma_cm iw_cm ib_addr ib_ipoib ib_cm ipv6 ib_uverbs ib_umad iw_nes libcrc32c iw_cxgb4 cxgb4 iw_cxgb3 cxgb3 ib_qib mlx4_en mlx4_ib ib_sa ib_mthca ib_mad ib_core raid1 mlx4_core ixgbe mdio igb dca ptp pps_core microcode sg i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support shpchp ext4 jbd2 mbcache isci libsas scsi_transport_sas sr_mod cdrom sd_mod crc_t10dif ahci wmi dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
&amp;lt;4&amp;gt;
&amp;lt;4&amp;gt;Pid: 16743, comm: nfsd Not tainted 2.6.32-358.18.1.el6.x86_64 #1 Intel Corporation S2600GZ/S2600GZ
&amp;lt;4&amp;gt;RIP: 0010:[&amp;lt;ffffffff8105690c&amp;gt;]  [&amp;lt;ffffffff8105690c&amp;gt;] update_curr+0x14c/0x1f0
&amp;lt;4&amp;gt;RSP: 0018:ffff880099a83db8  EFLAGS: 00010092
&amp;lt;4&amp;gt;RAX: ffff8840035baaa0 RBX: 0000000000013200 RCX: ffff88201ff12240
&amp;lt;4&amp;gt;RDX: ccccccccccce5fa4 RSI: 0000000000000000 RDI: ffff8840035baad8
&amp;lt;4&amp;gt;RBP: ffff880099a83de8 R08: ffffffff8160bb65 R09: 0000000000000000
&amp;lt;4&amp;gt;R10: 0000000000000010 R11: 0000000000000000 R12: ffff880099a96768
&amp;lt;4&amp;gt;R13: 00000000000f41c8 R14: 0001345e1adea991 R15: ffff8840035baaa0
&amp;lt;4&amp;gt;FS:  0000000000000000(0000) GS:ffff880099a80000(0000) knlGS:0000000000000000
&amp;lt;4&amp;gt;CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
&amp;lt;4&amp;gt;CR2: 00007f5b9ec29000 CR3: 00000040111c7000 CR4: 00000000001407e0
&amp;lt;4&amp;gt;DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
&amp;lt;4&amp;gt;DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
&amp;lt;4&amp;gt;Process nfsd (pid: 16743, threadinfo ffff883f26abc000, task ffff8840035baaa0)
&amp;lt;4&amp;gt;Stack:
&amp;lt;4&amp;gt; ffff880099a83dc8 ffffffff81013873 ffff8840035baad8 ffff880099a96768
&amp;lt;4&amp;gt;&amp;lt;d&amp;gt; 0000000000000000 0000000000000000 ffff880099a83e18 ffffffff81056ebb
&amp;lt;4&amp;gt;&amp;lt;d&amp;gt; ffff880099a96700 0000000000000004 0000000000016700 0000000000000004
&amp;lt;4&amp;gt;Call Trace:
&amp;lt;4&amp;gt; &amp;lt;IRQ&amp;gt; 
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81013873&amp;gt;] ? native_sched_clock+0x13/0x80
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81056ebb&amp;gt;] task_tick_fair+0xdb/0x160
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8105ad01&amp;gt;] scheduler_tick+0xc1/0x260
&amp;lt;4&amp;gt; [&amp;lt;ffffffff810a8060&amp;gt;] ? tick_sched_timer+0x0/0xc0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff810812fe&amp;gt;] update_process_times+0x6e/0x90
&amp;lt;4&amp;gt; [&amp;lt;ffffffff810a80c6&amp;gt;] tick_sched_timer+0x66/0xc0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8109b4ae&amp;gt;] __run_hrtimer+0x8e/0x1a0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff810a219f&amp;gt;] ? ktime_get_update_offsets+0x4f/0xd0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8107710f&amp;gt;] ? __do_softirq+0x11f/0x1e0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8109b816&amp;gt;] hrtimer_interrupt+0xe6/0x260
&amp;lt;4&amp;gt; [&amp;lt;ffffffff815177cb&amp;gt;] smp_apic_timer_interrupt+0x6b/0x9b
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100bb93&amp;gt;] apic_timer_interrupt+0x13/0x20
&amp;lt;4&amp;gt; &amp;lt;EOI&amp;gt; 
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa083774c&amp;gt;] ? cl_page_io_start+0xc/0x130 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08386be&amp;gt;] cl_page_prep+0x19e/0x210 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0b68057&amp;gt;] ? osc_page_transfer_add+0x77/0xb0 [osc]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0b6d144&amp;gt;] osc_io_submit+0x194/0x4a0 [osc]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08426dc&amp;gt;] cl_io_submit_rw+0x6c/0x160 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0c02191&amp;gt;] lov_io_submit+0x351/0xbc0 [lov]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa08426dc&amp;gt;] cl_io_submit_rw+0x6c/0x160 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0844cfe&amp;gt;] cl_io_read_page+0xae/0x170 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0838ac7&amp;gt;] ? cl_page_assume+0xf7/0x220 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0ca8056&amp;gt;] ll_readpage+0x96/0x1a0 [lustre]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b0cf8&amp;gt;] __generic_file_splice_read+0x3a8/0x560
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06fd977&amp;gt;] ? cfs_hash_bd_lookup_intent+0x37/0x130 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06fd977&amp;gt;] ? cfs_hash_bd_lookup_intent+0x37/0x130 [libcfs]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa083c71b&amp;gt;] ? cl_lock_fits_into+0x6b/0x190 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0bfc8cf&amp;gt;] ? lov_lock_fits_into+0x3ef/0x540 [lov]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa083c6a1&amp;gt;] ? cl_lock_mutex_tail+0x51/0x60 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0833825&amp;gt;] ? cl_env_info+0x15/0x20 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8109ca9f&amp;gt;] ? up+0x2f/0x50
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81075887&amp;gt;] ? current_fs_time+0x27/0x30
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811aeb90&amp;gt;] ? spd_release_page+0x0/0x20
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811b0efa&amp;gt;] generic_file_splice_read+0x4a/0x90
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0cd52a5&amp;gt;] vvp_io_read_start+0x3c5/0x470 [lustre]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa084283a&amp;gt;] cl_io_start+0x6a/0x140 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0846f74&amp;gt;] cl_io_loop+0xb4/0x1b0 [obdclass]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0c7b8cf&amp;gt;] ll_file_io_generic+0x33f/0x600 [lustre]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa0c7c2e0&amp;gt;] ll_file_splice_read+0xb0/0x1d0 [lustre]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811af15b&amp;gt;] do_splice_to+0x6b/0xa0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff811af45f&amp;gt;] splice_direct_to_actor+0xaf/0x1c0
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06a03b0&amp;gt;] ? nfsd_direct_splice_actor+0x0/0x20 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06a0e70&amp;gt;] nfsd_vfs_read+0x1a0/0x1c0 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06a24a0&amp;gt;] nfsd_read_file+0x90/0xb0 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06b60ef&amp;gt;] nfsd4_encode_read+0x13f/0x240 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06bbd46&amp;gt;] ? nfs4_preprocess_stateid_op+0x1f6/0x310 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06afd35&amp;gt;] nfsd4_encode_operation+0x75/0x180 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa06add35&amp;gt;] nfsd4_proc_compound+0x195/0x490 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa069b43e&amp;gt;] nfsd_dispatch+0xfe/0x240 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa057c614&amp;gt;] svc_process_common+0x344/0x640 [sunrpc]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81063410&amp;gt;] ? default_wake_function+0x0/0x20
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa057cc50&amp;gt;] svc_process+0x110/0x160 [sunrpc]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa069bb62&amp;gt;] nfsd+0xc2/0x160 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffffa069baa0&amp;gt;] ? nfsd+0x0/0x160 [nfsd]
&amp;lt;4&amp;gt; [&amp;lt;ffffffff81096a36&amp;gt;] kthread+0x96/0xa0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100c0ca&amp;gt;] child_rip+0xa/0x20
&amp;lt;4&amp;gt; [&amp;lt;ffffffff810969a0&amp;gt;] ? kthread+0x0/0xa0
&amp;lt;4&amp;gt; [&amp;lt;ffffffff8100c0c0&amp;gt;] ? child_rip+0x0/0x20
&amp;lt;4&amp;gt;Code: d2 74 34 48 8b 50 08 8b 5a 18 48 8b 90 10 09 00 00 48 8b 4a 50 48 85 c9 74 1d 48 63 db 66 90 48 8b 51 20 48 03 14 dd a0 de bf 81 &amp;lt;4c&amp;gt; 01 2a 48 8b 49 78 48 85 c9 75 e8 48 8b 98 68 07 00 00 48 85 
&amp;lt;1&amp;gt;RIP  [&amp;lt;ffffffff8105690c&amp;gt;] update_curr+0x14c/0x1f0
&amp;lt;4&amp;gt; RSP &amp;lt;ffff880099a83db8&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="74577" author="adilger" created="Wed, 8 Jan 2014 18:22:25 +0000"  >&lt;p&gt;It looks like this is a very deep stack in the NFSd-&amp;gt;Lustre IO path, and then there is an interrupt and it overflows the stack and crashes.  In particular, __generic_file_splice_read() is consuming about 400 bytes of stack by itself, most of it in &lt;tt&gt;*pages&lt;/tt&gt; and &lt;tt&gt;partial&lt;/tt&gt;, which is bad.&lt;/p&gt;

&lt;p&gt;Separately, it would be good to quiet the console messages for -ESTALE:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;3&amp;gt;LustreError: 16799:0:(file.c:2716:ll_inode_revalidate_fini()) lustre1: revalidate FID [0x20000057e:0x76:0x0] error: rc = -116
&amp;lt;3&amp;gt;LustreError: 11-0: lustre1-MDT0000-mdc-ffff883ff68fe000: Communicating with 192.168.5.111@o2ib, operation ldlm_enqueue failed with -116.
&amp;lt;3&amp;gt;LustreError: 18503:0:(mdc_locks.c:840:mdc_enqueue()) ldlm_cli_enqueue: -116
&amp;lt;3&amp;gt;LustreError: 18503:0:(vvp_io.c:1230:vvp_io_init()) lustre1: refresh file layout [0x200000405:0x13c34:0x0] error -116.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That is probably a result of NFS not being cache coherent and a file getting deleted while it is undergoing IO.&lt;/p&gt;</comment>
                            <comment id="74712" author="hakanson" created="Thu, 9 Jan 2014 23:35:57 +0000"  >&lt;p&gt;New crash.  Previous crash was with nfsd thread count at 8, while this crash was with the thread count upped to 128.  We have been advised to try 512, so that setting is in place going forward.&lt;/p&gt;</comment>
                            <comment id="74713" author="keith" created="Fri, 10 Jan 2014 00:02:31 +0000"  >&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;&amp;lt;4&amp;gt;NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory   &amp;lt;================================ NFS starts? 
&amp;lt;6&amp;gt;NFSD: starting 90-second grace period
&amp;lt;6&amp;gt;certmonger[5563]: segfault at 0 ip 0000003493527a96 sp 00007ffff778cf78 error 4 in libc-2.12.so[3493400000+18a000]
&amp;lt;5&amp;gt;Bridge firewalling registered
&amp;lt;6&amp;gt;LNet: HW CPU cores: 48, npartitions: 8
&amp;lt;6&amp;gt;alg: No test for crc32 (crc32-table)
&amp;lt;6&amp;gt;alg: No test for adler32 (adler32-zlib)
&amp;lt;6&amp;gt;alg: No test for crc32 (crc32-pclmul)
&amp;lt;5&amp;gt;padlock: VIA PadLock Hash Engine not detected.
&amp;lt;6&amp;gt;Lustre: Lustre: Build Version: 2.4.1-RC2--PRISTINE-2.6.32-358.18.1.el6.x86_64
&amp;lt;6&amp;gt;LNet: Added LNI 192.168.5.120@o2ib [8/512/0/180]
&amp;lt;6&amp;gt;Lustre: Layout lock feature supported.
&amp;lt;4&amp;gt;Lustre: Mounted lustre1-client            &amp;lt;=========================   NFS Starts
&amp;lt;4&amp;gt;nfsd: last server has exited, flushing export cache
&amp;lt;4&amp;gt;NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
&amp;lt;6&amp;gt;NFSD: starting 90-second grace period
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This second crash is very different. Do you hit this on startup or after some time?&lt;/p&gt;

&lt;p&gt;In general, I would wait to start nfsd until after Lustre has been mounted.&lt;/p&gt;

</comment>
                            <comment id="74715" author="hakanson" created="Fri, 10 Jan 2014 00:44:35 +0000"  >&lt;p&gt;Both crashes occurred after being up for some time (12+ hours).  Seems to happen after some period of steady I/O load from the NFS clients.&lt;/p&gt;

&lt;p&gt;Currently the Lustre mount does not happen automatically (neither on the head node nor for that path on the NFS clients).  I.e. none of the NFS clients are mounting that path until after someone manually mounts the Lustre path on the head node.&lt;/p&gt;

&lt;p&gt;By the way, my earlier comment about this most recent crash having happened with 128 nfsd threads active was mistaken.  There was another crash yesterday that I was unaware of until examining &quot;last&quot; logs, so today&apos;s crash was with 512 nfsd threads active (in case it makes a difference).&lt;/p&gt;</comment>
                            <comment id="78895" author="pjones" created="Mon, 10 Mar 2014 17:25:52 +0000"  >&lt;p&gt;Bobijam&lt;/p&gt;

&lt;p&gt;Could you please advise on this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="79370" author="keith" created="Fri, 14 Mar 2014 20:11:07 +0000"  >&lt;p&gt;It has been reported that &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3952&quot; title=&quot;llite_nfs.c:349:ll_get_parent()) ASSERTION( body-&amp;gt;valid &amp;amp; (0x00000001ULL) ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3952&quot;&gt;&lt;del&gt;LU-3952&lt;/del&gt;&lt;/a&gt; is related and may address this issue for Lustre 2.4 branches. &lt;/p&gt;</comment>
                            <comment id="82603" author="pjones" created="Mon, 28 Apr 2014 14:19:53 +0000"  >&lt;p&gt;Marion&lt;/p&gt;

&lt;p&gt;Have you been able to check whether this issue still exists with the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-3952&quot; title=&quot;llite_nfs.c:349:ll_get_parent()) ASSERTION( body-&amp;gt;valid &amp;amp; (0x00000001ULL) ) failed&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-3952&quot;&gt;&lt;del&gt;LU-3952&lt;/del&gt;&lt;/a&gt; fix in place?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="82690" author="hakanson" created="Mon, 28 Apr 2014 21:37:26 +0000"  >&lt;p&gt;Peter,&lt;/p&gt;

&lt;p&gt;No, we haven&apos;t arranged a downtime of the cluster to test that yet.&lt;/p&gt;

&lt;p&gt;Marion&lt;/p&gt;</comment>
                            <comment id="276753" author="adilger" created="Wed, 5 Aug 2020 20:56:55 +0000"  >&lt;p&gt;Close old ticket that has not been seen recently.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="13967" name="vmcore-dmesg-20140109.txt" size="120569" author="hakanson" created="Thu, 9 Jan 2014 23:35:57 +0000"/>
                            <attachment id="13964" name="vmcore-dmesg.txt" size="138086" author="hakanson" created="Wed, 8 Jan 2014 03:30:48 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10490" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>End date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Mon, 5 May 2014 03:30:48 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzwce7:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>12211</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10493" key="com.atlassian.jira.plugin.system.customfieldtypes:datepicker">
                        <customfieldname>Start date</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Wed, 8 Jan 2014 03:30:48 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>