<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:30:17 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-3021] replay-vbr test 11a: RIP: ldiskfs_mb_release_inode_pa+0x346/0x360 [ldiskfs]</title>
                <link>https://jira.whamcloud.com/browse/LU-3021</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;While running replay-vbr test 11a, unmounting the MDS hung and the following errors occurred on MDS:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;LDISKFS-fs error (device sdc5): ldiskfs_mb_release_inode_pa: pa free mismatch: [pa ffff8804165f3a58] [phy 77568] [logic 0] [len 2048] [free 2047] [error 0] [inode 13] [freed 2048]
Aborting journal on device sdc5-8.
Write to readonly device sdc (0x800025) bi_flags: f000000000000001, bi_vcnt: 1, bi_idx: 0, bi-&amp;gt;size: 4096, bi_cnt: 2, bi_private: ffff880216d0b9b8
LDISKFS-fs (sdc5): Remounting filesystem read-only
Write to readonly device sdc (0x800025) bi_flags: f000000000000001, bi_vcnt: 1, bi_idx: 0, bi-&amp;gt;size: 4096, bi_cnt: 2, bi_private: ffff88020ed844d8
LDISKFS-fs error (device sdc5): ldiskfs_mb_release_inode_pa: free 2048, pa_free 2047
------------[ cut here ]------------
kernel BUG at /var/lib/jenkins/workspace/lustre-b2_1/arch/x86_64/build_type/server/distro/el6/ib_stack/ofa/BUILD/BUILD/lustre-ldiskfs-3.3.0/ldiskfs/mballoc.c:3789!
invalid opcode: 0000 [#1] SMP 
last sysfs file: /sys/devices/pci0000:00/0000:00:14.4/0000:01:04.0/local_cpus
CPU 0 
Modules linked in: cmm(U) osd_ldiskfs(U) mdt(U) mdd(U) mds(U) fsfilt_ldiskfs(U) mgs(U) mgc(U) ldiskfs(U) lustre(U) lov(U) osc(U) lquota(U) mdc(U) fid(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) jbd2 nfs fscache mlx4_ib(U) mlx4_core(U) nfsd lockd nfs_acl auth_rpcgss exportfs autofs4 sunrpc cpufreq_ondemand powernow_k8 freq_table mperf ib_ipoib(U) rdma_ucm(U) ib_ucm(U) ib_uverbs(U) ib_umad(U) rdma_cm(U) ib_cm(U) iw_cm(U) ib_addr(U) ipv6 ib_sa(U) ib_mad(U) ib_core(U) igb dca microcode serio_raw k10temp amd64_edac_mod edac_core edac_mce_amd i2c_piix4 i2c_core sg shpchp ext3 jbd mbcache sd_mod crc_t10dif pata_acpi ata_generic pata_atiixp ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: libcfs]

Pid: 14217, comm: umount Not tainted 2.6.32-279.19.1.el6_lustre.x86_64 #1 Supermicro H8DGT/H8DGT
RIP: 0010:[&amp;lt;ffffffffa03c7ac6&amp;gt;]  [&amp;lt;ffffffffa03c7ac6&amp;gt;] ldiskfs_mb_release_inode_pa+0x346/0x360 [ldiskfs]
RSP: 0018:ffff88020c519a58  EFLAGS: 00010212
RAX: 00000000000007ff RBX: 0000000000000800 RCX: ffff880218f6c400
RDX: 0000000000000000 RSI: 0000000000000046 RDI: ffff8802151fc100
RBP: ffff88020c519b08 R08: ffffffff81c01a80 R09: 0000000000000000
R10: 0000000000000003 R11: 0000000000000000 R12: ffff8800ab859ef8
R13: ffff8800b84833a0 R14: 0000000000003801 R15: ffff8804165f3a58
FS:  00007fa7c0ea2740(0000) GS:ffff880028200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000003c8b873e10 CR3: 00000000a7379000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process umount (pid: 14217, threadinfo ffff88020c518000, task ffff880218df8080)
Stack:
 ffff880200000800 00000000000007ff ffff880000000000 000000000000000d
&amp;lt;d&amp;gt; 0000000000000800 0000000000000004 ffff88020c519a98 ffffffff811a8df6
&amp;lt;d&amp;gt; ffff880218f6c400 ffff88021dbe6800 ffff8804165f3a58 000000000000ff00
Call Trace:
 [&amp;lt;ffffffff811a8df6&amp;gt;] ? __wait_on_buffer+0x26/0x30
 [&amp;lt;ffffffffa03cb86e&amp;gt;] ldiskfs_discard_preallocations+0x1fe/0x490 [ldiskfs]
 [&amp;lt;ffffffffa03e3286&amp;gt;] ldiskfs_clear_inode+0x16/0x50 [ldiskfs]
 [&amp;lt;ffffffff81190c4c&amp;gt;] clear_inode+0xac/0x140
 [&amp;lt;ffffffff81190d20&amp;gt;] dispose_list+0x40/0x120
 [&amp;lt;ffffffff811911ca&amp;gt;] invalidate_inodes+0xea/0x190
 [&amp;lt;ffffffff8117859c&amp;gt;] generic_shutdown_super+0x4c/0xe0
 [&amp;lt;ffffffff81178661&amp;gt;] kill_block_super+0x31/0x50
 [&amp;lt;ffffffff81179670&amp;gt;] deactivate_super+0x70/0x90
 [&amp;lt;ffffffff811955df&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffffa0f912b4&amp;gt;] unlock_mntput+0x64/0x70 [obdclass]
 [&amp;lt;ffffffffa051b715&amp;gt;] ? cfs_waitq_init+0x15/0x20 [libcfs]
 [&amp;lt;ffffffffa0f993f3&amp;gt;] server_put_super+0x433/0x13e0 [obdclass]
 [&amp;lt;ffffffff811911d6&amp;gt;] ? invalidate_inodes+0xf6/0x190
 [&amp;lt;ffffffff811785ab&amp;gt;] generic_shutdown_super+0x5b/0xe0
 [&amp;lt;ffffffff81178696&amp;gt;] kill_anon_super+0x16/0x60
 [&amp;lt;ffffffffa0f8fa56&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
 [&amp;lt;ffffffff81179670&amp;gt;] deactivate_super+0x70/0x90
 [&amp;lt;ffffffff811955df&amp;gt;] mntput_no_expire+0xbf/0x110
 [&amp;lt;ffffffff81195f3b&amp;gt;] sys_umount+0x7b/0x3a0
 [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
Code: 55 c8 e9 39 fe ff ff 31 db 41 83 7f 4c 00 0f 84 7e fd ff ff 0f 0b eb fe 0f 0b eb fe 0f 0b 0f 1f 80 00 00 00 00 eb f7 0f 0b eb fe &amp;lt;0f&amp;gt; 0b 0f 1f 84 00 00 00 00 00 eb f6 66 66 66 66 66 2e 0f 1f 84 
RIP  [&amp;lt;ffffffffa03c7ac6&amp;gt;] ldiskfs_mb_release_inode_pa+0x346/0x360 [ldiskfs]
 RSP &amp;lt;ffff88020c519a58&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/29d0cb1e-943a-11e2-93c6-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/29d0cb1e-943a-11e2-93c6-52540035b04c&lt;/a&gt;&lt;/p&gt;</description>
                <environment>&lt;br/&gt;
Lustre Tag: v2_1_5_RC1&lt;br/&gt;
Lustre Build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/191/&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/191/&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.3/x86_64 (kernel version: 2.6.32_279.19.1.el6)&lt;br/&gt;
Network: IB (OFED 1.5.4)&lt;br/&gt;
ENABLE_QUOTA=yes&lt;br/&gt;
</environment>
        <key id="18066">LU-3021</key>
            <summary>replay-vbr test 11a: RIP: ldiskfs_mb_release_inode_pa+0x346/0x360 [ldiskfs]</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.whamcloud.com/images/icons/priorities/major.svg">Major</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="niu">Niu Yawei</assignee>
                                    <reporter username="yujian">Jian Yu</reporter>
                        <labels>
                    </labels>
                <created>Sun, 24 Mar 2013 12:40:42 +0000</created>
                <updated>Thu, 11 Jul 2013 18:36:09 +0000</updated>
                            <resolved>Thu, 11 Jul 2013 18:36:09 +0000</resolved>
                                    <version>Lustre 2.1.5</version>
                                    <fixVersion>Lustre 2.5.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>7</watches>
                                                                            <comments>
                            <comment id="54728" author="green" created="Sun, 24 Mar 2013 17:40:37 +0000"  >&lt;p&gt;This might be related to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-2228&quot; title=&quot;replay-ost-single 10: kernel BUG at mballoc.c:3784&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-2228&quot;&gt;&lt;del&gt;LU-2228&lt;/del&gt;&lt;/a&gt; that Jinshan has an abandoned patch for&lt;/p&gt;</comment>
                            <comment id="54729" author="green" created="Sun, 24 Mar 2013 18:00:46 +0000"  >&lt;p&gt;I see this was happening before too, in ORI-236 and in &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1948&quot; title=&quot;ldiskfs - MDS goes read-only (SWL)&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1948&quot;&gt;&lt;del&gt;LU-1948&lt;/del&gt;&lt;/a&gt;, so whatever the cause is, it seems to have been lurking for some time already.&lt;/p&gt;</comment>
                            <comment id="54730" author="pjones" created="Sun, 24 Mar 2013 18:22:53 +0000"  >&lt;p&gt;Niu&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;

&lt;p&gt;Yujian&lt;/p&gt;

&lt;p&gt;Could you please set up a test to run replay-vbr repeatedly for a day or two? Oleg suspects that this is an extremely rare scenario to hit and that it is most likely encountered during this test. Seeing whether it recurs, and if so how often, will help prove or disprove this theory.&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="54734" author="yujian" created="Mon, 25 Mar 2013 01:10:22 +0000"  >&lt;blockquote&gt;&lt;p&gt;Could you please set up a test to run replay-vbr repeatedly for a day or two? Oleg suspects that this is an extremely rare scenario to hit and that it is most likely encountered during this test. Seeing whether it recurs, and if so how often, will help prove or disprove this theory.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;The whole RHEL6 OFA test run was performed manually on Toro. After hitting the issue, I ran replay-vbr 11 and the entire suite separately again; both of them passed:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/64387648-94ae-11e2-ba8c-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/64387648-94ae-11e2-ba8c-52540035b04c&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sets/a38db92a-94e5-11e2-93c6-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/a38db92a-94e5-11e2-93c6-52540035b04c&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The RHEL6 TCP test run performed by autotest also passed:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/d5efc436-943a-11e2-93c6-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/d5efc436-943a-11e2-93c6-52540035b04c&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me run replay-vbr a few more times.&lt;/p&gt;</comment>
                            <comment id="54882" author="yujian" created="Wed, 27 Mar 2013 01:37:54 +0000"  >&lt;blockquote&gt;&lt;p&gt;Let me run replay-vbr a few more times.&lt;/p&gt;&lt;/blockquote&gt;

&lt;p&gt;Four more runs on the same test cluster where the original issue occurred all passed:&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/d3f4a208-9621-11e2-8c64-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/d3f4a208-9621-11e2-8c64-52540035b04c&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/e8d5ba36-9621-11e2-8c64-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/e8d5ba36-9621-11e2-8c64-52540035b04c&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/fe0bd9da-9621-11e2-8c64-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/fe0bd9da-9621-11e2-8c64-52540035b04c&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;https://maloo.whamcloud.com/test_sessions/10b29aa6-9622-11e2-8c64-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sessions/10b29aa6-9622-11e2-8c64-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="54889" author="niu" created="Wed, 27 Mar 2013 05:19:30 +0000"  >&lt;p&gt;This is probably caused by our setting the device read-only in the recovery test, which results in an inconsistency between the data on disk &amp;amp; the data in memory. If this is the case, I&apos;m not sure how to fix it yet.&lt;/p&gt;

&lt;p&gt;A warning message:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;Alloc from readonly device sdc (0x800025): [inode 13] [logic 1138] [goal 75633] [ll 0] [pl 0] [lr 0] [pr 0] [len 1] [flags 32]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;is just before the crash:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;LDISKFS-fs error (device sdc5): ldiskfs_mb_release_inode_pa: pa free mismatch: [pa ffff8804165f3a58] [phy 77568] [logic 0] [len 2048] [free 2047] [error 0] [inode 13] [freed 2048]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Looks like the device was turned read-only while a block was being allocated, so only the in-memory pa_free was decremented by 1, while the on-disk bitmap wasn&apos;t updated accordingly.&lt;/p&gt;</comment>
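The accounting failure described in the comment above (in-memory pa_free decremented while the read-only device drops the bitmap write) can be sketched as a toy model. Everything below is illustrative and hypothetical (class and function names are not Lustre source); it only mirrors the consistency check that produces the "pa free mismatch" error and the kernel BUG seen in the traces:

```python
# Illustrative sketch only (hypothetical names, not Lustre source): a toy
# model of the accounting bug described above. The in-memory pa_free counter
# is decremented on allocation, but once the device has been forced
# read-only the on-disk bitmap write is dropped, so the two disagree when
# the preallocation is released.

class Preallocation:
    """Stand-in for ldiskfs's per-inode preallocation descriptor."""
    def __init__(self, length):
        self.length = length
        self.pa_free = length            # in-memory free-block count

class Device:
    """Stand-in for the block device with its on-disk block bitmap."""
    def __init__(self, length):
        self.read_only = False
        self.bitmap = [0] * length       # 0 = free, 1 = allocated

def allocate_block(dev, pa, idx):
    pa.pa_free -= 1                      # in-memory state always updated...
    if not dev.read_only:                # ...but the disk write is dropped
        dev.bitmap[idx] = 1              #    on a read-only device

def release_preallocation(dev, pa):
    # Mirrors the consistency check in ldiskfs_mb_release_inode_pa(): count
    # the free bits in the bitmap and compare with pa_free; a mismatch hits
    # the kernel BUG in mballoc.c seen in the stack traces above.
    free = pa.length - sum(dev.bitmap)
    if free != pa.pa_free:
        raise AssertionError(f"pa free mismatch: free {free}, pa_free {pa.pa_free}")

dev = Device(2048)
pa = Preallocation(2048)
dev.read_only = True                     # replay test forces the device read-only
allocate_block(dev, pa, 0)               # pa_free -> 2047; bitmap unchanged
try:
    release_preallocation(dev, pa)
    mismatch = None
except AssertionError as err:
    mismatch = str(err)
print(mismatch)  # pa free mismatch: free 2048, pa_free 2047
```

This reproduces the shape of the logged error ("free 2048, pa_free 2047"): one allocation recorded only in memory leaves the on-disk count one block ahead of pa_free at release time.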
                            <comment id="55072" author="jay" created="Fri, 29 Mar 2013 05:16:23 +0000"  >&lt;p&gt;Do you think this patch will help: &lt;a href=&quot;http://review.whamcloud.com/5883&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5883&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="55074" author="niu" created="Fri, 29 Mar 2013 06:11:06 +0000"  >&lt;blockquote&gt;
&lt;p&gt;Do you think this patch will help: &lt;a href=&quot;http://review.whamcloud.com/5883&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/5883&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;I think ldiskfs_mb_release_inode_pa() is not only called on the umount path, and I&apos;m not sure whether there are any other such inconsistency problems in the code.&lt;/p&gt;</comment>
                            <comment id="59404" author="yujian" created="Tue, 28 May 2013 07:12:49 +0000"  >&lt;p&gt;Lustre b1_8 client build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b1_8/258&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b1_8/258&lt;/a&gt;&lt;br/&gt;
Lustre b2_1 server build: &lt;a href=&quot;http://build.whamcloud.com/job/lustre-b2_1/205&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://build.whamcloud.com/job/lustre-b2_1/205&lt;/a&gt;&lt;br/&gt;
Distro/Arch: RHEL6.4/x86_64&lt;br/&gt;
Network: IB (in-kernel OFED)&lt;/p&gt;

&lt;p&gt;The same issue occurred while running replay-single test 5:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;== replay-single test 5: |x| 220 open(O_CREAT) == 09:42:59 (1369672979)
CMD: client-20-ib sync
Filesystem           1K-blocks      Used Available Use% Mounted on
client-20-ib@o2ib:/lustre
                     216622196   3183928 202566192   2% /mnt/lustre
CMD: client-20-ib /usr/sbin/lctl --device %lustre-MDT0000 notransno
CMD: client-20-ib /usr/sbin/lctl --device %lustre-MDT0000 readonly
CMD: client-20-ib /usr/sbin/lctl mark mds REPLAY BARRIER on lustre-MDT0000
Failing mds on node client-20-ib
CMD: client-20-ib grep -c /mnt/mds&apos; &apos; /proc/mounts
Stopping /mnt/mds (opts:)
CMD: client-20-ib umount -d /mnt/mds
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Console log on MDS client-20-ib:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;09:43:16:Write to readonly device dm-0 (0xfd00000) bi_flags: f000000000000001, bi_vcnt: 1, bi_idx: 0, bi-&amp;gt;size: 4096, bi_cnt: 2, bi_private: ffff88031f082e30
09:43:16:LDISKFS-fs error (device dm-0): ldiskfs_mb_release_inode_pa: pa free mismatch: [pa ffff880314d936d8] [phy 75296] [logic 2048] [len 2048] [free 2046] [error 0] [inode 13] [freed 2048]
09:43:16:Aborting journal on device dm-0-8.
09:43:16:Write to readonly device dm-0 (0xfd00000) bi_flags: f000000000000001, bi_vcnt: 1, bi_idx: 0, bi-&amp;gt;size: 4096, bi_cnt: 2, bi_private: ffff88031a3c93a0
09:43:16:LDISKFS-fs (dm-0): Remounting filesystem read-only
09:43:16:Write to readonly device dm-0 (0xfd00000) bi_flags: f000000000000001, bi_vcnt: 1, bi_idx: 0, bi-&amp;gt;size: 4096, bi_cnt: 2, bi_private: ffff88031f082c28
09:43:16:LDISKFS-fs error (device dm-0): ldiskfs_mb_release_inode_pa: free 2048, pa_free 2046
09:43:16:------------[ cut here ]------------
09:43:16:kernel BUG at /var/lib/jenkins/workspace/lustre-b2_1/arch/x86_64/build_type/server/distro/el6/ib_stack/inkernel/BUILD/BUILD/lustre-ldiskfs-3.3.0/ldiskfs/mballoc.c:3790!
09:43:16:invalid opcode: 0000 [#1] SMP 
09:43:16:last sysfs file: /sys/devices/system/cpu/possible
09:43:16:CPU 3 
09:43:16:Modules linked in: cmm(U) osd_ldiskfs(U) mdt(U) mdd(U) mds(U) fsfilt_ldiskfs(U) mgs(U) mgc(U) lustre(U) lov(U) osc(U) lquota(U) mdc(U) fid(U) fld(U) ko2iblnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) ldiskfs(U) jbd2 nfs fscache nfsd lockd nfs_acl auth_rpcgss exportfs autofs4 sunrpc cpufreq_ondemand acpi_cpufreq freq_table mperf ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_addr ipv6 mlx4_ib ib_sa ib_mad ib_core mlx4_en mlx4_core igb ptp pps_core microcode sg serio_raw i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support ioatdma dca i7core_edac edac_core shpchp ext3 jbd mbcache sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
09:43:16:
09:43:16:Pid: 19154, comm: umount Not tainted 2.6.32-358.6.2.el6_lustre.ge10a8db.x86_64 #1 Supermicro X8DTT/X8DTT
09:43:16:RIP: 0010:[&amp;lt;ffffffffa0440ac6&amp;gt;]  [&amp;lt;ffffffffa0440ac6&amp;gt;] ldiskfs_mb_release_inode_pa+0x346/0x360 [ldiskfs]
09:43:16:RSP: 0018:ffff880313259a58  EFLAGS: 00010212
09:43:16:RAX: 00000000000007fe RBX: 0000000000000800 RCX: ffff8802e489c800
09:43:16:RDX: 0000000000000000 RSI: 0000000000000046 RDI: ffff88030aac2100
09:43:16:RBP: ffff880313259b08 R08: ffffffff81c07720 R09: 0000000000000000
09:43:16:R10: 0000000000000003 R11: 0000000000000000 R12: ffff88031694f0a0
09:43:16:R13: ffff88031a3c96e0 R14: 0000000000002801 R15: ffff880314d936d8
09:43:16:FS:  00007ff184d42740(0000) GS:ffff880032e60000(0000) knlGS:0000000000000000
09:43:16:CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
09:43:16:CR2: 00000000025f6ee0 CR3: 0000000310a5b000 CR4: 00000000000007e0
09:43:16:DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
09:43:16:DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
09:43:16:Process umount (pid: 19154, threadinfo ffff880313258000, task ffff8803346d0040)
09:43:16:Stack:
09:43:16: ffff880300000800 00000000000007fe ffff880300000000 000000000000000d
09:43:16:&amp;lt;d&amp;gt; 0000000000000800 0000000000000004 ffff880313259a98 ffffffff811b5fe6
09:43:16:&amp;lt;d&amp;gt; ffff8802e489c800 ffff8802e489c400 ffff880314d936d8 0000000000010620
09:43:16:Call Trace:
09:43:16: [&amp;lt;ffffffff811b5fe6&amp;gt;] ? __wait_on_buffer+0x26/0x30
09:43:16: [&amp;lt;ffffffffa044560e&amp;gt;] ldiskfs_discard_preallocations+0x1fe/0x490 [ldiskfs]
09:43:16: [&amp;lt;ffffffffa045a476&amp;gt;] ldiskfs_clear_inode+0x16/0x50 [ldiskfs]
09:43:16: [&amp;lt;ffffffff8119cfbc&amp;gt;] clear_inode+0xac/0x140
09:43:16: [&amp;lt;ffffffff8119d090&amp;gt;] dispose_list+0x40/0x120
09:43:16: [&amp;lt;ffffffff8119d53a&amp;gt;] invalidate_inodes+0xea/0x190
09:43:16: [&amp;lt;ffffffff8118333c&amp;gt;] generic_shutdown_super+0x4c/0xe0
09:43:16: [&amp;lt;ffffffff81183401&amp;gt;] kill_block_super+0x31/0x50
09:43:16: [&amp;lt;ffffffff81183bd7&amp;gt;] deactivate_super+0x57/0x80
09:43:16: [&amp;lt;ffffffff811a1bff&amp;gt;] mntput_no_expire+0xbf/0x110
09:43:16: [&amp;lt;ffffffffa05d0314&amp;gt;] unlock_mntput+0x64/0x70 [obdclass]
09:43:16: [&amp;lt;ffffffffa04a0715&amp;gt;] ? cfs_waitq_init+0x15/0x20 [libcfs]
09:43:16: [&amp;lt;ffffffffa05d8453&amp;gt;] server_put_super+0x433/0x13e0 [obdclass]
09:43:16: [&amp;lt;ffffffff8119d546&amp;gt;] ? invalidate_inodes+0xf6/0x190
09:43:16: [&amp;lt;ffffffff8118334b&amp;gt;] generic_shutdown_super+0x5b/0xe0
09:43:16: [&amp;lt;ffffffff81183436&amp;gt;] kill_anon_super+0x16/0x60
09:43:16: [&amp;lt;ffffffffa05ceab6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
09:43:16: [&amp;lt;ffffffff81183bd7&amp;gt;] deactivate_super+0x57/0x80
09:43:16: [&amp;lt;ffffffff811a1bff&amp;gt;] mntput_no_expire+0xbf/0x110
09:43:16: [&amp;lt;ffffffff811a266b&amp;gt;] sys_umount+0x7b/0x3a0
09:43:16: [&amp;lt;ffffffff8100b072&amp;gt;] system_call_fastpath+0x16/0x1b
09:43:16:Code: 55 c8 e9 39 fe ff ff 31 db 41 83 7f 4c 00 0f 84 7e fd ff ff 0f 0b eb fe 0f 0b eb fe 0f 0b 0f 1f 80 00 00 00 00 eb f7 0f 0b eb fe &amp;lt;0f&amp;gt; 0b 0f 1f 84 00 00 00 00 00 eb f6 66 66 66 66 66 2e 0f 1f 84 
09:43:16:RIP  [&amp;lt;ffffffffa0440ac6&amp;gt;] ldiskfs_mb_release_inode_pa+0x346/0x360 [ldiskfs]
09:43:16: RSP &amp;lt;ffff880313259a58&amp;gt;
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Maloo report: &lt;a href=&quot;https://maloo.whamcloud.com/test_sets/b4592a9a-c73b-11e2-ae4e-52540035b04c&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://maloo.whamcloud.com/test_sets/b4592a9a-c73b-11e2-ae4e-52540035b04c&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="62139" author="simmonsja" created="Thu, 11 Jul 2013 18:07:41 +0000"  >&lt;p&gt;This patch has landed.&lt;/p&gt;</comment>
                            <comment id="62146" author="pjones" created="Thu, 11 Jul 2013 18:36:09 +0000"  >&lt;p&gt;Landed for 2.5&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="18981">LU-3330</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvlzz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7346</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>