<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:20:30 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
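
A minimal sketch of building such a restricted request URL (the issue-xml view path below is an assumption based on standard JIRA URL conventions, not taken from this document):

```shell
# Hypothetical base URL for the XML view of this issue; substitute the
# actual URL of your request. Only the query-string handling is the point.
BASE="https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-1881/LU-1881.xml"
# Repeat the 'field' parameter once per field you want returned.
URL="${BASE}?field=key&field=summary"
echo "$URL"
```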
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1881] sanity test 116 soft lockup</title>
                <link>https://jira.whamcloud.com/browse/LU-1881</link>
                <project id="10000" key="LU">Lustre</project>
<description>&lt;p&gt;Running with a debug kernel, REFORMAT=yes SLOW=yes sh sanity.sh on the latest master,&lt;br/&gt;
test 116 locks up reliably (3 out of 3 runs so far):&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[ 1402.025523] Lustre: DEBUG MARKER: == sanity test 116: stripe QOS: free space balance ===================== 21:16:44 (1347326204)
[ 1458.081053] LNet: Service thread pid 1041 was inactive &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; 40.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; debugging purposes:
[ 1458.083593] Pid: 1041, comm: mdt01_003
[ 1458.084187]
[ 1458.084187] Call Trace:
[ 1458.084851]  [&amp;lt;ffffffffa0b0ed42&amp;gt;] ? mdt_handle_common+0x922/0x1740 [mdt]
[ 1458.085891]  [&amp;lt;ffffffffa0b0fc35&amp;gt;] mdt_regular_handle+0x15/0x20 [mdt]
[ 1458.086909]  [&amp;lt;ffffffffa048586f&amp;gt;] ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1458.088097]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1458.089098]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1458.090171]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1458.091005]  [&amp;lt;ffffffffa04883de&amp;gt;] ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1458.091991]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1458.092988]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1458.093775]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1458.094750]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1458.095723]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1458.096542]
[ 1458.096780] LustreError: dumping log to /tmp/lustre-log.1347326260.1041
[ 1500.096007] BUG: soft lockup - CPU#4 stuck &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; 67s! [mdt01_003:1041]
[ 1500.096998] Modules linked in: lustre obdfilter ost cmm mdt osd_ldiskfs fsfilt_ldiskfs ldiskfs mdd mds mgs lquota obdecho mgc lov osc mdc lmv fid fld ptlrpc obdclass lvfs ksocklnd lnet libcfs ext2 exportfs jbd sha512_generic sha256_generic sunrpc ipv6 microcode virtio_balloon virtio_net i2c_piix4 i2c_core ext4 mbcache jbd2 virtio_blk virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: libcfs]
[ 1500.097004] CPU 4
[ 1500.097004] Modules linked in: lustre obdfilter ost cmm mdt osd_ldiskfs fsfilt_ldiskfs ldiskfs mdd mds mgs lquota obdecho mgc lov osc mdc lmv fid fld ptlrpc obdclass lvfs ksocklnd lnet libcfs ext2 exportfs jbd sha512_generic sha256_generic sunrpc ipv6 microcode virtio_balloon virtio_net i2c_piix4 i2c_core ext4 mbcache jbd2 virtio_blk virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: libcfs]
[ 1500.097004]
[ 1500.097004] Pid: 1041, comm: mdt01_003 Not tainted 2.6.32-debug #3 Bochs Bochs
[ 1500.097004] RIP: 0010:[&amp;lt;ffffffff8127db02&amp;gt;]  [&amp;lt;ffffffff8127db02&amp;gt;] memmove+0x42/0x1a0
[ 1500.097004] RSP: 0018:ffff8802056dd498  EFLAGS: 00010282
[ 1500.097004] RAX: ffff880231b7c03c RBX: ffff8802056dd4e0 RCX: 00000000000000ee
[ 1500.097004] RDX: fffffffffff7bfec RSI: ffff880231bfffe8 RDI: ffff880231bffffc
[ 1500.097004] RBP: ffffffff8100bc0e R08: 0000000000000000 R09: 0000000000000000
[ 1500.097004] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88021d674004
[ 1500.097004] R13: ffff880231b7c028 R14: ffff880231b7c000 R15: 0000000000000002
[ 1500.097004] FS:  00007effccfcf700(0000) GS:ffff880028300000(0000) knlGS:0000000000000000
[ 1500.097004] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 1500.097004] CR2: ffff880231c00000 CR3: 000000025a3ed000 CR4: 00000000000006e0
[ 1500.097004] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1500.097004] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 1500.097004] &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; mdt01_003 (pid: 1041, threadinfo ffff8802056dc000, task ffff88024430a380)
[ 1500.097004] Stack:
[ 1500.097004]  ffffffffa0aab3a8 00000000000000ee 00000000000000ee ffff880294f9df58
[ 1500.097004] &amp;lt;d&amp;gt; ffff8802056dd640 ffff8802056dd608 ffff8802056dd6e8 0000000000000fd8
[ 1500.097004] &amp;lt;d&amp;gt; ffff88021d674000 ffff8802056dd500 ffffffffa0aab430 ffff88024430a380
[ 1500.097004] Call Trace:
[ 1500.097004]  [&amp;lt;ffffffffa0aab3a8&amp;gt;] ? iam_insert_key+0x68/0xb0 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0aab430&amp;gt;] ? iam_insert_key_lock+0x40/0x50 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0aae7ed&amp;gt;] ? iam_lfix_split+0x12d/0x150 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0aadc8d&amp;gt;] ? iam_it_rec_insert+0x20d/0x300 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0aade21&amp;gt;] ? iam_insert+0xa1/0xb0 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0aa9467&amp;gt;] ? osd_oi_insert+0x1e7/0x5b0 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0a9cef5&amp;gt;] ? __osd_oi_insert+0x145/0x1e0 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0aa1d48&amp;gt;] ? osd_object_ea_create+0x1d8/0x460 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa09721dc&amp;gt;] ? mdd_object_create_internal+0x13c/0x2a0 [mdd]
[ 1500.097004]  [&amp;lt;ffffffffa0992aba&amp;gt;] ? mdd_create+0x16ba/0x20c0 [mdd]
[ 1500.097004]  [&amp;lt;ffffffffa0a9fd7f&amp;gt;] ? osd_xattr_get+0x9f/0x360 [osd_ldiskfs]
[ 1500.097004]  [&amp;lt;ffffffffa0bb3557&amp;gt;] ? cml_create+0x97/0x250 [cmm]
[ 1500.097004]  [&amp;lt;ffffffffa0b25d0f&amp;gt;] ? mdt_version_get_save+0x8f/0xd0 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa0b398bf&amp;gt;] ? mdt_reint_open+0x108f/0x18a0 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa099860e&amp;gt;] ? md_ucred+0x1e/0x60 [mdd]
[ 1500.097004]  [&amp;lt;ffffffffa0b071c5&amp;gt;] ? mdt_ucred+0x15/0x20 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa0b23081&amp;gt;] ? mdt_reint_rec+0x41/0xe0 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa0b1c42a&amp;gt;] ? mdt_reint_internal+0x50a/0x810 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa0b1c9fd&amp;gt;] ? mdt_intent_reint+0x1ed/0x500 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa0b19041&amp;gt;] ? mdt_intent_policy+0x371/0x6a0 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa042fb9a&amp;gt;] ? ldlm_lock_enqueue+0x2ea/0x890 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffffa045744f&amp;gt;] ? ldlm_handle_enqueue0+0x48f/0xf70 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffffa0b18ad6&amp;gt;] ? mdt_enqueue+0x46/0x130 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa0b0ed42&amp;gt;] ? mdt_handle_common+0x922/0x1740 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa0b0fc35&amp;gt;] ? mdt_regular_handle+0x15/0x20 [mdt]
[ 1500.097004]  [&amp;lt;ffffffffa048586f&amp;gt;] ? ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1500.097004]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1500.097004]  [&amp;lt;ffffffffa04883de&amp;gt;] ? ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffff8100c14a&amp;gt;] ? child_rip+0xa/0x20
[ 1500.097004]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1500.097004]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1500.097004] Code: d0 49 39 f8 0f 8f 9f 00 00 00 48 81 fa a8 02 00 00 72 05 40 38 fe 74 41 48 83 ea 20 48 83 ea 20 4c 8b 1e 4c 8b 56 08 4c 8b 4e 10 &amp;lt;4c&amp;gt; 8b 46 18 48 8d 76 20 4c 89 1f 4c 89 57 08 4c 89 4f 10 4c 89
...
[ 1560.492304] INFO: task jbd2/loop0-8:32349 blocked &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; more than 120 seconds.
[ 1560.493247] &lt;span class=&quot;code-quote&quot;&gt;&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot;&lt;/span&gt; disables &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; message.
[ 1560.494302] jbd2/loop0-8  D 0000000000000002  5152 32349      2 0x00000000
[ 1560.495213]  ffff88026046dd10 0000000000000046 00000000000167c0 00000000000167c0
[ 1560.496212]  ffff880028310960 00000000000167c0 00000000000167c0 0000000000000286
[ 1560.497246]  ffff8802481ce738 ffff88026046dfd8 000000000000fba8 ffff8802481ce738
[ 1560.498252] Call Trace:
[ 1560.498574]  [&amp;lt;ffffffff8109004e&amp;gt;] ? prepare_to_wait+0x4e/0x80
[ 1560.499316]  [&amp;lt;ffffffffa0076afd&amp;gt;] jbd2_journal_commit_transaction+0x19d/0x16e0 [jbd2]
[ 1560.500335]  [&amp;lt;ffffffff81009310&amp;gt;] ? __switch_to+0xd0/0x320
[ 1560.501048]  [&amp;lt;ffffffff814fc4ae&amp;gt;] ? _spin_unlock_irq+0xe/0x20
[ 1560.501784]  [&amp;lt;ffffffff8108fd60&amp;gt;] ? autoremove_wake_function+0x0/0x40
[ 1560.502618]  [&amp;lt;ffffffffa007d627&amp;gt;] kjournald2+0xb7/0x210 [jbd2]
[ 1560.503379]  [&amp;lt;ffffffff8108fd60&amp;gt;] ? autoremove_wake_function+0x0/0x40
[ 1560.504209]  [&amp;lt;ffffffffa007d570&amp;gt;] ? kjournald2+0x0/0x210 [jbd2]
[ 1560.504993]  [&amp;lt;ffffffff8108fa16&amp;gt;] kthread+0x96/0xa0
[ 1560.505631]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1560.506274]  [&amp;lt;ffffffff8108f980&amp;gt;] ? kthread+0x0/0xa0
[ 1560.506919]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;On the first run I also hit disk corruption in the same test, before the soft lockup:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[ 4158.782961] Lustre: DEBUG MARKER: == sanity test 116: stripe QOS: free space balance ===================== 20:44:10 (1347324250)
[ 4178.796623] LDISKFS-fs error (device loop2): file system corruption: inode #8 logical block 3108 mapped to 7268 (size 1)
[ 4178.798364] Aborting journal on device loop2-8.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;The ost2 filesystem state also changed:&lt;br/&gt;
/dev/loop2             64Z   64Z  159M 100% /mnt/ost2&lt;/p&gt;

&lt;p&gt;After a reboot I ran fsck on the MDT filesystem and found this:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[root@rhel6 tests]# e2fsck -f -n /tmp/lustre-mdt1
e2fsck 1.41.90.wc1 (18-Mar-2011)
Pass 1: Checking inodes, blocks, and sizes
Inode 33445 is a zero-length directory.  Clear? no

Inode 33445, i_size is 4096, should be 0.  Fix? no

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Unconnected directory inode 33445 (...)
Connect to /lost+found? no

Pass 4: Checking reference counts
Unattached inode 33445
Connect to /lost+found? no

Pass 5: Checking group summary information

lustre-MDT0000: ********** WARNING: Filesystem still has errors **********
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="15869">LU-1881</key>
            <summary>sanity test 116 soft lockup</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="1" iconUrl="https://jira.whamcloud.com/images/icons/priorities/blocker.svg">Blocker</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="yong.fan">nasf</assignee>
                                    <reporter username="green">Oleg Drokin</reporter>
                        <labels>
                    </labels>
                <created>Mon, 10 Sep 2012 21:55:21 +0000</created>
                <updated>Mon, 3 Dec 2012 15:00:50 +0000</updated>
                            <resolved>Wed, 19 Sep 2012 09:56:37 +0000</resolved>
                                    <version>Lustre 2.3.0</version>
                                    <fixVersion>Lustre 2.3.0</fixVersion>
                    <fixVersion>Lustre 2.4.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                            <comments>
                            <comment id="44556" author="green" created="Mon, 10 Sep 2012 22:02:17 +0000"  >&lt;p&gt;Another bunch of traces from a third run:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[ 1411.405290] LNet: Service thread pid 3559 was inactive &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; 62.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; debugging purposes:
[ 1411.407862] Pid: 3559, comm: mdt01_003
[ 1411.408447]
[ 1411.408447] Call Trace:
[ 1411.409101]  [&amp;lt;ffffffffa0b0ed42&amp;gt;] ? mdt_handle_common+0x922/0x1740 [mdt]
[ 1411.410122]  [&amp;lt;ffffffffa0b0fc35&amp;gt;] mdt_regular_handle+0x15/0x20 [mdt]
[ 1411.411112]  [&amp;lt;ffffffffa048586f&amp;gt;] ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1411.412295]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1411.413300]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1411.414331]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1411.415154]  [&amp;lt;ffffffffa04883de&amp;gt;] ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1411.416136]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1411.417208]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1411.418228]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1411.419448]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1411.420654]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1411.421667]
[ 1411.421987] LustreError: dumping log to /tmp/lustre-log.1347328442.3559
[ 1420.533547] LNet: Service thread pid 2359 was inactive &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; 40.00s. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; debugging purposes:
[ 1420.536821] Pid: 2359, comm: ll_ost_io01_001
[ 1420.537654]
[ 1420.537655] Call Trace:
[ 1420.538463]  [&amp;lt;ffffffff81125463&amp;gt;] ? __alloc_pages_nodemask+0x123/0x9e0
[ 1420.539739]  [&amp;lt;ffffffff8109004e&amp;gt;] ? prepare_to_wait+0x4e/0x80
[ 1420.540909]  [&amp;lt;ffffffffa0076335&amp;gt;] do_get_write_access+0x2b5/0x550 [jbd2]
[ 1420.542246]  [&amp;lt;ffffffff8108fda0&amp;gt;] ? wake_bit_function+0x0/0x50
[ 1420.543397]  [&amp;lt;ffffffffa0076751&amp;gt;] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
[ 1420.544875]  [&amp;lt;ffffffffa09f54b8&amp;gt;] __ldiskfs_journal_get_write_access+0x38/0x80 [ldiskfs]
[ 1420.546465]  [&amp;lt;ffffffffa0a09442&amp;gt;] ldiskfs_mb_mark_diskspace_used+0xf2/0x300 [ldiskfs]
[ 1420.548030]  [&amp;lt;ffffffffa0a10e2f&amp;gt;] ldiskfs_mb_new_blocks+0x2af/0x5b0 [ldiskfs]
[ 1420.549478]  [&amp;lt;ffffffffa09f753e&amp;gt;] ? ldiskfs_ext_find_extent+0x2ce/0x330 [ldiskfs]
[ 1420.550966]  [&amp;lt;ffffffffa0a7a1da&amp;gt;] ldiskfs_ext_new_extent_cb+0x59a/0x6d0 [fsfilt_ldiskfs]
[ 1420.552845]  [&amp;lt;ffffffffa09f76ef&amp;gt;] ldiskfs_ext_walk_space+0x14f/0x340 [ldiskfs]
[ 1420.554682]  [&amp;lt;ffffffffa0a79c40&amp;gt;] ? ldiskfs_ext_new_extent_cb+0x0/0x6d0 [fsfilt_ldiskfs]
[ 1420.556445]  [&amp;lt;ffffffffa0a79968&amp;gt;] fsfilt_map_nblocks+0xd8/0x100 [fsfilt_ldiskfs]
[ 1420.557642]  [&amp;lt;ffffffffa0a79aa3&amp;gt;] fsfilt_ldiskfs_map_ext_inode_pages+0x113/0x220 [fsfilt_ldiskfs]
[ 1420.559032]  [&amp;lt;ffffffff814fa75e&amp;gt;] ? mutex_unlock+0xe/0x10
[ 1420.559858]  [&amp;lt;ffffffffa0a79c35&amp;gt;] fsfilt_ldiskfs_map_inode_pages+0x85/0x90 [fsfilt_ldiskfs]
[ 1420.561187]  [&amp;lt;ffffffffa05cae3b&amp;gt;] filter_alloc_iobuf+0x8fb/0x11f0 [obdfilter]
[ 1420.562304]  [&amp;lt;ffffffffa05cc9ec&amp;gt;] filter_commitrw_write+0x12bc/0x2eb8 [obdfilter]
[ 1420.563443]  [&amp;lt;ffffffff8116145a&amp;gt;] ? cache_alloc_debugcheck_after+0x14a/0x210
[ 1420.564548]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.565574]  [&amp;lt;ffffffff81160af6&amp;gt;] ? kfree_debugcheck+0x16/0x40
[ 1420.566476]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.567469]  [&amp;lt;ffffffffa05bfea5&amp;gt;] filter_commitrw+0x285/0x2b0 [obdfilter]
[ 1420.568538]  [&amp;lt;ffffffff814fc4fe&amp;gt;] ? _spin_unlock+0xe/0x10
[ 1420.569390]  [&amp;lt;ffffffffa0be3bc8&amp;gt;] obd_commitrw+0x128/0x3d0 [ost]
[ 1420.570316]  [&amp;lt;ffffffffa0beb1e9&amp;gt;] ost_brw_write+0xd29/0x1610 [ost]
[ 1420.571276]  [&amp;lt;ffffffff8127c326&amp;gt;] ? vsnprintf+0x2b6/0x5f0
[ 1420.572146]  [&amp;lt;ffffffffa0437fa0&amp;gt;] ? target_bulk_timeout+0x0/0xc0 [ptlrpc]
[ 1420.573215]  [&amp;lt;ffffffffa0bf0c26&amp;gt;] ost_handle+0x3096/0x4320 [ost]
[ 1420.574175]  [&amp;lt;ffffffffa0ca23f4&amp;gt;] ? libcfs_id2str+0x74/0xb0 [libcfs]
[ 1420.575154]  [&amp;lt;ffffffffa048586f&amp;gt;] ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1420.576324]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1420.577314]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1420.578359]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1420.579165]  [&amp;lt;ffffffffa04883de&amp;gt;] ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1420.580147]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.581128]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1420.581942]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.582895]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.583854]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1420.584658]
[ 1420.584901] LustreError: dumping log to /tmp/lustre-log.1347328451.2359
[ 1420.586271] Pid: 6961, comm: ll_ost_io00_005
[ 1420.586940]
[ 1420.586940] Call Trace:
[ 1420.587546]  [&amp;lt;ffffffff8108fd4f&amp;gt;] ? wake_up_bit+0x2f/0x40
[ 1420.588372]  [&amp;lt;ffffffff8109004e&amp;gt;] ? prepare_to_wait+0x4e/0x80
[ 1420.589294]  [&amp;lt;ffffffffa0076335&amp;gt;] do_get_write_access+0x2b5/0x550 [jbd2]
[ 1420.590334]  [&amp;lt;ffffffff8108fda0&amp;gt;] ? wake_bit_function+0x0/0x50
[ 1420.591233]  [&amp;lt;ffffffffa09ffee6&amp;gt;] ? ldiskfs_mark_iloc_dirty+0x376/0x5d0 [ldiskfs]
[ 1420.592390]  [&amp;lt;ffffffffa0076751&amp;gt;] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
[ 1420.593581]  [&amp;lt;ffffffffa09f54b8&amp;gt;] __ldiskfs_journal_get_write_access+0x38/0x80 [ldiskfs]
[ 1420.594845]  [&amp;lt;ffffffffa0a09442&amp;gt;] ldiskfs_mb_mark_diskspace_used+0xf2/0x300 [ldiskfs]
[ 1420.596048]  [&amp;lt;ffffffff814fc4fe&amp;gt;] ? _spin_unlock+0xe/0x10
[ 1420.596887]  [&amp;lt;ffffffffa0a10e2f&amp;gt;] ldiskfs_mb_new_blocks+0x2af/0x5b0 [ldiskfs]
[ 1420.598071]  [&amp;lt;ffffffffa09f753e&amp;gt;] ? ldiskfs_ext_find_extent+0x2ce/0x330 [ldiskfs]
[ 1420.599223]  [&amp;lt;ffffffffa0a7a1da&amp;gt;] ldiskfs_ext_new_extent_cb+0x59a/0x6d0 [fsfilt_ldiskfs]
[ 1420.600481]  [&amp;lt;ffffffffa09f76ef&amp;gt;] ldiskfs_ext_walk_space+0x14f/0x340 [ldiskfs]
[ 1420.601621]  [&amp;lt;ffffffffa0a79c40&amp;gt;] ? ldiskfs_ext_new_extent_cb+0x0/0x6d0 [fsfilt_ldiskfs]
[ 1420.602892]  [&amp;lt;ffffffffa0a79968&amp;gt;] fsfilt_map_nblocks+0xd8/0x100 [fsfilt_ldiskfs]
[ 1420.604051]  [&amp;lt;ffffffffa0a79aa3&amp;gt;] fsfilt_ldiskfs_map_ext_inode_pages+0x113/0x220 [fsfilt_ldiskfs]
[ 1420.605478]  [&amp;lt;ffffffff814fa75e&amp;gt;] ? mutex_unlock+0xe/0x10
[ 1420.606305]  [&amp;lt;ffffffffa0a79c35&amp;gt;] fsfilt_ldiskfs_map_inode_pages+0x85/0x90 [fsfilt_ldiskfs]
[ 1420.607590]  [&amp;lt;ffffffffa05cae3b&amp;gt;] filter_alloc_iobuf+0x8fb/0x11f0 [obdfilter]
[ 1420.608693]  [&amp;lt;ffffffffa05cc9ec&amp;gt;] filter_commitrw_write+0x12bc/0x2eb8 [obdfilter]
[ 1420.609845]  [&amp;lt;ffffffff8123d76c&amp;gt;] ? crypto_create_tfm+0x3c/0xe0
[ 1420.610749]  [&amp;lt;ffffffff8116145a&amp;gt;] ? cache_alloc_debugcheck_after+0x14a/0x210
[ 1420.611831]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.612856]  [&amp;lt;ffffffff81160af6&amp;gt;] ? kfree_debugcheck+0x16/0x40
[ 1420.613800]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.614783]  [&amp;lt;ffffffffa05bfea5&amp;gt;] filter_commitrw+0x285/0x2b0 [obdfilter]
[ 1420.615837]  [&amp;lt;ffffffff814fc4fe&amp;gt;] ? _spin_unlock+0xe/0x10
[ 1420.616704]  [&amp;lt;ffffffffa0be3bc8&amp;gt;] obd_commitrw+0x128/0x3d0 [ost]
[ 1420.617658]  [&amp;lt;ffffffffa0beb1e9&amp;gt;] ost_brw_write+0xd29/0x1610 [ost]
[ 1420.618611]  [&amp;lt;ffffffff8127c326&amp;gt;] ? vsnprintf+0x2b6/0x5f0
[ 1420.619467]  [&amp;lt;ffffffffa0437fa0&amp;gt;] ? target_bulk_timeout+0x0/0xc0 [ptlrpc]
[ 1420.620536]  [&amp;lt;ffffffffa0bf0c26&amp;gt;] ost_handle+0x3096/0x4320 [ost]
[ 1420.621512]  [&amp;lt;ffffffffa0ca23f4&amp;gt;] ? libcfs_id2str+0x74/0xb0 [libcfs]
[ 1420.622518]  [&amp;lt;ffffffffa048586f&amp;gt;] ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1420.623691]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1420.624702]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1420.625752]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1420.626577]  [&amp;lt;ffffffffa04883de&amp;gt;] ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1420.627560]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.628584]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1420.629406]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.630398]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.631363]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1420.632186]
[ 1420.632421] Pid: 3346, comm: ll_ost_io00_003
[ 1420.633119]
[ 1420.633119] Call Trace:
[ 1420.633737]  [&amp;lt;ffffffffa09f527b&amp;gt;] ? __ldiskfs_handle_dirty_metadata+0x7b/0x100 [ldiskfs]
[ 1420.635001]  [&amp;lt;ffffffffa09ffee6&amp;gt;] ? ldiskfs_mark_iloc_dirty+0x376/0x5d0 [ldiskfs]
[ 1420.636144]  [&amp;lt;ffffffff814fabd8&amp;gt;] __mutex_lock_slowpath+0x128/0x2c0
[ 1420.637132]  [&amp;lt;ffffffff814fada1&amp;gt;] mutex_lock+0x31/0x50
[ 1420.637963]  [&amp;lt;ffffffffa0a08f0b&amp;gt;] ldiskfs_mb_initialize_context+0x17b/0x1f0 [ldiskfs]
[ 1420.639154]  [&amp;lt;ffffffffa0a10d09&amp;gt;] ldiskfs_mb_new_blocks+0x189/0x5b0 [ldiskfs]
[ 1420.640253]  [&amp;lt;ffffffffa09f753e&amp;gt;] ? ldiskfs_ext_find_extent+0x2ce/0x330 [ldiskfs]
[ 1420.641422]  [&amp;lt;ffffffffa0a7a1da&amp;gt;] ldiskfs_ext_new_extent_cb+0x59a/0x6d0 [fsfilt_ldiskfs]
[ 1420.642677]  [&amp;lt;ffffffffa09f76ef&amp;gt;] ldiskfs_ext_walk_space+0x14f/0x340 [ldiskfs]
[ 1420.643787]  [&amp;lt;ffffffffa0a79c40&amp;gt;] ? ldiskfs_ext_new_extent_cb+0x0/0x6d0 [fsfilt_ldiskfs]
[ 1420.645080]  [&amp;lt;ffffffffa0a79968&amp;gt;] fsfilt_map_nblocks+0xd8/0x100 [fsfilt_ldiskfs]
[ 1420.646250]  [&amp;lt;ffffffffa0a79aa3&amp;gt;] fsfilt_ldiskfs_map_ext_inode_pages+0x113/0x220 [fsfilt_ldiskfs]
[ 1420.647617]  [&amp;lt;ffffffff814fa75e&amp;gt;] ? mutex_unlock+0xe/0x10
[ 1420.648465]  [&amp;lt;ffffffffa0a79c35&amp;gt;] fsfilt_ldiskfs_map_inode_pages+0x85/0x90 [fsfilt_ldiskfs]
[ 1420.649777]  [&amp;lt;ffffffffa05cae3b&amp;gt;] filter_alloc_iobuf+0x8fb/0x11f0 [obdfilter]
[ 1420.650881]  [&amp;lt;ffffffffa05cc9ec&amp;gt;] filter_commitrw_write+0x12bc/0x2eb8 [obdfilter]
[ 1420.652058]  [&amp;lt;ffffffff8123d76c&amp;gt;] ? crypto_create_tfm+0x3c/0xe0
[ 1420.652994]  [&amp;lt;ffffffff8116145a&amp;gt;] ? cache_alloc_debugcheck_after+0x14a/0x210
[ 1420.654112]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.655108]  [&amp;lt;ffffffff81160af6&amp;gt;] ? kfree_debugcheck+0x16/0x40
[ 1420.656015]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.657029]  [&amp;lt;ffffffffa05bfea5&amp;gt;] filter_commitrw+0x285/0x2b0 [obdfilter]
[ 1420.658070]  [&amp;lt;ffffffff814fc4fe&amp;gt;] ? _spin_unlock+0xe/0x10
[ 1420.658894]  [&amp;lt;ffffffffa0be3bc8&amp;gt;] obd_commitrw+0x128/0x3d0 [ost]
[ 1420.659819]  [&amp;lt;ffffffffa0beb1e9&amp;gt;] ost_brw_write+0xd29/0x1610 [ost]
[ 1420.660789]  [&amp;lt;ffffffff8127c326&amp;gt;] ? vsnprintf+0x2b6/0x5f0
[ 1420.661670]  [&amp;lt;ffffffffa0437fa0&amp;gt;] ? target_bulk_timeout+0x0/0xc0 [ptlrpc]
[ 1420.662739]  [&amp;lt;ffffffffa0bf0c26&amp;gt;] ost_handle+0x3096/0x4320 [ost]
[ 1420.663682]  [&amp;lt;ffffffffa0ca23f4&amp;gt;] ? libcfs_id2str+0x74/0xb0 [libcfs]
[ 1420.664700]  [&amp;lt;ffffffffa048586f&amp;gt;] ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1420.665900]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1420.666879]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1420.667936]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1420.668771]  [&amp;lt;ffffffffa04883de&amp;gt;] ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1420.669792]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.670767]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1420.671562]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.672569]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.673537]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1420.674325]
[ 1420.674562] Pid: 2355, comm: ll_ost_io00_000
[ 1420.675240]
[ 1420.675240] Call Trace:
[ 1420.675856]  [&amp;lt;ffffffffa0076335&amp;gt;] do_get_write_access+0x2b5/0x550 [jbd2]
[ 1420.676914]  [&amp;lt;ffffffff8108fda0&amp;gt;] ? wake_bit_function+0x0/0x50
[ 1420.677841]  [&amp;lt;ffffffffa09ffee6&amp;gt;] ? ldiskfs_mark_iloc_dirty+0x376/0x5d0 [ldiskfs]
[ 1420.679019]  [&amp;lt;ffffffffa0076751&amp;gt;] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
[ 1420.680163]  [&amp;lt;ffffffffa09f54b8&amp;gt;] __ldiskfs_journal_get_write_access+0x38/0x80 [ldiskfs]
[ 1420.681444]  [&amp;lt;ffffffffa0a09442&amp;gt;] ldiskfs_mb_mark_diskspace_used+0xf2/0x300 [ldiskfs]
[ 1420.682678]  [&amp;lt;ffffffff814fc4fe&amp;gt;] ? _spin_unlock+0xe/0x10
[ 1420.683532]  [&amp;lt;ffffffffa0a10e2f&amp;gt;] ldiskfs_mb_new_blocks+0x2af/0x5b0 [ldiskfs]
[ 1420.684649]  [&amp;lt;ffffffffa09f753e&amp;gt;] ? ldiskfs_ext_find_extent+0x2ce/0x330 [ldiskfs]
[ 1420.685832]  [&amp;lt;ffffffffa0a7a1da&amp;gt;] ldiskfs_ext_new_extent_cb+0x59a/0x6d0 [fsfilt_ldiskfs]
[ 1420.687092]  [&amp;lt;ffffffffa09f76ef&amp;gt;] ldiskfs_ext_walk_space+0x14f/0x340 [ldiskfs]
[ 1420.688206]  [&amp;lt;ffffffffa0a79c40&amp;gt;] ? ldiskfs_ext_new_extent_cb+0x0/0x6d0 [fsfilt_ldiskfs]
[ 1420.689484]  [&amp;lt;ffffffffa0a79968&amp;gt;] fsfilt_map_nblocks+0xd8/0x100 [fsfilt_ldiskfs]
[ 1420.690618]  [&amp;lt;ffffffffa0a79aa3&amp;gt;] fsfilt_ldiskfs_map_ext_inode_pages+0x113/0x220 [fsfilt_ldiskfs]
[ 1420.691987]  [&amp;lt;ffffffff814fa75e&amp;gt;] ? mutex_unlock+0xe/0x10
[ 1420.692818]  [&amp;lt;ffffffffa0a79c35&amp;gt;] fsfilt_ldiskfs_map_inode_pages+0x85/0x90 [fsfilt_ldiskfs]
[ 1420.694142]  [&amp;lt;ffffffffa05cae3b&amp;gt;] filter_alloc_iobuf+0x8fb/0x11f0 [obdfilter]
[ 1420.695230]  [&amp;lt;ffffffffa05cc9ec&amp;gt;] filter_commitrw_write+0x12bc/0x2eb8 [obdfilter]
[ 1420.696373]  [&amp;lt;ffffffff8123d76c&amp;gt;] ? crypto_create_tfm+0x3c/0xe0
[ 1420.697323]  [&amp;lt;ffffffff8116145a&amp;gt;] ? cache_alloc_debugcheck_after+0x14a/0x210
[ 1420.698425]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.699417]  [&amp;lt;ffffffff81160af6&amp;gt;] ? kfree_debugcheck+0x16/0x40
[ 1420.700320]  [&amp;lt;ffffffff8116175e&amp;gt;] ? cache_free_debugcheck+0x1be/0x360
[ 1420.701364]  [&amp;lt;ffffffffa05bfea5&amp;gt;] filter_commitrw+0x285/0x2b0 [obdfilter]
[ 1420.702431]  [&amp;lt;ffffffff814fc4fe&amp;gt;] ? _spin_unlock+0xe/0x10
[ 1420.703289]  [&amp;lt;ffffffffa0be3bc8&amp;gt;] obd_commitrw+0x128/0x3d0 [ost]
[ 1420.704216]  [&amp;lt;ffffffffa0beb1e9&amp;gt;] ost_brw_write+0xd29/0x1610 [ost]
[ 1420.705192]  [&amp;lt;ffffffff8127c326&amp;gt;] ? vsnprintf+0x2b6/0x5f0
[ 1420.706063]  [&amp;lt;ffffffffa0437fa0&amp;gt;] ? target_bulk_timeout+0x0/0xc0 [ptlrpc]
[ 1420.707114]  [&amp;lt;ffffffffa0bf0c26&amp;gt;] ost_handle+0x3096/0x4320 [ost]
[ 1420.708055]  [&amp;lt;ffffffffa0ca23f4&amp;gt;] ? libcfs_id2str+0x74/0xb0 [libcfs]
[ 1420.709082]  [&amp;lt;ffffffffa048586f&amp;gt;] ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1420.710272]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1420.711248]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1420.712314]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1420.713165]  [&amp;lt;ffffffffa04883de&amp;gt;] ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1420.714148]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.715099]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1420.715871]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.716895]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1420.717902]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1420.718689]
[ 1420.718942] LNet: Service thread pid 2357 was inactive &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; 40.18s. Watchdog stack traces are limited to 3 per 300 seconds, skipping &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; one.
...
[ 1512.104006] BUG: soft lockup - CPU#5 stuck &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; 67s! [mdt01_003:3559]
[ 1512.104819] Modules linked in: lustre obdfilter ost cmm mdt osd_ldiskfs fsfilt_ldiskfs ldiskfs mdd mds mgs lquota obdecho mgc lov osc mdc lmv fid fld ptlrpc obdclass lvfs ksocklnd lnet libcfs ext2 exportfs jbd sha512_generic sha256_generic sunrpc ipv6 microcode virtio_balloon virtio_net i2c_piix4 i2c_core ext4 mbcache jbd2 virtio_blk virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: libcfs]
[ 1512.105255] CPU 5
[ 1512.105255] Modules linked in: lustre obdfilter ost cmm mdt osd_ldiskfs fsfilt_ldiskfs ldiskfs mdd mds mgs lquota obdecho mgc lov osc mdc lmv fid fld ptlrpc obdclass lvfs ksocklnd lnet libcfs ext2 exportfs jbd sha512_generic sha256_generic sunrpc ipv6 microcode virtio_balloon virtio_net i2c_piix4 i2c_core ext4 mbcache jbd2 virtio_blk virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: libcfs]
[ 1512.105255]
[ 1512.105255] Pid: 3559, comm: mdt01_003 Not tainted 2.6.32-debug #3 Bochs Bochs
[ 1512.105255] RIP: 0010:[&amp;lt;ffffffff8127db02&amp;gt;]  [&amp;lt;ffffffff8127db02&amp;gt;] memmove+0x42/0x1a0
[ 1512.105255] RSP: 0018:ffff8802160d5498  EFLAGS: 00010282
[ 1512.105255] RAX: ffff8801eab1d03c RBX: ffff8802160d54e0 RCX: 00000000000000ee
[ 1512.105255] RDX: fffffffffff5cfec RSI: ffff8801eabbffe8 RDI: ffff8801eabbfffc
[ 1512.105255] RBP: ffffffff8100bc0e R08: 0000000000000000 R09: 0000000000000000
[ 1512.105255] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88021f70b004
[ 1512.105255] R13: ffff8801eab1d028 R14: ffff8801eab1d000 R15: 0000000000000002
[ 1512.105255] FS:  00007f784cf7b700(0000) GS:ffff880028340000(0000) knlGS:0000000000000000
[ 1512.105255] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 1512.105255] CR2: ffff8801eabc0000 CR3: 0000000001a25000 CR4: 00000000000006e0
[ 1512.105255] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1512.105255] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 1512.105255] &lt;span class=&quot;code-object&quot;&gt;Process&lt;/span&gt; mdt01_003 (pid: 3559, threadinfo ffff8802160d4000, task ffff88025cbf41c0)
[ 1512.105255] Stack:
[ 1512.105255]  ffffffffa0aab3a8 00000000000000ee 00000000000000ee ffff88022cc66f58
[ 1512.105255] &amp;lt;d&amp;gt; ffff8802160d5640 ffff8802160d5608 ffff8802160d56e8 0000000000000fd8
[ 1512.105255] &amp;lt;d&amp;gt; ffff88021f70b000 ffff8802160d5500 ffffffffa0aab430 ffff88025cbf41c0
[ 1512.105255] Call Trace:
[ 1512.105255]  [&amp;lt;ffffffffa0aab3a8&amp;gt;] ? iam_insert_key+0x68/0xb0 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0aab430&amp;gt;] ? iam_insert_key_lock+0x40/0x50 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0aae7ed&amp;gt;] ? iam_lfix_split+0x12d/0x150 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0aadc8d&amp;gt;] ? iam_it_rec_insert+0x20d/0x300 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0aade21&amp;gt;] ? iam_insert+0xa1/0xb0 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0aa9467&amp;gt;] ? osd_oi_insert+0x1e7/0x5b0 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0a9cef5&amp;gt;] ? __osd_oi_insert+0x145/0x1e0 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0aa1d48&amp;gt;] ? osd_object_ea_create+0x1d8/0x460 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa09721dc&amp;gt;] ? mdd_object_create_internal+0x13c/0x2a0 [mdd]
[ 1512.105255]  [&amp;lt;ffffffffa0992aba&amp;gt;] ? mdd_create+0x16ba/0x20c0 [mdd]
[ 1512.105255]  [&amp;lt;ffffffffa0a9fd7f&amp;gt;] ? osd_xattr_get+0x9f/0x360 [osd_ldiskfs]
[ 1512.105255]  [&amp;lt;ffffffffa0bb3557&amp;gt;] ? cml_create+0x97/0x250 [cmm]
[ 1512.105255]  [&amp;lt;ffffffffa0b25d0f&amp;gt;] ? mdt_version_get_save+0x8f/0xd0 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa0b398bf&amp;gt;] ? mdt_reint_open+0x108f/0x18a0 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa099860e&amp;gt;] ? md_ucred+0x1e/0x60 [mdd]
[ 1512.105255]  [&amp;lt;ffffffffa0b071c5&amp;gt;] ? mdt_ucred+0x15/0x20 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa0b23081&amp;gt;] ? mdt_reint_rec+0x41/0xe0 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa0b1c42a&amp;gt;] ? mdt_reint_internal+0x50a/0x810 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa0b1c9fd&amp;gt;] ? mdt_intent_reint+0x1ed/0x500 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa0b19041&amp;gt;] ? mdt_intent_policy+0x371/0x6a0 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa042fb9a&amp;gt;] ? ldlm_lock_enqueue+0x2ea/0x890 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffffa045744f&amp;gt;] ? ldlm_handle_enqueue0+0x48f/0xf70 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffffa0b18ad6&amp;gt;] ? mdt_enqueue+0x46/0x130 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa0b0ed42&amp;gt;] ? mdt_handle_common+0x922/0x1740 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa0b0fc35&amp;gt;] ? mdt_regular_handle+0x15/0x20 [mdt]
[ 1512.105255]  [&amp;lt;ffffffffa048586f&amp;gt;] ? ptlrpc_server_handle_request+0x44f/0xee0 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffffa0c9666e&amp;gt;] ? cfs_timer_arm+0xe/0x10 [libcfs]
[ 1512.105255]  [&amp;lt;ffffffffa047e711&amp;gt;] ? ptlrpc_wait_event+0xb1/0x2a0 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffff81051f73&amp;gt;] ? __wake_up+0x53/0x70
[ 1512.105255]  [&amp;lt;ffffffffa04883de&amp;gt;] ? ptlrpc_main+0xaee/0x1800 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffff8100c14a&amp;gt;] ? child_rip+0xa/0x20
[ 1512.105255]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffffa04878f0&amp;gt;] ? ptlrpc_main+0x0/0x1800 [ptlrpc]
[ 1512.105255]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1512.105255] Code: d0 49 39 f8 0f 8f 9f 00 00 00 48 81 fa a8 02 00 00 72 05 40 38 fe 74 41 48 83 ea 20 48 83 ea 20 4c 8b 1e 4c 8b 56 08 4c 8b 4e 10 &amp;lt;4c&amp;gt; 8b 46 18 48 8d 76 20 4c 89 1f 4c 89 57 08 4c 89 4f 10 4c 89
[ 1560.488304] INFO: task jbd2/loop0-8:2178 blocked &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; more than 120 seconds.
[ 1560.489230] &lt;span class=&quot;code-quote&quot;&gt;&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot;&lt;/span&gt; disables &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; message.
[ 1560.490216] jbd2/loop0-8  D 0000000000000003  5264  2178      2 0x00000000
[ 1560.491119]  ffff88026227fd10 0000000000000046 00000000000167c0 00000000000167c0
[ 1560.492112]  ffff880028290960 00000000000167c0 00000000000167c0 0000000000000286
[ 1560.493137]  ffff88028fb0e8f8 ffff88026227ffd8 000000000000fba8 ffff88028fb0e8f8
[ 1560.494147] Call Trace:
[ 1560.494464]  [&amp;lt;ffffffff8109004e&amp;gt;] ? prepare_to_wait+0x4e/0x80
[ 1560.495198]  [&amp;lt;ffffffffa0076afd&amp;gt;] jbd2_journal_commit_transaction+0x19d/0x16e0 [jbd2]
[ 1560.496185]  [&amp;lt;ffffffff81009310&amp;gt;] ? __switch_to+0xd0/0x320
[ 1560.496910]  [&amp;lt;ffffffff814fc4ae&amp;gt;] ? _spin_unlock_irq+0xe/0x20
[ 1560.497643]  [&amp;lt;ffffffff8108fd60&amp;gt;] ? autoremove_wake_function+0x0/0x40
[ 1560.498462]  [&amp;lt;ffffffffa007d627&amp;gt;] kjournald2+0xb7/0x210 [jbd2]
[ 1560.499200]  [&amp;lt;ffffffff8108fd60&amp;gt;] ? autoremove_wake_function+0x0/0x40
[ 1560.500019]  [&amp;lt;ffffffffa007d570&amp;gt;] ? kjournald2+0x0/0x210 [jbd2]
[ 1560.500797]  [&amp;lt;ffffffff8108fa16&amp;gt;] kthread+0x96/0xa0
[ 1560.501445]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1560.502094]  [&amp;lt;ffffffff8108f980&amp;gt;] ? kthread+0x0/0xa0
[ 1560.502718]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
[ 1560.503387] INFO: task jbd2/loop1-8:2336 blocked &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; more than 120 seconds.
[ 1560.504285] &lt;span class=&quot;code-quote&quot;&gt;&quot;echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs&quot;&lt;/span&gt; disables &lt;span class=&quot;code-keyword&quot;&gt;this&lt;/span&gt; message.
[ 1560.505289] jbd2/loop1-8  D 0000000000000000  5280  2336      2 0x00000000
[ 1560.506192]  ffff8802467c9c10 0000000000000046 0000000000000000 ffff8802467c9bd4
[ 1560.507179]  ffff8802467c9b80 ffff88029fc24100 ffff8800283567c0 0000000000000000
[ 1560.508166]  ffff880260cea778 ffff8802467c9fd8 000000000000fba8 ffff880260cea778
[ 1560.509185] Call Trace:
[ 1560.509521]  [&amp;lt;ffffffff811b03a0&amp;gt;] ? sync_buffer+0x0/0x50
[ 1560.510202]  [&amp;lt;ffffffff814f9a33&amp;gt;] io_schedule+0x73/0xc0
[ 1560.510858]  [&amp;lt;ffffffff811b03e3&amp;gt;] sync_buffer+0x43/0x50
[ 1560.511520]  [&amp;lt;ffffffff814fa3ef&amp;gt;] __wait_on_bit+0x5f/0x90
[ 1560.512214]  [&amp;lt;ffffffff811b03a0&amp;gt;] ? sync_buffer+0x0/0x50
[ 1560.512924]  [&amp;lt;ffffffff814fa498&amp;gt;] out_of_line_wait_on_bit+0x78/0x90
[ 1560.513723]  [&amp;lt;ffffffff8108fda0&amp;gt;] ? wake_bit_function+0x0/0x50
[ 1560.514462]  [&amp;lt;ffffffff811b0396&amp;gt;] __wait_on_buffer+0x26/0x30
[ 1560.515185]  [&amp;lt;ffffffffa0077459&amp;gt;] jbd2_journal_commit_transaction+0xaf9/0x16e0 [jbd2]
[ 1560.516177]  [&amp;lt;ffffffff81009310&amp;gt;] ? __switch_to+0xd0/0x320
[ 1560.516901]  [&amp;lt;ffffffff8107c65b&amp;gt;] ? try_to_del_timer_sync+0x7b/0xe0
[ 1560.517732]  [&amp;lt;ffffffffa007d627&amp;gt;] kjournald2+0xb7/0x210 [jbd2]
[ 1560.518470]  [&amp;lt;ffffffff8108fd60&amp;gt;] ? autoremove_wake_function+0x0/0x40
[ 1560.519298]  [&amp;lt;ffffffffa007d570&amp;gt;] ? kjournald2+0x0/0x210 [jbd2]
[ 1560.520059]  [&amp;lt;ffffffff8108fa16&amp;gt;] kthread+0x96/0xa0
[ 1560.520705]  [&amp;lt;ffffffff8100c14a&amp;gt;] child_rip+0xa/0x20
[ 1560.521359]  [&amp;lt;ffffffff8108f980&amp;gt;] ? kthread+0x0/0xa0
[ 1560.522003]  [&amp;lt;ffffffff8100c140&amp;gt;] ? child_rip+0x0/0x20
...
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="44558" author="green" created="Mon, 10 Sep 2012 22:29:27 +0000"  >&lt;p&gt;OK, after some more testing: running with ONLY=116 alone passes.&lt;/p&gt;

&lt;p&gt;REFORMAT=yes ONLY=&quot;115 116&quot; sh sanity.sh seems to trigger this issue 100% of the time for me (I just had another lockup plus OST disk corruption).&lt;/p&gt;

&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[ 1294.835204] LDISKFS-fs error (device loop1): ldiskfs_init_block_bitmap: Checksum bad &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt; group 1
[ 1294.836610] Aborting journal on device loop1-8.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;My test system configuration: kvm with 8 cpu cores, 10G RAM.&lt;/p&gt;</comment>
                            <comment id="44562" author="green" created="Tue, 11 Sep 2012 00:59:48 +0000"  >&lt;p&gt;The real problem seems to be test 115: if I exclude it, all subsequent tests are fine, but if I leave test 115 in, the tests after it start to fail, and as I exclude those, the next ones start to fail in a very similar manner.&lt;/p&gt;

&lt;p&gt;I have the stack guard enabled, by the way, and it does not trigger. The only report I see is around test 3, about &quot;rm&quot; using the most stack so far with about 2.5k still remaining, though I agree some of the stack traces do look quite big.&lt;/p&gt;</comment>
                            <comment id="44568" author="yong.fan" created="Tue, 11 Sep 2012 03:23:31 +0000"  >&lt;p&gt;The reason is that shrinking the OI index node to recycle the idle leaf for the last entry leaves the index node empty, which causes subsequent IAM lookup/insert operations to access invalid space.&lt;/p&gt;

&lt;p&gt;The solution is to keep the last entry for the idle leaf in the OI index node, so that the leaf can be reused directly when the next new node is added.&lt;/p&gt;

&lt;p&gt;This is the patch for that:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://review.whamcloud.com/#change,3931&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3931&lt;/a&gt;&lt;/p&gt;</comment>
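For illustration only, here is a minimal toy model of the fix idea described in the comment above. It is not the real osd_ldiskfs IAM code; every name in it is hypothetical. It shows why removing the final entry from an index node is dangerous, and how refusing to empty the node keeps the idle leaf available for reuse:

```python
# Toy model of the idle-leaf fix idea (hypothetical names, not the
# real osd_ldiskfs IAM code): an OI index node must never be left
# completely empty when its idle leaf is recycled, or later
# lookup/insert operations walk into invalid space.

class ToyIndexNode:
    def __init__(self, keys):
        self.keys = list(keys)   # entries pointing at leaf blocks

    def remove_entry_buggy(self, idx):
        # Old behaviour: removing the last entry empties the node.
        self.keys.pop(idx)

    def remove_entry_fixed(self, idx):
        # Fixed behaviour: keep the final entry in place so the idle
        # leaf it references is reused when the next leaf is added.
        if len(self.keys) == 1:
            return False         # refuse to empty the node
        self.keys.pop(idx)
        return True

node = ToyIndexNode([42])
assert node.remove_entry_fixed(0) is False  # last entry preserved
assert node.keys == [42]
node.remove_entry_buggy(0)                  # old code empties the node
assert node.keys == []                      # later ops would now misbehave
```

The design point is simply that the invariant "an index node always holds at least one entry" is cheaper to maintain than teaching every lookup/insert path to tolerate an empty node.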
                            <comment id="44595" author="green" created="Tue, 11 Sep 2012 11:10:36 +0000"  >&lt;p&gt;Thanks.&lt;br/&gt;
sanity.sh no longer locks up after test 115, but I noticed that after the run is done, e2fsck still reports errors on the MDT:&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;[root@rhel6 ~]# e2fsck -f -n /tmp/lustre-mdt1 
e2fsck 1.41.90.wc1 (18-Mar-2011)
Pass 1: Checking inodes, blocks, and sizes
Inode 8297 is a zero-length directory.  Clear? no

Inode 8297, i_size is 4096, should be 0.  Fix? no

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Unconnected directory inode 8297 (...)
Connect to /lost+found? no

Pass 4: Checking reference counts
Unattached inode 8297
Connect to /lost+found? no

Pass 5: Checking group summary information

lustre-MDT0000: ********** WARNING: Filesystem still has errors **********
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="44674" author="yong.fan" created="Wed, 12 Sep 2012 07:32:01 +0000"  >&lt;p&gt;I cannot reproduce the e2fsck failure myself, but according to the error message it looks like some object was removed from its parent directory while the object itself was not destroyed.&lt;/p&gt;

&lt;p&gt;One possible cause is a partial unlink: we do not declare enough credits for the unlink transaction, which may need additional credits for recycling an idle OI leaf.&lt;/p&gt;

&lt;p&gt;I have updated the patch &lt;a href=&quot;http://review.whamcloud.com/#change,3931&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3931&lt;/a&gt; to fix that.&lt;/p&gt;</comment>
                            <comment id="44722" author="green" created="Wed, 12 Sep 2012 19:51:36 +0000"  >&lt;p&gt;That apparently did not help.&lt;br/&gt;
The way I reproduce it is to run SLOW=yes REFORMAT=yes sh sanity.sh (it takes some time); this is the tip of b2_3 with the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1644&quot; title=&quot;lustre b2_2&amp;lt;-&amp;gt;master failure on lustre-initialization-1: ASSERTION( entry-&amp;gt;mne_length &amp;lt;= ((1UL) &amp;lt;&amp;lt; 12) )&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1644&quot;&gt;&lt;del&gt;LU-1644&lt;/del&gt;&lt;/a&gt;, &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1823&quot; title=&quot;sanity/103: slab corruption&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1823&quot;&gt;&lt;del&gt;LU-1823&lt;/del&gt;&lt;/a&gt; and &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1881&quot; title=&quot;sanity test 116 soft lockup&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1881&quot;&gt;&lt;del&gt;LU-1881&lt;/del&gt;&lt;/a&gt; patches cherry-picked, since master is currently broken for me due to &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1919&quot; title=&quot;Soft lockup on MGS stop&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1919&quot;&gt;&lt;del&gt;LU-1919&lt;/del&gt;&lt;/a&gt;.&lt;br/&gt;
After the run is finished, e2fsck -f -n on /tmp/lustre-mdt1 shows this error.&lt;/p&gt;</comment>
                            <comment id="44737" author="green" created="Wed, 12 Sep 2012 23:36:47 +0000"  >&lt;p&gt;Reduced the failure to test 51b: REFORMAT=yes ONLY=51b sh sanity.sh&lt;/p&gt;
&lt;div class=&quot;code panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;codeContent panelContent&quot;&gt;
&lt;pre class=&quot;code-java&quot;&gt;== sanity test 51b: mkdir .../t-0 --- .../t-70000 ====================== 23:18:05 (1347506285)
 - created 10000 (time 1347506301.81 total 16.47 last 16.47)
 - created 20000 (time 1347506318.40 total 33.05 last 16.58)
 - created 30000 (time 1347506340.06 total 54.71 last 21.66)
mkdir(/mnt/lustre/d51b/t-32335) error: No space left on device
total: 32335 creates in 59.82 seconds: 540.54 creates/second
 sanity test_51b: @@@@@@ FAIL: test_51b failed with 28 
  Trace dump:
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:3640:error_noexit()
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:3662:error()
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:3898:run_one()
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:3928:run_one_logged()
  = /home/green/git/lustre-release/lustre/tests/test-framework.sh:3750:run_test()
  = sanity.sh:3150:main()
Dumping lctl log to /tmp/test_logs/1347506256/sanity.test_51b.*.1347506345.log
Dumping logs only on local client.
FAIL 51b (60s)
...........................................................................................................................resend_count is set to 4 4 4 4
...........resend_count is set to 10 10 10 10
.................................................................................................== sanity sanity.sh test complete, duration 90 sec == 23:19:06 (1347506346)
sanity.sh: FAIL: test_51b test_51b failed with 28

Stopping clients: rhel6.localnet /mnt/lustre (opts:-f)
Stopping client rhel6.localnet /mnt/lustre opts:-f
Stopping clients: rhel6.localnet /mnt/lustre2 (opts:-f)
Stopping /mnt/mds1 (opts:-f) on rhel6.localnet
Stopping /mnt/ost1 (opts:-f) on rhel6.localnet
Stopping /mnt/ost2 (opts:-f) on rhel6.localnet
waited 0 &lt;span class=&quot;code-keyword&quot;&gt;for&lt;/span&gt;  5 ST ost OSS OSS_uuid 0
osd_ldiskfs           337297  0 
fsfilt_ldiskfs         86602  0 
ldiskfs               422919  2 osd_ldiskfs,fsfilt_ldiskfs
mdd                   419671  3 cmm,mdt,osd_ldiskfs
lquota                253915  5 obdfilter,osd_ldiskfs,mdd
fid                    70216  5 mdt,osd_ldiskfs,mdd,obdecho,mdc
obdclass             1160202  47 lustre,obdfilter,ost,cmm,mdt,osd_ldiskfs,fsfilt_ldiskfs,mdd,mds,mgs,lquota,obdecho,mgc,lov,osc,mdc,lmv,fid,fld,ptlrpc
lvfs                   38111  22 lustre,obdfilter,ost,cmm,mdt,osd_ldiskfs,fsfilt_ldiskfs,mdd,mds,mgs,lquota,obdecho,mgc,lov,osc,mdc,lmv,fid,fld,ptlrpc,obdclass
libcfs                490662  24 lustre,obdfilter,ost,cmm,mdt,osd_ldiskfs,fsfilt_ldiskfs,mdd,mds,mgs,lquota,obdecho,mgc,lov,osc,mdc,lmv,fid,fld,ptlrpc,obdclass
modules unloaded.
[root@rhel6 tests]# 
[root@rhel6 tests]# e2fsck -n -f /tmp/lustre-mdt1 
e2fsck 1.41.90.wc1 (18-Mar-2011)
Pass 1: Checking inodes, blocks, and sizes
Inode 33212 is a zero-length directory.  Clear? no

Inode 33212, i_size is 4096, should be 0.  Fix? no

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Unconnected directory inode 33212 (...)
Connect to /lost+found? no

Pass 4: Checking reference counts
Unattached inode 33212
Connect to /lost+found? no

Pass 5: Checking group summary information

lustre-MDT0000: ********** WARNING: Filesystem still has errors **********

lustre-MDT0000: 107/100000 files (3.7% non-contiguous), 17256/50000 blocks
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="44797" author="yong.fan" created="Thu, 13 Sep 2012 11:18:06 +0000"  >&lt;p&gt;Your failure is caused by a partial create due to insufficient space. It occurred in mdd_object_initialize() during mkdir: the inode for the directory was allocated and &quot;.&quot; was inserted, but inserting &quot;..&quot; failed for lack of space. In that case our rollback mechanism did not clean up the environment completely, leaving the inode behind with non-zero i_nlink and non-zero size but not linked into the parent directory.&lt;/p&gt;


&lt;p&gt;This is the patch to fix it:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/#change,3981&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,3981&lt;/a&gt;&lt;/p&gt;</comment>
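Again purely for illustration (hypothetical names, not the real mdd code), the partial-create failure mode and its fix can be sketched as a rollback pattern: every completed step must be undone when a later step fails, otherwise an orphan inode survives with non-zero nlink and size:

```python
# Toy sketch of the partial-create bug: mkdir allocates an inode and
# inserts ".", then fails to insert ".." for lack of space.  Without
# rollback of the earlier steps, the inode is left with non-zero
# nlink/size but no parent entry.  Hypothetical model only.

ENOSPC = 28  # matches the "failed with 28" in the test_51b log above

class ToyInode:
    def __init__(self):
        self.nlink = 0
        self.size = 0

def toy_mkdir(inode, out_of_space):
    inode.nlink = 1              # step 1: insert "."
    inode.size = 4096
    if out_of_space:             # step 2: inserting ".." fails
        inode.nlink = 0          # the cleanup the original code missed
        inode.size = 0
        return ENOSPC
    return 0

ino = ToyInode()
assert toy_mkdir(ino, out_of_space=True) == ENOSPC
assert ino.nlink == 0 and ino.size == 0   # fully rolled back
assert toy_mkdir(ino, out_of_space=False) == 0
assert ino.nlink == 1 and ino.size == 4096
```

Without the cleanup branch, the toy inode would survive the failure with nlink=1 and size=4096, which is exactly the "zero-length directory / unattached inode" shape e2fsck reported.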
                            <comment id="45221" author="pjones" created="Wed, 19 Sep 2012 09:56:37 +0000"  >&lt;p&gt;Landed for 2.3 and 2.4&lt;/p&gt;</comment>
                            <comment id="48690" author="bogl" created="Mon, 3 Dec 2012 15:00:50 +0000"  >&lt;p&gt;Back-port to b2_1:&lt;br/&gt;
&lt;a href=&quot;http://review.whamcloud.com/#change,4734&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/#change,4734&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                        <issuelink>
            <issuekey id="15901">LU-1906</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="15907">LU-1909</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="15942">LU-1925</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="15943">LU-1927</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="15945">LU-1928</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="15946">LU-1929</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="16015">LU-1968</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzv4h3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4258</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>