<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:15:12 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-1276] MDS threads all stuck in jbd2_journal_start</title>
                <link>https://jira.whamcloud.com/browse/LU-1276</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;The MDS on a classified production Lustre 2.1 cluster got stuck today.  The symptoms were high load (800+), but very little CPU usage.&lt;/p&gt;

&lt;p&gt;Almost all of the lustre threads were stuck in jbd2_journal_start, while the jbd2/sda thread is stuck in&lt;br/&gt;
jbd2_journal_commit_transaction.  There is zero I/O going to disk.&lt;/p&gt;

&lt;p&gt;There&apos;s one thread that stands out as a suspect, as it&apos;s not in jbd2_journal_start but seems to be handling an unlink.  Perhaps it got stuck waiting on a semaphore while holding an open transaction with jbd2.  Its stack trace looks like this:&lt;/p&gt;

&lt;p&gt;COMMAND: &quot;mdt_152&quot;&lt;br/&gt;
schedule&lt;br/&gt;
rwsem_down_failed_common&lt;br/&gt;
rwsem_down_read_failed&lt;br/&gt;
call_rwsem_down_read_failed&lt;br/&gt;
llog_cat_current_log.clone.0&lt;br/&gt;
llog_cat_add_rec&lt;br/&gt;
llog_obd_origin_add&lt;br/&gt;
llog_add&lt;br/&gt;
lov_llog_origin_add&lt;br/&gt;
llog_add&lt;br/&gt;
mds_llog_origin_add&lt;br/&gt;
llog_add&lt;br/&gt;
mds_llog_add_unlink&lt;br/&gt;
mds_log_op_unlink&lt;br/&gt;
mdd_unlink_log&lt;br/&gt;
mdd_object_kill&lt;br/&gt;
mdd_finish_unlink&lt;br/&gt;
mdd_unlink&lt;br/&gt;
cml_unlink&lt;br/&gt;
mdt_reint_unlink&lt;br/&gt;
mdt_reint_rec&lt;br/&gt;
mdt_reint_internal&lt;br/&gt;
mdt_reint&lt;br/&gt;
mdt_handle_common&lt;br/&gt;
mdt_regular_handle&lt;br/&gt;
ptlrpc_main&lt;br/&gt;
kernel_thread&lt;/p&gt;</description>
                <environment>&lt;a href=&quot;https://github.com/chaos/lustre/commits/2.1.1-llnl&quot;&gt;https://github.com/chaos/lustre/commits/2.1.1-llnl&lt;/a&gt;</environment>
        <key id="13833">LU-1276</key>
            <summary>MDS threads all stuck in jbd2_journal_start</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.whamcloud.com/images/icons/priorities/critical.svg">Critical</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="5">Cannot Reproduce</resolution>
                                        <assignee username="green">Oleg Drokin</assignee>
                                    <reporter username="nedbass">Ned Bass</reporter>
                        <labels>
                    </labels>
                <created>Fri, 30 Mar 2012 20:56:36 +0000</created>
                <updated>Thu, 5 May 2016 02:56:10 +0000</updated>
                            <resolved>Tue, 30 Apr 2013 22:52:11 +0000</resolved>
                                    <version>Lustre 2.1.0</version>
                                                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                            <comments>
                            <comment id="33022" author="pjones" created="Fri, 30 Mar 2012 22:18:11 +0000"  >&lt;p&gt;Oleg&lt;/p&gt;

&lt;p&gt;Could you please look into this one?&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

&lt;p&gt;Peter&lt;/p&gt;</comment>
                            <comment id="34451" author="morrone" created="Tue, 10 Apr 2012 14:29:17 +0000"  >&lt;p&gt;Is this &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-81&quot; title=&quot;Some JBD2 journaling deadlock at BULL&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-81&quot;&gt;&lt;del&gt;LU-81&lt;/del&gt;&lt;/a&gt;, by any chance, or another bug?&lt;/p&gt;</comment>
                            <comment id="34468" author="green" created="Tue, 10 Apr 2012 15:18:28 +0000"  >&lt;p&gt;Yes, it does look like LU-81 from this information.&lt;/p&gt;

&lt;p&gt;Did you get another thread in a state like this also reported as hung?&lt;br/&gt;
PID: 26299 TASK: ffff88047d851620 CPU: 28 COMMAND: &quot;llog_process_th&quot;&lt;br/&gt;
#0 [ffff880998a65900] schedule at ffffffff81452851&lt;br/&gt;
#1 [ffff880998a659c8] start_this_handle at ffffffffa08ec0d7&lt;br/&gt;
#2 [ffff880998a65a88] jbd2_journal_start at ffffffffa08ec520&lt;br/&gt;
#3 [ffff880998a65ad8] ldiskfs_journal_start_sb at ffffffffa0936fb8&lt;br/&gt;
#4 [ffff880998a65ae8] fsfilt_ldiskfs_write_record at ffffffffa098a0fc&lt;br/&gt;
#5 [ffff880998a65b68] llog_lvfs_write_blob at ffffffffa050917c&lt;br/&gt;
#6 [ffff880998a65c18] llog_lvfs_write_rec at ffffffffa050a722&lt;br/&gt;
#7 [ffff880998a65cf8] llog_cancel_rec at ffffffffa05010a4&lt;br/&gt;
#8 [ffff880998a65d58] llog_cat_cancel_records at ffffffffa0505de2&lt;br/&gt;
#9 [ffff880998a65de8] llog_changelog_cancel_cb at ffffffffa099ec12&lt;br/&gt;
#10 [ffff880998a65e68] llog_process_thread at ffffffffa0503573&lt;br/&gt;
#11 [ffff880998a65f48] kernel_thread at ffffffff8100d1aa&lt;/p&gt;</comment>
                            <comment id="34470" author="morrone" created="Tue, 10 Apr 2012 16:32:03 +0000"  >&lt;p&gt;I will check.&lt;/p&gt;</comment>
                            <comment id="34471" author="nedbass" created="Tue, 10 Apr 2012 16:36:41 +0000"  >&lt;p&gt;I checked and couldn&apos;t find any process in this state.&lt;/p&gt;</comment>
                            <comment id="34472" author="morrone" created="Tue, 10 Apr 2012 16:42:12 +0000"  >&lt;p&gt;It looks like the only thread that is in llog_cancel_rec is ldlm_cn_41 in this backtrace:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;schedule
start_this_handle
jbd2_journal_restart
ldiskfs_truncate_restart_trans
ldiskfs_ext_truncate
ldiskfs_truncate
ldiskfs_delete_inode
generic_delete_inode
generic_drop_inode
iput
mds_obd_destroy
llog_lvfs_destroy
llog_cancel_rec
llog_cat_cancel_records
llog_origin_handle_cancel
ldlm_cancel_handler
ptlrpc_main
kernel_thread
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="34476" author="green" created="Tue, 10 Apr 2012 17:18:16 +0000"  >&lt;p&gt;Yes, that does match the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-81&quot; title=&quot;Some JBD2 journaling deadlock at BULL&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-81&quot;&gt;&lt;del&gt;LU-81&lt;/del&gt;&lt;/a&gt; deadlock description.&lt;/p&gt;</comment>
                            <comment id="34479" author="morrone" created="Tue, 10 Apr 2012 18:25:39 +0000"  >&lt;p&gt;Ok, I have pulled the fix from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-81&quot; title=&quot;Some JBD2 journaling deadlock at BULL&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-81&quot;&gt;&lt;del&gt;LU-81&lt;/del&gt;&lt;/a&gt; into the 2.1.1-llnl branch.&lt;/p&gt;</comment>
                            <comment id="34776" author="pjones" created="Mon, 16 Apr 2012 08:21:53 +0000"  >&lt;p&gt;Believed to be a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-81&quot; title=&quot;Some JBD2 journaling deadlock at BULL&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-81&quot;&gt;&lt;del&gt;LU-81&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="43223" author="nedbass" created="Tue, 14 Aug 2012 17:37:13 +0000"  >&lt;p&gt;We had a repeat of this bug, even though we&apos;re running the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-81&quot; title=&quot;Some JBD2 journaling deadlock at BULL&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-81&quot;&gt;&lt;del&gt;LU-81&lt;/del&gt;&lt;/a&gt; patch.  We have a crash dump I can upload if anyone wants to take a look.&lt;/p&gt;

&lt;p&gt;There is one thread stuck in llog_cancel_rec()&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 5892   TASK: ffff880776d34040  CPU: 7   COMMAND: &quot;ldlm_cn_57&quot;
 #0 [ffff8807b50a94e0] schedule at ffffffff814ef052
 #1 [ffff8807b50a95a8] start_this_handle at ffffffffa0a9e072 [jbd2]
 #2 [ffff8807b50a9668] jbd2_journal_restart at ffffffffa0a9e3c1 [jbd2]
 #3 [ffff8807b50a96b8] ldiskfs_truncate_restart_trans at ffffffffa0ac28ba [ldiskfs]
 #4 [ffff8807b50a96e8] ldiskfs_clear_blocks at ffffffffa0ac7bfd [ldiskfs]
 #5 [ffff8807b50a9748] ldiskfs_free_data at ffffffffa0ac7de4 [ldiskfs]
 #6 [ffff8807b50a97a8] ldiskfs_free_branches at ffffffffa0ac8023 [ldiskfs]
 #7 [ffff8807b50a9808] ldiskfs_free_branches at ffffffffa0ac7f16 [ldiskfs]
 #8 [ffff8807b50a9868] ldiskfs_truncate at ffffffffa0ac8629 [ldiskfs]
 #9 [ffff8807b50a9988] ldiskfs_delete_inode at ffffffffa0ac99a0 [ldiskfs]
#10 [ffff8807b50a99a8] generic_delete_inode at ffffffff81192b6e
#11 [ffff8807b50a99d8] generic_drop_inode at ffffffff81192cc5
#12 [ffff8807b50a99f8] iput at ffffffff81191b12
#13 [ffff8807b50a9a18] mds_obd_destroy at ffffffffa0b90161 [mds]
#14 [ffff8807b50a9b58] llog_lvfs_destroy at ffffffffa05afd62 [obdclass]
#15 [ffff8807b50a9c28] llog_cancel_rec at ffffffffa05a7acf [obdclass]
#16 [ffff8807b50a9c58] llog_cat_cancel_records at ffffffffa05abfc1 [obdclass]
#17 [ffff8807b50a9cb8] llog_origin_handle_cancel at ffffffffa072cef6 [ptlrpc]
#18 [ffff8807b50a9db8] ldlm_cancel_handler at ffffffffa06fd2cf [ptlrpc]
#19 [ffff8807b50a9df8] ptlrpc_main at ffffffffa0724ad1 [ptlrpc]
#20 [ffff8807b50a9f48] kernel_thread at ffffffff8100c14a
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;One thread in llog_cat_add_rec():&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 5119   TASK: ffff880831f10080  CPU: 9   COMMAND: &quot;mdt_78&quot;
 #0 [ffff8806e6e7b568] schedule at ffffffff814ef052
 #1 [ffff8806e6e7b630] rwsem_down_failed_common at ffffffff814f1605
 #2 [ffff8806e6e7b690] rwsem_down_read_failed at ffffffff814f1796
 #3 [ffff8806e6e7b6d0] call_rwsem_down_read_failed at ffffffff81278d04
 #4 [ffff8806e6e7b738] llog_cat_current_log.clone.0 at ffffffffa05aa715 [obdclass]
 #5 [ffff8806e6e7b7d8] llog_cat_add_rec at ffffffffa05ac18a [obdclass]
 #6 [ffff8806e6e7b828] llog_obd_origin_add at ffffffffa05b09a6 [obdclass]
 #7 [ffff8806e6e7b858] llog_add at ffffffffa05b0b01 [obdclass]
 #8 [ffff8806e6e7b898] lov_llog_origin_add at ffffffffa091d0c4 [lov]
 #9 [ffff8806e6e7b918] llog_add at ffffffffa05b0b01 [obdclass]
#10 [ffff8806e6e7b958] mds_llog_origin_add at ffffffffa0b915d3 [mds]
#11 [ffff8806e6e7b9a8] llog_add at ffffffffa05b0b01 [obdclass]
#12 [ffff8806e6e7b9e8] mds_llog_add_unlink at ffffffffa0b91a95 [mds]
#13 [ffff8806e6e7ba38] mds_log_op_unlink at ffffffffa0b91f84 [mds]
#14 [ffff8806e6e7ba98] mdd_unlink_log at ffffffffa0bc0e4e [mdd]
#15 [ffff8806e6e7bac8] mdd_object_kill at ffffffffa0bb8242 [mdd]
#16 [ffff8806e6e7bb08] mdd_finish_unlink at ffffffffa0bcb5f6 [mdd]
#17 [ffff8806e6e7bb48] mdd_unlink at ffffffffa0bd010a [mdd]
#18 [ffff8806e6e7bc08] cml_unlink at ffffffffa0ce4e78 [cmm]
#19 [ffff8806e6e7bc58] mdt_reint_unlink at ffffffffa0c3aded [mdt]
#20 [ffff8806e6e7bce8] mdt_reint_rec at ffffffffa0c392a0 [mdt]
#21 [ffff8806e6e7bd18] mdt_reint_internal at ffffffffa0c34098 [mdt]
#22 [ffff8806e6e7bd68] mdt_reint at ffffffffa0c34354 [mdt]
#23 [ffff8806e6e7bd98] mdt_handle_common at ffffffffa0c287ad [mdt]
#24 [ffff8806e6e7bde8] mdt_regular_handle at ffffffffa0c29405 [mdt]
#25 [ffff8806e6e7bdf8] ptlrpc_main at ffffffffa0724ad1 [ptlrpc]
#26 [ffff8806e6e7bf48] kernel_thread at ffffffff8100c14a
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;


&lt;p&gt;And 115 threads stuck in jbd2_journal_start(), i.e.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 4075   TASK: ffff880231f93540  CPU: 15  COMMAND: &quot;mdt_01&quot;
 #0 [ffff8801018cf6a0] schedule at ffffffff814ef052
 #1 [ffff8801018cf768] start_this_handle at ffffffffa0a9e072 [jbd2]
 #2 [ffff8801018cf828] jbd2_journal_start at ffffffffa0a9e4f0 [jbd2]
 #3 [ffff8801018cf878] ldiskfs_journal_start_sb at ffffffffa0aec098 [ldiskfs]
 #4 [ffff8801018cf888] osd_trans_start at ffffffffa0cb5e0c [osd_ldiskfs]
 #5 [ffff8801018cf8c8] mdd_trans_start at ffffffffa0bd9753 [mdd]
 #6 [ffff8801018cf8e8] mdd_create at ffffffffa0bd086b [mdd]
 #7 [ffff8801018cfa28] cml_create at ffffffffa0ce54d8 [cmm]
 #8 [ffff8801018cfa78] mdt_reint_open at ffffffffa0c4c924 [mdt]
 #9 [ffff8801018cfb68] mdt_reint_rec at ffffffffa0c392a0 [mdt]
#10 [ffff8801018cfb98] mdt_reint_internal at ffffffffa0c34098 [mdt]
#11 [ffff8801018cfbe8] mdt_intent_reint at ffffffffa0c34555 [mdt]
#12 [ffff8801018cfc38] mdt_intent_policy at ffffffffa0c2fe29 [mdt]
#13 [ffff8801018cfc88] ldlm_lock_enqueue at ffffffffa06dfb42 [ptlrpc]
#14 [ffff8801018cfcf8] ldlm_handle_enqueue0 at ffffffffa06fe906 [ptlrpc]
#15 [ffff8801018cfd68] mdt_enqueue at ffffffffa0c2fa9a [mdt]
#16 [ffff8801018cfd98] mdt_handle_common at ffffffffa0c287ad [mdt]
#17 [ffff8801018cfde8] mdt_regular_handle at ffffffffa0c29405 [mdt]
#18 [ffff8801018cfdf8] ptlrpc_main at ffffffffa0724ad1 [ptlrpc]
#19 [ffff8801018cff48] kernel_thread at ffffffff8100c14a
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="43279" author="pjones" created="Wed, 15 Aug 2012 15:07:07 +0000"  >&lt;p&gt;Reopening ticket - not a duplicate of LU-81&lt;/p&gt;</comment>
                            <comment id="43514" author="green" created="Mon, 20 Aug 2012 17:54:30 +0000"  >&lt;p&gt;I guess the other candidate for this issue is &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1648&quot; title=&quot;MDS Crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1648&quot;&gt;&lt;del&gt;LU-1648&lt;/del&gt;&lt;/a&gt;, can you add a patch from it as well please?&lt;/p&gt;</comment>
                            <comment id="48462" author="morrone" created="Tue, 27 Nov 2012 21:57:27 +0000"  >&lt;p&gt;It sounds like the patch from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1648&quot; title=&quot;MDS Crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1648&quot;&gt;&lt;del&gt;LU-1648&lt;/del&gt;&lt;/a&gt; is needed on b2_1.&lt;/p&gt;</comment>
                            <comment id="57379" author="morrone" created="Tue, 30 Apr 2013 22:00:21 +0000"  >&lt;p&gt;It looks like the &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1648&quot; title=&quot;MDS Crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1648&quot;&gt;&lt;del&gt;LU-1648&lt;/del&gt;&lt;/a&gt; fix landed before 2.1.4 in &lt;a href=&quot;http://review.whamcloud.com/4743&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;change 4743&lt;/a&gt;.  I think we can close this until we see it again.&lt;/p&gt;</comment>
                            <comment id="57392" author="pjones" created="Tue, 30 Apr 2013 22:52:11 +0000"  >&lt;p&gt;ok thanks Chris&lt;/p&gt;</comment>
                            <comment id="78120" author="patrick.valentin" created="Fri, 28 Feb 2014 18:41:29 +0000"  >&lt;p&gt;One of the Bull customers (TGCC) had the same deadlock twice during the past six months: one thread is stuck in jbd2_journal_commit_transaction() and many other threads are stuck in jbd2_journal_start().&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 29225 TASK: ffff88107c3bb040 CPU: 15 COMMAND: &quot;jbd2/dm-2-8&quot;
 #0 [ffff88107a343c60] schedule at ffffffff81485765
 #1 [ffff88107a343d28] jbd2_journal_commit_transaction at ffffffffa006a94f [jbd2]
 #2 [ffff88107a343e68] kjournald2 at ffffffffa0070c08 [jbd2]
 #3 [ffff88107a343ee8] kthread at ffffffff8107b5f6
 #4 [ffff88107a343f48] kernel_thread at ffffffff8100412a
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and most of the threads:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;PID: 15585 TASK: ffff88063062a790 CPU: 0 COMMAND: &quot;mdt_503&quot;
PID: 15586 TASK: ffff88063062a040 CPU: 23 COMMAND: &quot;mdt_504&quot;
PID: 15587 TASK: ffff88020f3ad7d0 CPU: 30 COMMAND: &quot;mdt_505&quot;
PID: 29286 TASK: ffff88087505e790 CPU: 25 COMMAND: &quot;mdt_01&quot;
...
#0 [ffff881949c078f0] schedule at ffffffff81485765
 #1 [ffff881949c079b8] start_this_handle at ffffffffa006908a [jbd2]
 #2 [ffff881949c07a78] jbd2_journal_start at ffffffffa0069500 [jbd2]
 #3 [ffff881949c07ac8] ldiskfs_journal_start_sb at ffffffffa0451ca8 [ldiskfs]
 #4 [ffff881949c07ad8] osd_trans_start at ffffffffa0d4a324 [osd_ldiskfs]
 #5 [ffff881949c07b18] mdd_trans_start at ffffffffa0c4c4e3 [mdd]
 #6 [ffff881949c07b38] mdd_unlink at ffffffffa0c401eb [mdd]
 #7 [ffff881949c07bf8] cml_unlink at ffffffffa0d82e07 [cmm]
 #8 [ffff881949c07c38] mdt_reint_unlink at ffffffffa0cba0f4 [mdt]
 #9 [ffff881949c07cb8] mdt_reint_rec at ffffffffa0cb7cb1 [mdt]
#10 [ffff881949c07cd8] mdt_reint_internal at ffffffffa0caeed4 [mdt]
#11 [ffff881949c07d28] mdt_reint at ffffffffa0caf2b4 [mdt]
#12 [ffff881949c07d48] mdt_handle_common at ffffffffa0ca3762 [mdt]
#13 [ffff881949c07d98] mdt_regular_handle at ffffffffa0ca4655 [mdt]
#14 [ffff881949c07da8] ptlrpc_main at ffffffffa071f4f6 [ptlrpc]
#15 [ffff881949c07f48] kernel_thread at ffffffff8100412a
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;They are running Lustre 2.1.6, which contains &lt;a href=&quot;http://review.whamcloud.com/4743&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/4743&lt;/a&gt; from &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-1648&quot; title=&quot;MDS Crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-1648&quot;&gt;&lt;del&gt;LU-1648&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I attach two files containing the dmesg and the crash backtrace of all threads.&lt;br/&gt;
Could you reopen this ticket, as it was closed with &quot;Cannot Reproduce&quot;?&lt;/p&gt;
</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                                                <inwardlinks description="is duplicated by">
                                                        </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="14190" name="bt-all.merged.txt" size="235103" author="patrick.valentin" created="Fri, 28 Feb 2014 18:53:58 +0000"/>
                            <attachment id="14189" name="dmesg.txt" size="128208" author="patrick.valentin" created="Fri, 28 Feb 2014 18:53:58 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                            <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvpt3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>8047</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                        </customfields>
    </item>
</channel>
</rss>