<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:06:29 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7157] sanity test_27z: cfs_hash_bd_del_locked()) ASSERTION( bd-&gt;bd_bucket-&gt;hsb_count &gt; 0 ) failed</title>
                <link>https://jira.whamcloud.com/browse/LU-7157</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;This issue was created by maloo for Bob Glossman &amp;lt;bob.glossman@intel.com&amp;gt;&lt;/p&gt;

&lt;p&gt;This issue relates to the following test suite run: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/ccc00f28-5b17-11e5-af09-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/ccc00f28-5b17-11e5-af09-5254006e85c2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The sub-test test_27z failed with the following error:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;test failed to respond and timed out
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I think an OST crashed &amp;amp; rebooted during test 27z, but I&apos;m not sure. No console logs were captured; console logs might have given better clues.&lt;/p&gt;

&lt;p&gt;Info required for matching: sanity 27z&lt;/p&gt;</description>
                <environment></environment>
        <key id="32109">LU-7157</key>
            <summary>sanity test_27z: cfs_hash_bd_del_locked()) ASSERTION( bd-&gt;bd_bucket-&gt;hsb_count &gt; 0 ) failed</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                <status id="1" iconUrl="https://jira.whamcloud.com/images/icons/statuses/open.png" description="The issue is open and ready for the assignee to start work on it.">Open</status>
                <statusCategory id="2" key="new" colorName="default"/>
                <resolution id="-1">Unresolved</resolution>
                <assignee username="wc-triage">WC Triage</assignee>
                <reporter username="maloo">Maloo</reporter>
                <labels>
                </labels>
                <created>Mon, 14 Sep 2015 19:48:27 +0000</created>
                <updated>Thu, 23 Nov 2017 17:56:18 +0000</updated>
                <version>Lustre 2.8.0</version>
                <due></due>
                <votes>0</votes>
                <watches>8</watches>
                    <comments>
                            <comment id="127282" author="bogl" created="Mon, 14 Sep 2015 20:20:50 +0000"  >&lt;p&gt;I think the missing console logs have been misplaced onto lustre-init, as has been seen before on el7 test runs.  This isn&apos;t el7, it&apos;s sles11sp4.  However the same thing may be happening here.&lt;/p&gt;

&lt;p&gt;If I look at the OST console log recorded in lustre-init I do in fact see a panic:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;16:57:44:Welcome to SUSE Linux Enterprise Server 11 SP4  (x86_64) - Kernel 3.0.1
01-63_lustre.g031dbf9-default (console).16:57:44:
16:57:44:
16:57:44:shadow-7vm11 login: [ 1817.654651] LustreError: 17681:0:(hash.c:554:cfs_hash_bd_del_locked()) ASSERTION( bd-&amp;gt;bd_bucket-&amp;gt;hsb_count &amp;gt; 0 ) failed:
16:57:44:[ 1817.658147] LustreError: 17681:0:(hash.c:554:cfs_hash_bd_del_locked()) LBUG
16:57:44:[ 1817.661508] Kernel panic - not syncing: LBUG
16:57:44:[ 1817.662478] Pid: 17681, comm: umount Tainted: G           EN  3.0.101-63_lustre.g031dbf9-default #1
16:57:44:[ 1817.664474] Call Trace:
16:57:44:[ 1817.665214]  [&amp;lt;ffffffff81004b95&amp;gt;] dump_trace+0x75/0x300
16:57:44:[ 1817.666278]  [&amp;lt;ffffffff81466093&amp;gt;] dump_stack+0x69/0x6f
16:57:44:[ 1817.667349]  [&amp;lt;ffffffff8146612c&amp;gt;] panic+0x93/0x201
16:57:44:[ 1817.669684]  [&amp;lt;ffffffffa0726db3&amp;gt;] lbug_with_loc+0xa3/0xb0 [libcfs]
16:57:44:[ 1817.672157]  [&amp;lt;ffffffffa0737ccd&amp;gt;] cfs_hash_bd_del_locked+0xdd/0x120 [libcfs]
16:57:44:[ 1817.675226]  [&amp;lt;ffffffffa0a6516e&amp;gt;] __ldlm_resource_putref_final+0x3e/0xc0 [ptlrpc]
16:57:44:[ 1817.679206]  [&amp;lt;ffffffffa0a652d2&amp;gt;] ldlm_resource_putref_locked+0xe2/0x3f0 [ptlrpc]
16:57:44:[ 1817.681626]  [&amp;lt;ffffffffa073852a&amp;gt;] cfs_hash_for_each_relax+0x1da/0x330 [libcfs]
16:57:44:[ 1817.683368]  [&amp;lt;ffffffffa073a6ba&amp;gt;] cfs_hash_for_each_nolock+0x7a/0x1e0 [libcfs]
16:57:44:[ 1817.685157]  [&amp;lt;ffffffffa0a63be9&amp;gt;] ldlm_namespace_cleanup+0x29/0xb0 [ptlrpc]
16:57:44:[ 1817.686515]  [&amp;lt;ffffffffa0a660f2&amp;gt;] __ldlm_namespace_free+0x52/0x580 [ptlrpc]
16:57:44:[ 1817.687817]  [&amp;lt;ffffffffa0a66682&amp;gt;] ldlm_namespace_free_prior+0x62/0x230 [ptlrpc]
16:57:44:[ 1817.689594]  [&amp;lt;ffffffffa0fff0a8&amp;gt;] ofd_fini+0x58/0x190 [ofd]
16:57:44:[ 1817.690725]  [&amp;lt;ffffffffa0fff211&amp;gt;] ofd_device_fini+0x31/0xf0 [ofd]
16:57:44:[ 1817.692021]  [&amp;lt;ffffffffa086872d&amp;gt;] class_cleanup+0x9bd/0xd40 [obdclass]
16:57:44:[ 1817.694026]  [&amp;lt;ffffffffa0869c91&amp;gt;] class_process_config+0x11e1/0x1910 [obdclass]
16:57:44:[ 1817.698374]  [&amp;lt;ffffffffa086a8bf&amp;gt;] class_manual_cleanup+0x4ff/0x8c0 [obdclass]
16:57:44:[ 1817.701394]  [&amp;lt;ffffffffa08a6477&amp;gt;] server_put_super+0x607/0xb00 [obdclass]
16:57:44:[ 1817.702686]  [&amp;lt;ffffffff811603fb&amp;gt;] generic_shutdown_super+0x6b/0x100
16:57:44:[ 1817.703907]  [&amp;lt;ffffffff81160519&amp;gt;] kill_anon_super+0x9/0x20
16:57:44:[ 1817.705052]  [&amp;lt;ffffffff81160b83&amp;gt;] deactivate_locked_super+0x33/0x90
16:57:44:[ 1817.706654]  [&amp;lt;ffffffff8117cc0c&amp;gt;] sys_umount+0x6c/0xd0
16:57:44:[ 1817.707718]  [&amp;lt;ffffffff814710f2&amp;gt;] system_call_fastpath+0x16/0x1b
16:57:44:[ 1817.708906]  [&amp;lt;00007fd1fa0b16f7&amp;gt;] 0x7fd1fa0b16f6
16:57:44:[    0.000000] Initializing cgroup subsys cpuset
16:57:44:[    0.000000] Initializing cgroup subsys cpu
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Hoping this will give somebody a clue.&lt;/p&gt;</comment>
                            <comment id="129664" author="bzzz" created="Wed, 7 Oct 2015 06:32:53 +0000"  >&lt;p&gt;&lt;a href=&quot;https://testing.hpdd.intel.com/test_logs/959bcc54-6bba-11e5-8e3b-5254006e85c2/show_text&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_logs/959bcc54-6bba-11e5-8e3b-5254006e85c2/show_text&lt;/a&gt; - not exactly the same, but the same test and at umount.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;13:06:41:Lustre: DEBUG MARKER: umount -d /mnt/ost1
13:06:41:Lustre: Failing over lustre-OST0000
13:06:41:Lustre: lustre-OST0000: Not available for connect from 10.1.4.19@tcp (stopping)
13:06:41:LustreError: 6276:0:(genops.c:815:class_export_put()) ASSERTION( __v &amp;gt; 0 &amp;amp;&amp;amp; __v &amp;lt; ((int)0x5a5a5a5a5a5a5a5a) ) failed: value: 0
13:06:41:LustreError: 6276:0:(genops.c:815:class_export_put()) LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="132843" author="jamesanunez" created="Fri, 6 Nov 2015 16:05:02 +0000"  >&lt;p&gt;Similar failure on OST unmount on master at&lt;br/&gt;
2015-11-05 00:43:03 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/008f7a1e-839f-11e5-b1ba-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/008f7a1e-839f-11e5-b1ba-5254006e85c2&lt;/a&gt;&lt;br/&gt;
2015-11-11 19:37:53 - &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/fa66faee-88ef-11e5-8ba4-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/fa66faee-88ef-11e5-8ba4-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="141959" author="adilger" created="Thu, 11 Feb 2016 12:31:28 +0000"  >&lt;p&gt;Recent failures of this test:&lt;br/&gt;
&lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/260b405e-cf8d-11e5-9923-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/260b405e-cf8d-11e5-9923-5254006e85c2&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;09:37:21:Lustre: Failing over lustre-OST0000
09:37:21:general protection fault: 0000 [#1] SMP 
09:37:21:Pid: 10695, comm: umount Not tainted 2.6.32-573.12.1.el6_lustre.gd68d18b.x86_64 #1 Red Hat KVM
09:37:21:RIP: 0010:[&amp;lt;ffffffffa076151b&amp;gt;]  [&amp;lt;ffffffffa076151b&amp;gt;] ldlm_resource_putref_locked+0x1b/0x3f0 [ptlrpc]
09:37:21:Process umount (pid: 10695, threadinfo ffff88005b4d0000, task ffff88005cac8040)
09:37:21:Call Trace:
09:37:21: [&amp;lt;ffffffffa0761902&amp;gt;] ldlm_res_hop_put_locked+0x12/0x20 [ptlrpc]
09:37:21: [&amp;lt;ffffffffa0478779&amp;gt;] cfs_hash_for_each_relax+0x199/0x350 [libcfs]
09:37:21: [&amp;lt;ffffffffa047a6ac&amp;gt;] cfs_hash_for_each_nolock+0x8c/0x1d0 [libcfs]
09:37:21: [&amp;lt;ffffffffa075ff30&amp;gt;] ldlm_namespace_cleanup+0x30/0xc0 [ptlrpc]
09:37:21: [&amp;lt;ffffffffa0762444&amp;gt;] __ldlm_namespace_free+0x54/0x560 [ptlrpc]
09:37:21: [&amp;lt;ffffffffa07629bf&amp;gt;] ldlm_namespace_free_prior+0x6f/0x220 [ptlrpc]
09:37:21: [&amp;lt;ffffffffa0de85bb&amp;gt;] ofd_device_fini+0x7b/0x260 [ofd]
09:37:21: [&amp;lt;ffffffffa056f282&amp;gt;] class_cleanup+0x572/0xd20 [obdclass]
09:37:21: [&amp;lt;ffffffffa0571906&amp;gt;] class_process_config+0x1ed6/0x2830 [obdclass]
09:37:21: [&amp;lt;ffffffffa057271f&amp;gt;] class_manual_cleanup+0x4bf/0x8e0 [obdclass]
09:37:21: [&amp;lt;ffffffffa05aae3c&amp;gt;] server_put_super+0xa0c/0xed0 [obdclass]
09:37:21: [&amp;lt;ffffffff811944bb&amp;gt;] generic_shutdown_super+0x5b/0xe0
09:37:21: [&amp;lt;ffffffff811945a6&amp;gt;] kill_anon_super+0x16/0x60
09:37:21: [&amp;lt;ffffffffa05755d6&amp;gt;] lustre_kill_super+0x36/0x60 [obdclass]
09:37:21: [&amp;lt;ffffffff81194d47&amp;gt;] deactivate_super+0x57/0x80
09:37:21: [&amp;lt;ffffffff811b4d3f&amp;gt;] mntput_no_expire+0xbf/0x110
09:37:21: [&amp;lt;ffffffff811b588b&amp;gt;] sys_umount+0x7b/0x3a0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;and &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/008f7a1e-839f-11e5-b1ba-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/008f7a1e-839f-11e5-b1ba-5254006e85c2&lt;/a&gt;&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;01:11:57:Lustre: DEBUG MARKER: umount -d /mnt/ost4
01:11:57:LustreError: 15917:0:(hash.c:554:cfs_hash_bd_del_locked()) ASSERTION( bd-&amp;gt;bd_bucket-&amp;gt;hsb_count &amp;gt; 0 ) failed: 
01:11:57:LustreError: 15917:0:(hash.c:554:cfs_hash_bd_del_locked()) LBUG
01:11:57:Pid: 15917, comm: umount
01:11:57:Call Trace:
01:11:57: [&amp;lt;ffffffffa046c875&amp;gt;] libcfs_debug_dumpstack+0x55/0x80 [libcfs]
01:11:57: [&amp;lt;ffffffffa046ce77&amp;gt;] lbug_with_loc+0x47/0xb0 [libcfs]
01:11:57: [&amp;lt;ffffffffa047de60&amp;gt;] cfs_hash_bd_del_locked+0xc0/0x100 [libcfs]
01:11:57: [&amp;lt;ffffffffa07663d8&amp;gt;] __ldlm_resource_putref_final+0x48/0xc0 [ptlrpc]
01:11:57: [&amp;lt;ffffffffa076652d&amp;gt;] ldlm_resource_putref_locked+0xdd/0x3f0 [ptlrpc]
01:11:57: [&amp;lt;ffffffffa0766852&amp;gt;] ldlm_res_hop_put_locked+0x12/0x20 [ptlrpc]
01:11:57: [&amp;lt;ffffffffa05782cf&amp;gt;] ? class_manual_cleanup+0x4bf/0x8e0 [obdclass]
01:11:57: [&amp;lt;ffffffffa05559f6&amp;gt;] ? class_name2dev+0x56/0xe0 [obdclass]
01:11:57: [&amp;lt;ffffffffa05b019c&amp;gt;] ? server_put_super+0xa0c/0xed0 [obdclass]
01:11:57: [&amp;lt;ffffffff811b0116&amp;gt;] ? invalidate_inodes+0xf6/0x190
01:11:57: [&amp;lt;ffffffff8119437b&amp;gt;] ? generic_shutdown_super+0x5b/0xe0
01:11:57: [&amp;lt;ffffffff81194466&amp;gt;] ? kill_anon_super+0x16/0x60
01:11:57: [&amp;lt;ffffffffa057b186&amp;gt;] ? lustre_kill_super+0x36/0x60 [obdclass]
01:11:57: [&amp;lt;ffffffff81194c07&amp;gt;] ? deactivate_super+0x57/0x80
01:11:57: [&amp;lt;ffffffff811b4a7f&amp;gt;] ? mntput_no_expire+0xbf/0x110
01:11:57: [&amp;lt;ffffffff811b55cb&amp;gt;] ? sys_umount+0x7b/0x3a0
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="158579" author="adilger" created="Wed, 13 Jul 2016 03:22:36 +0000"  >&lt;p&gt;Another failure: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/0cc7cb54-4872-11e6-8968-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/0cc7cb54-4872-11e6-8968-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="158628" author="jhammond" created="Wed, 13 Jul 2016 16:10:10 +0000"  >&lt;p&gt;Andreas, this issue is an oops, your link is for a softlockup. See &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-8392&quot; title=&quot;sanity test_27z: soft lockup - CPU#0 stuck for 22s! [ptlrpcd_rcv:6145]&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-8392&quot;&gt;&lt;del&gt;LU-8392&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="167421" author="niu" created="Tue, 27 Sep 2016 05:32:41 +0000"  >&lt;p&gt;Another failure: &lt;a href=&quot;https://testing.hpdd.intel.com/test_sets/fb94d724-8420-11e6-a35f-5254006e85c2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://testing.hpdd.intel.com/test_sets/fb94d724-8420-11e6-a35f-5254006e85c2&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="168182" author="green" created="Tue, 4 Oct 2016 15:07:00 +0000"  >&lt;p&gt;This last one seems to be a different failure that&apos;s worth filing a separate ticket for.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxnjj:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>