<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 02:07:40 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-7297] BUG: spinlock bad magic, probably on oh-&gt;oh_lock</title>
                <link>https://jira.whamcloud.com/browse/LU-7297</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;After building lustre, attempted to run in-tree on a single node.  Ran&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;FSTYPE=zfs llmount.sh
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;mkfs.lustre completed without errors.&lt;br/&gt;
mount produced following output in dmesg:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: Lustre: Build Version: v2_7_54_0-g99fd511-CHANGED-2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug
LNet: Added LNI 192.168.122.12@tcp [8/256/0/180]
LNet: Accept secure, port 988
Lustre: Echo OBD driver; http://www.lustre.org/
zpool used greatest stack depth: 2928 bytes left
txg_sync used greatest stack depth: 2872 bytes left
BUG: spinlock bad magic on CPU#0, mount.lustre/4152 (Not tainted)
lock: ffff88002713a770, .magic: 00000000, .owner: mount.lustre/4152, .owner_cpu: 0
Pid: 4152, comm: mount.lustre Not tainted 2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug #1
Call Trace:
[&amp;lt;ffffffff812c3d2a&amp;gt;] ? spin_bug+0xaa/0x100
[&amp;lt;ffffffff812c3de5&amp;gt;] ? _raw_spin_unlock+0x65/0xa0
[&amp;lt;ffffffff81561a4b&amp;gt;] ? _spin_unlock+0x2b/0x40
[&amp;lt;ffffffffa0a8d32c&amp;gt;] ? lprocfs_oh_tally+0x3c/0x50 [obdclass]
[&amp;lt;ffffffffa0e54319&amp;gt;] ? record_start_io+0x39/0x90 [osd_zfs]
[&amp;lt;ffffffffa0e5601d&amp;gt;] ? osd_write+0x1ad/0x3a0 [osd_zfs]
[&amp;lt;ffffffffa0ab747d&amp;gt;] ? dt_record_write+0x3d/0x130 [obdclass]
[&amp;lt;ffffffffa0a97895&amp;gt;] ? local_oid_storage_init+0xe55/0x1410 [obdclass]
[&amp;lt;ffffffffa11226a4&amp;gt;] ? mgs_fs_setup+0xa4/0x4b0 [mgs]
[&amp;lt;ffffffff8156190b&amp;gt;] ? _read_unlock+0x2b/0x40
[&amp;lt;ffffffffa1121aaf&amp;gt;] ? mgs_init0+0xeff/0x17c0 [mgs]
[&amp;lt;ffffffff8118f215&amp;gt;] ? kmem_cache_alloc_trace+0x1c5/0x2e0
[&amp;lt;ffffffff81545b30&amp;gt;] ? kmemleak_alloc+0x20/0xd0
[&amp;lt;ffffffffa111a399&amp;gt;] ? mgs_type_start+0x19/0x20 [mgs]
[&amp;lt;ffffffffa1122480&amp;gt;] ? mgs_device_alloc+0x110/0x1f0 [mgs]
[&amp;lt;ffffffffa0a9d19f&amp;gt;] ? obd_setup+0x1bf/0x290 [obdclass]
[&amp;lt;ffffffffa0a9d477&amp;gt;] ? class_setup+0x207/0x870 [obdclass]
[&amp;lt;ffffffffa0aa4bfc&amp;gt;] ? class_process_config+0x113c/0x2710 [obdclass]
[&amp;lt;ffffffff8118c983&amp;gt;] ? cache_alloc_debugcheck_after+0xf3/0x230
[&amp;lt;ffffffff81545b30&amp;gt;] ? kmemleak_alloc+0x20/0xd0
[&amp;lt;ffffffff8118ffdb&amp;gt;] ? __kmalloc+0x21b/0x330
[&amp;lt;ffffffffa0aaaf98&amp;gt;] ? do_lcfg+0x198/0x9c0 [obdclass]
[&amp;lt;ffffffffa0aab422&amp;gt;] ? do_lcfg+0x622/0x9c0 [obdclass]
[&amp;lt;ffffffffa0aab854&amp;gt;] ? lustre_start_simple+0x94/0x200 [obdclass]
[&amp;lt;ffffffffa0ae0ae1&amp;gt;] ? server_fill_super+0x1161/0x1690 [obdclass]
[&amp;lt;ffffffffa0ab0c58&amp;gt;] ? lustre_fill_super+0x5d8/0xa80 [obdclass]
[&amp;lt;ffffffffa0ab0680&amp;gt;] ? lustre_fill_super+0x0/0xa80 [obdclass]
[&amp;lt;ffffffff811af06f&amp;gt;] ? get_sb_nodev+0x5f/0xa0
[&amp;lt;ffffffffa0aa8345&amp;gt;] ? lustre_get_sb+0x25/0x30 [obdclass]
[&amp;lt;ffffffff811ae61b&amp;gt;] ? vfs_kern_mount+0x7b/0x1b0
[&amp;lt;ffffffff811ae7c2&amp;gt;] ? do_kern_mount+0x52/0x130
[&amp;lt;ffffffff811d0b0b&amp;gt;] ? do_mount+0x2fb/0x920
[&amp;lt;ffffffff811d11c0&amp;gt;] ? sys_mount+0x90/0xe0
[&amp;lt;ffffffff8100b072&amp;gt;] ? system_call_fastpath+0x16/0x1b
Lustre: Setting parameter lustre-MDT0000.mdt.identity_upcall in log lustre-MDT0000
Lustre: ctl-lustre-MDT0000: No data found on store. Initialize space
Lustre: lustre-MDT0000: new disk, initializing
mount.lustre used greatest stack depth: 2536 bytes left
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Code that triggered the BUG:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;void lprocfs_oh_tally(struct obd_histogram *oh, unsigned int value)
{
        if (value &amp;gt;= OBD_HIST_MAX)
                value = OBD_HIST_MAX - 1;

        spin_lock(&amp;amp;oh-&amp;gt;oh_lock);
        oh-&amp;gt;oh_buckets[value]++;
        spin_unlock(&amp;amp;oh-&amp;gt;oh_lock);
}
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>Linux kernel 2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug&lt;br/&gt;
Lustre 2.7.54 plus 3 patches, see description</environment>
        <key id="32643">LU-7297</key>
            <summary>BUG: spinlock bad magic, probably on oh-&gt;oh_lock</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="1">Fixed</resolution>
                                        <assignee username="emoly.liu">Emoly Liu</assignee>
                                    <reporter username="ofaaland">Olaf Faaland</reporter>
                        <labels>
                            <label>llnl</label>
                            <label>patch</label>
                    </labels>
                <created>Wed, 14 Oct 2015 17:44:24 +0000</created>
                <updated>Fri, 4 Dec 2015 18:22:52 +0000</updated>
                            <resolved>Fri, 4 Dec 2015 18:22:52 +0000</resolved>
                                                    <fixVersion>Lustre 2.8.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>4</watches>
                                                                            <comments>
                            <comment id="130402" author="ofaaland" created="Wed, 14 Oct 2015 17:45:25 +0000"  >
&lt;p&gt;Lustre involved is:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;* 99fd511 LU-6765 obdecho: initialize cs_lu.ls_purge_mutex
* 4a604d2 LU-6816 utils: remove libzfs_load_module() call
* 4dbb82b LU-6747 osd-zfs: initialize obd_statfs in osd_statfs()
* d20d17e New tag 2.7.54
* 2b5ebbb LU-6599 header: Change erroneous GPLv3 header to GPLv2
* c6aab2c LU-6068 misc: update old URLs to hpdd.intel.com
* 8badb39 LU-6389 llite: restart short read/write for normal IO
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I will check with current master and report back in a comment whether the issue has already been resolved and the ticket can be closed.&lt;/p&gt;</comment>
                            <comment id="130405" author="ofaaland" created="Wed, 14 Oct 2015 17:56:33 +0000"  >&lt;p&gt;Running under a kernel with lock debug features enabled.&lt;/p&gt;</comment>
                            <comment id="130409" author="ofaaland" created="Wed, 14 Oct 2015 18:05:41 +0000"  >&lt;p&gt;Rebuilt from current master and the problem does not occur.  False alarm, sorry.  The ticket can be closed.&lt;/p&gt;</comment>
                            <comment id="130417" author="jgmitter" created="Wed, 14 Oct 2015 18:18:32 +0000"  >&lt;p&gt;Thanks Olaf, we will close the ticket.&lt;/p&gt;</comment>
                            <comment id="130452" author="ofaaland" created="Wed, 14 Oct 2015 22:36:09 +0000"  >&lt;p&gt;I find it does occur with current master after all.  The kernel doesn&apos;t report it every time.&lt;/p&gt;

&lt;p&gt;The BUG report that occurs with current master Lustre is&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;BUG: spinlock bad magic on CPU#0, mount.lustre/4407 (Not tainted)
lock: ffff880028996770, .magic: 00000000, .owner: mount.lustre/4407, .owner_cpu: 0
Pid: 4407, comm: mount.lustre Not tainted 2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug #1
Call Trace:
[&amp;lt;ffffffff812c3d2a&amp;gt;] ? spin_bug+0xaa/0x100
[&amp;lt;ffffffff812c3de5&amp;gt;] ? _raw_spin_unlock+0x65/0xa0
[&amp;lt;ffffffff81561a4b&amp;gt;] ? _spin_unlock+0x2b/0x40
[&amp;lt;ffffffffa0a8f41c&amp;gt;] ? lprocfs_oh_tally+0x3c/0x50 [obdclass]
[&amp;lt;ffffffffa0e8e869&amp;gt;] ? record_start_io+0x39/0x90 [osd_zfs]
[&amp;lt;ffffffffa0e908ad&amp;gt;] ? osd_write+0x1ad/0x3a0 [osd_zfs]
[&amp;lt;ffffffffa0aba6bd&amp;gt;] ? dt_record_write+0x3d/0x130 [obdclass]
[&amp;lt;ffffffffa0a99f34&amp;gt;] ? local_oid_storage_init+0xeb4/0x14a0 [obdclass]
[&amp;lt;ffffffffa11495c4&amp;gt;] ? mgs_fs_setup+0xa4/0x4b0 [mgs]
[&amp;lt;ffffffff8156190b&amp;gt;] ? _read_unlock+0x2b/0x40
[&amp;lt;ffffffffa1148a8f&amp;gt;] ? mgs_init0+0xecf/0x1790 [mgs]
[&amp;lt;ffffffff81545b30&amp;gt;] ? kmemleak_alloc+0x20/0xd0
[&amp;lt;ffffffffa11416b9&amp;gt;] ? mgs_type_start+0x19/0x20 [mgs]
[&amp;lt;ffffffffa11493e8&amp;gt;] ? mgs_device_alloc+0x98/0x140 [mgs]
[&amp;lt;ffffffffa0a9f42f&amp;gt;] ? obd_setup+0x1bf/0x290 [obdclass]
[&amp;lt;ffffffffa0a9f758&amp;gt;] ? class_setup+0x258/0x930 [obdclass]
[&amp;lt;ffffffffa0aa6011&amp;gt;] ? class_process_config+0x1151/0x26d0 [obdclass]
[&amp;lt;ffffffff8118c983&amp;gt;] ? cache_alloc_debugcheck_after+0xf3/0x230
[&amp;lt;ffffffff81545b30&amp;gt;] ? kmemleak_alloc+0x20/0xd0
[&amp;lt;ffffffff8118ffdb&amp;gt;] ? __kmalloc+0x21b/0x330
[&amp;lt;ffffffffa0ab1198&amp;gt;] ? do_lcfg+0x198/0xb60 [obdclass]
[&amp;lt;ffffffffa0ab12cb&amp;gt;] ? do_lcfg+0x2cb/0xb60 [obdclass]
[&amp;lt;ffffffffa0ab1bf4&amp;gt;] ? lustre_start_simple+0x94/0x200 [obdclass]
[&amp;lt;ffffffffa0ae206e&amp;gt;] ? server_fill_super+0x115e/0x1688 [obdclass]
[&amp;lt;ffffffffa0ab3e28&amp;gt;] ? lustre_fill_super+0x338/0x990 [obdclass]
[&amp;lt;ffffffffa0ab3af0&amp;gt;] ? lustre_fill_super+0x0/0x990 [obdclass]
[&amp;lt;ffffffff811af06f&amp;gt;] ? get_sb_nodev+0x5f/0xa0
[&amp;lt;ffffffffa0aab195&amp;gt;] ? lustre_get_sb+0x25/0x30 [obdclass]
[&amp;lt;ffffffff811ae61b&amp;gt;] ? vfs_kern_mount+0x7b/0x1b0
[&amp;lt;ffffffff811ae7c2&amp;gt;] ? do_kern_mount+0x52/0x130
[&amp;lt;ffffffff811d0b0b&amp;gt;] ? do_mount+0x2fb/0x920
[&amp;lt;ffffffff811d11c0&amp;gt;] ? sys_mount+0x90/0xe0
[&amp;lt;ffffffff8100b072&amp;gt;] ? system_call_fastpath+0x16/0x1b
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;record_start_io() calls lprocfs_oh_tally with osd-&amp;gt;od_brw_stats.hist.&lt;br/&gt;
I don&apos;t see that od_brw_stats.hist[i].oh_lock is initialized in the ZFS osd.&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[faaland1@hefe branch:follow_master lustre] $git grep &apos;spin_lock_init.*oh_lock&apos;
lustre/ldlm/ldlm_lib.c: spin_lock_init(&amp;amp;cli-&amp;gt;cl_read_rpc_hist.oh_lock);
lustre/ldlm/ldlm_lib.c: spin_lock_init(&amp;amp;cli-&amp;gt;cl_write_rpc_hist.oh_lock);
lustre/ldlm/ldlm_lib.c: spin_lock_init(&amp;amp;cli-&amp;gt;cl_read_page_hist.oh_lock);
lustre/ldlm/ldlm_lib.c: spin_lock_init(&amp;amp;cli-&amp;gt;cl_write_page_hist.oh_lock);
lustre/ldlm/ldlm_lib.c: spin_lock_init(&amp;amp;cli-&amp;gt;cl_read_offset_hist.oh_lock);
lustre/ldlm/ldlm_lib.c: spin_lock_init(&amp;amp;cli-&amp;gt;cl_write_offset_hist.oh_lock);
lustre/ldlm/ldlm_lib.c: spin_lock_init(&amp;amp;cli-&amp;gt;cl_mod_rpcs_hist.oh_lock);
lustre/mdt/mdt_lproc.c:         spin_lock_init(&amp;amp;mdt-&amp;gt;mdt_rename_stats.hist[i].oh_lock);
lustre/osd-ldiskfs/osd_lproc.c:         spin_lock_init(&amp;amp;osd-&amp;gt;od_brw_stats.hist[i].oh_lock);
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The ldiskfs OSD initializes the locks via&lt;br/&gt;
osd_device_init-&amp;gt;osd_lprocfs_init-&amp;gt;osd_stats_init-&amp;gt;spin_lock_init&lt;/p&gt;

&lt;p&gt;But the zfs OSD returns from osd_device_init() immediately without doing anything.  It&apos;s not clear to me where the locks should be initialized, if that&apos;s not the place.&lt;/p&gt;</comment>
                            <comment id="130534" author="jgmitter" created="Thu, 15 Oct 2015 17:13:24 +0000"  >&lt;p&gt;Hi Emoly,&lt;br/&gt;
Could you have a look at this one?&lt;br/&gt;
Thanks.&lt;br/&gt;
Joe&lt;/p&gt;</comment>
                            <comment id="131239" author="gerrit" created="Thu, 22 Oct 2015 20:33:24 +0000"  >&lt;p&gt;Olaf Faaland-LLNL (faaland1@llnl.gov) uploaded a new patch: &lt;a href=&quot;http://review.whamcloud.com/16919&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16919&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7297&quot; title=&quot;BUG: spinlock bad magic, probably on oh-&amp;gt;oh_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7297&quot;&gt;&lt;del&gt;LU-7297&lt;/del&gt;&lt;/a&gt; osd-zfs: initialize oh_lock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: 1&lt;br/&gt;
Commit: d186ea11a1fb76d6532f6bf3eaa15105d5e617b0&lt;/p&gt;</comment>
                            <comment id="131241" author="ofaaland" created="Thu, 22 Oct 2015 20:34:40 +0000"  >&lt;p&gt;Somehow I&apos;d overlooked osd-zfs/osd_lproc.c:osd_stats_init().  Patch submitted.&lt;/p&gt;</comment>
                            <comment id="135254" author="gerrit" created="Fri, 4 Dec 2015 17:58:54 +0000"  >&lt;p&gt;Oleg Drokin (oleg.drokin@intel.com) merged in patch &lt;a href=&quot;http://review.whamcloud.com/16919/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://review.whamcloud.com/16919/&lt;/a&gt;&lt;br/&gt;
Subject: &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-7297&quot; title=&quot;BUG: spinlock bad magic, probably on oh-&amp;gt;oh_lock&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-7297&quot;&gt;&lt;del&gt;LU-7297&lt;/del&gt;&lt;/a&gt; osd-zfs: initialize oh_lock&lt;br/&gt;
Project: fs/lustre-release&lt;br/&gt;
Branch: master&lt;br/&gt;
Current Patch Set: &lt;br/&gt;
Commit: f4ea6cd384f152c04c478bf19278130802ad8e67&lt;/p&gt;</comment>
                            <comment id="135261" author="jgmitter" created="Fri, 4 Dec 2015 18:22:52 +0000"  >&lt;p&gt;Landed for 2.8&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzxqgf:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>