<!-- 
RSS generated by JIRA (9.4.14#940014-sha1:734e6822bbf0d45eff9af51f82432957f73aa32c) at Sat Feb 10 01:08:13 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
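<!--
A minimal sketch of the field restriction described above. The 'field' parameter names come
from the note in this header; the /si/jira.issueviews:issue-xml/ view path is an assumption
based on common JIRA export URLs and should be verified against the instance.

```shell
# Hypothetical example: build an issue-XML request URL limited to key and summary.
# The view path below is assumed, not confirmed by this export; adjust as needed.
BASE="https://jira.whamcloud.com/si/jira.issueviews:issue-xml/LU-557/LU-557.xml"
URL="${BASE}?field=key&field=summary"
echo "$URL"
# Fetch with, e.g.: curl -s "$URL"
```
-->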
<rss version="0.92">
<channel>
    <title>Whamcloud Community JIRA</title>
    <link>https://jira.whamcloud.com</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>9.4.14</version>
        <build-number>940014</build-number>
        <build-date>05-12-2023</build-date>
    </build-info>


<item>
            <title>[LU-557] llmount.sh: lustre_msghdr_get_flags(): ASSERTION(0) failed: incorrect message magic: 00000000</title>
                <link>https://jira.whamcloud.com/browse/LU-557</link>
                <project id="10000" key="LU">Lustre</project>
                    <description>&lt;p&gt;Commit: adc0fa37a44fce26e4c161176612c3c360a4dfbf&lt;/p&gt;

&lt;p&gt;I was trying to mount Lustre with a separate MGS device on my VM:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;[root@h221f tests]# MGSDEV=/tmp/lustre-mgs ./llmount.sh 
Stopping clients: h221f /mnt/lustre (opts:)
Stopping clients: h221f /mnt/lustre2 (opts:)
Loading modules from /root/lustre-release/lustre/tests/..
debug=0x33f0404
subsystem_debug=0xffb7e3ff
../lnet/lnet/lnet options: &apos;networks=tcp(eth1) accept=all&apos;
gss/krb5 is not supported
quota/lquota options: &apos;hash_lqs_cur_bits=3&apos;
Formatting mgs, mds, osts

   Permanent disk data:
Target:     MGS
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x74
              (MGS needs_index first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters:

formatting backing filesystem ldiskfs on /dev/loop0
	target name  MGS
	4k blocks     50000
	options        -q -O uninit_bg,dir_nlink,huge_file,flex_bg -E lazy_journal_init -F
mkfs_cmd = mke2fs -j -b 4096 -L MGS  -q -O uninit_bg,dir_nlink,huge_file,flex_bg -E lazy_journal_init -F /dev/loop0 50000
Writing CONFIGS/mountdata
Format mds1: /tmp/lustre-mdt1
Format ost1: /tmp/lustre-ost1
Format ost2: /tmp/lustre-ost2
Checking servers environments
Checking clients h221f environments
Loading modules from /root/lustre-release/lustre/tests/..
debug=0x33f0404
subsystem_debug=0xffb7e3ff
gss/krb5 is not supported
Setup mgs, mdt, osts
Starting mgs: -o loop,user_xattr,acl  /tmp/lustre-mgs /mnt/mgs
Read from remote host 192.168.56.4: Connection reset by peer
Connection to 192.168.56.4 closed.
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I&apos;ll keep the crash dump for a few days.  From the crash log:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;Lustre: 2828:0:(debug.c:323:libcfs_debug_str2mask()) You are trying to use a numerical value for the mask - this will be deprecated in a future release.
Lustre: OBD class driver, http://wiki.whamcloud.com/
Lustre:         Lustre Version: 2.0.66
Lustre:         Build Version: ../lustre/scripts--PRISTINE-2.6.18-238.12.1.el5.2943701
Lustre: Lustre LU module (e0eb6020).
Lustre: Added LNI 192.168.56.4@tcp [8/256/0/180]
Lustre: Accept all, port 988
Lustre: Lustre OSC module (e1157ee0).
Lustre: Lustre LOV module (e11f3e40).
init dynlocks cache
ldiskfs created from ext4-2.6-rhel5
Lustre: Lustre client module (e15a5be0).
LDISKFS-fs (loop0): warning: maximal mount count reached, running e2fsck is recommended
LDISKFS-fs (loop0): mounted filesystem with ordered data mode
LDISKFS-fs (loop0): warning: maximal mount count reached, running e2fsck is recommended
LDISKFS-fs (loop0): mounted filesystem with ordered data mode
LDISKFS-fs (loop0): warning: maximal mount count reached, running e2fsck is recommended
LDISKFS-fs (loop0): mounted filesystem with ordered data mode
LDISKFS-fs (loop0): warning: maximal mount count reached, running e2fsck is recommended
LDISKFS-fs (loop0): mounted filesystem with ordered data mode
Lustre: 3578:0:(debug.c:323:libcfs_debug_str2mask()) You are trying to use a numerical value for the mask - this will be deprecated in a future release.
Lustre: 3578:0:(debug.c:323:libcfs_debug_str2mask()) Skipped 1 previous similar message
LDISKFS-fs (loop0): mounted filesystem with ordered data mode
LDISKFS-fs (loop0): mounted filesystem with ordered data mode
Lustre: MGS MGS started
Lustre: 3702:0:(sec.c:1474:sptlrpc_import_sec_adapt()) import MGC192.168.56.4@tcp-&amp;gt;MGC192.168.56.4@tcp_0 netid 90000: select flavor null
Lustre: 3727:0:(ldlm_lib.c:874:target_handle_connect()) MGS: connection from 739be4f1-ebe7-82f6-16d5-337bd19bdfcd@0@lo t0 exp 00000000 cur 1312188896 last 0
LustreError: 3727:0:(pack_generic.c:800:lustre_msghdr_get_flags()) ASSERTION(0) failed: incorrect message magic: 00000000
LustreError: 3727:0:(pack_generic.c:800:lustre_msghdr_get_flags()) LBUG
Pid: 3727, comm: ll_mgs_02

Call Trace:
 [&amp;lt;00000000e0be15b0&amp;gt;] libcfs_debug_dumpstack+0x50/0x70 [libcfs]
 [&amp;lt;00000000e0be1d4d&amp;gt;] lbug_with_loc+0x6d/0xd0 [libcfs]
 [&amp;lt;00000000e0f40a00&amp;gt;] reply_in_callback+0x0/0x850 [ptlrpc]
 [&amp;lt;00000000e0f37e22&amp;gt;] lustre_msghdr_get_flags+0x82/0x90 [ptlrpc]
 [&amp;lt;00000000e0f40dc0&amp;gt;] reply_in_callback+0x3c0/0x850 [ptlrpc]
 [&amp;lt;00000000e1203851&amp;gt;] ldiskfs_mark_iloc_dirty+0x341/0x560 [ldiskfs]
 [&amp;lt;00000000e0f40a00&amp;gt;] reply_in_callback+0x0/0x850 [ptlrpc]
 [&amp;lt;00000000e0f3f367&amp;gt;] ptlrpc_master_callback+0x47/0xa0 [ptlrpc]
 [&amp;lt;00000000e0c33a0a&amp;gt;] lnet_enq_event_locked+0x5a/0xb0 [lnet]
 [&amp;lt;00000000e0c33ad8&amp;gt;] lnet_finalize+0x78/0x200 [lnet]
 [&amp;lt;00000000e0c42fcf&amp;gt;] lolnd_recv+0x5f/0x100 [lnet]
 [&amp;lt;00000000e0c37e09&amp;gt;] lnet_ni_recv+0xf9/0x260 [lnet]
 [&amp;lt;00000000e0c38059&amp;gt;] lnet_recv_put+0xe9/0x130 [lnet]
 [&amp;lt;00000000e0c3e560&amp;gt;] lnet_parse+0x14e0/0x2620 [lnet]
 [&amp;lt;00000000c048ca3d&amp;gt;] dput+0x72/0xed
 [&amp;lt;00000000e0db3baf&amp;gt;] llog_free_handle+0x9f/0x330 [obdclass]
 [&amp;lt;00000000c0490402&amp;gt;] mntput_no_expire+0x11/0x6a
 [&amp;lt;00000000e0b914f5&amp;gt;] pop_ctxt+0xe5/0x320 [lvfs]
 [&amp;lt;00000000e0dcb810&amp;gt;] __llog_ctxt_put+0x20/0x2e0 [obdclass]
 [&amp;lt;00000000e0db5c82&amp;gt;] llog_close+0x72/0x440 [obdclass]
 [&amp;lt;00000000e0c430b1&amp;gt;] lolnd_send+0x41/0x90 [lnet]
 [&amp;lt;00000000e0c37c9b&amp;gt;] lnet_ni_send+0x4b/0xc0 [lnet]
 [&amp;lt;00000000e0c3a04c&amp;gt;] lnet_send+0x1fc/0xd90 [lnet]
 [&amp;lt;00000000e0dcb810&amp;gt;] __llog_ctxt_put+0x20/0x2e0 [obdclass]
 [&amp;lt;00000000e0c40665&amp;gt;] LNetPut+0x565/0xef0 [lnet]
 [&amp;lt;00000000e0f2d764&amp;gt;] ptl_send_buf+0x1f4/0xab0 [ptlrpc]
 [&amp;lt;00000000e0f3ec66&amp;gt;] lustre_msg_set_timeout+0x96/0x110 [ptlrpc]
 [&amp;lt;00000000e0f2e26c&amp;gt;] ptlrpc_send_reply+0x24c/0x8b0 [ptlrpc]
 [&amp;lt;00000000e0ee0874&amp;gt;] target_send_reply+0x94/0x910 [ptlrpc]
 [&amp;lt;00000000e0f3db6c&amp;gt;] lustre_msg_get_conn_cnt+0xfc/0x1e0 [ptlrpc]
 [&amp;lt;00000000e124b51e&amp;gt;] mgs_handle+0x31e/0x1f10 [mgs]
 [&amp;lt;00000000e0f3718c&amp;gt;] lustre_msg_get_opc+0x10c/0x1f0 [ptlrpc]
 [&amp;lt;00000000e0f51b67&amp;gt;] ptlrpc_main+0x1217/0x27b0 [ptlrpc]
 [&amp;lt;00000000c044cf34&amp;gt;] audit_syscall_exit+0x2d4/0x2ea
 [&amp;lt;00000000e0f50950&amp;gt;] ptlrpc_main+0x0/0x27b0 [ptlrpc]
 [&amp;lt;00000000c0405c87&amp;gt;] kernel_thread_helper+0x7/0x10
 &amp;lt;IRQ&amp;gt; 
Kernel panic - not syncing: LBUG
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment>A local CentOS 5 i686 VM, with a separate MGS device.</environment>
        <key id="11426">LU-557</key>
            <summary>llmount.sh: lustre_msghdr_get_flags(): ASSERTION(0) failed: incorrect message magic: 00000000</summary>
                <type id="1" iconUrl="https://jira.whamcloud.com/secure/viewavatar?size=xsmall&amp;avatarId=11303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="4" iconUrl="https://jira.whamcloud.com/images/icons/priorities/minor.svg">Minor</priority>
                        <status id="5" iconUrl="https://jira.whamcloud.com/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="wc-triage">WC Triage</assignee>
                                    <reporter username="liwei">Li Wei</reporter>
                        <labels>
                    </labels>
                <created>Mon, 1 Aug 2011 05:12:54 +0000</created>
                <updated>Thu, 25 Apr 2013 09:34:46 +0000</updated>
                            <resolved>Thu, 25 Apr 2013 09:34:46 +0000</resolved>
                                    <version>Lustre 2.1.0</version>
                                    <fixVersion>Lustre 2.1.0</fixVersion>
                                        <due></due>
                            <votes>0</votes>
                                    <watches>2</watches>
                                                                            <comments>
                            <comment id="18581" author="tappro" created="Mon, 1 Aug 2011 06:17:03 +0000"  >&lt;p&gt;This is a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-539&quot; title=&quot;small size for RMF_CONNECT_DATA caused out of bound memory crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-539&quot;&gt;&lt;del&gt;LU-539&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="57018" author="adilger" created="Thu, 25 Apr 2013 09:34:46 +0000"  >&lt;p&gt;Per Mike&apos;s last comment, this is a duplicate of &lt;a href=&quot;https://jira.whamcloud.com/browse/LU-539&quot; title=&quot;small size for RMF_CONNECT_DATA caused out of bound memory crash&quot; class=&quot;issue-link&quot; data-issue-key=&quot;LU-539&quot;&gt;&lt;del&gt;LU-539&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="11402">LU-539</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_10890" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10390" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hzvp07:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10090" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>7879</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10060" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Severity</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10022"><![CDATA[3]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>